Every year significant efforts are directed to the conservation and restoration of natural ecosystems. In order to develop efficient protection strategies for those systems, the mechanisms that lend them their stability have to be identified and understood. Starting with the pioneering work of Elton, food webs have been a central organizing theme in the study of ecosystems. They depict feeding relationships among species in ecological communities. Since their introduction by Elton, they have been the subject of intensive studies from both the theoretical and the experimental perspective. Although recent decades have seen significant progress in the understanding of food web properties, our knowledge remains sketchy.

Until the 1970s the predominant belief of ecologists was that large and highly complex ecosystems were more stable than simpler ones. This view was questioned by May, who showed mathematically that large and complex random communities are inherently unstable. May's results initiated a debate on the stability of ecosystems which lasts to this day and is still far from being over. Several mechanisms have been identified as factors leading to the stability of food webs. One of them is compartmentalization, i.e. the existence of subsets of species in the food web that interact more frequently among themselves than with other species in the system. It was shown that compartments are beneficial to an ecosystem, because they act to buffer the propagation of extinctions. A special case of compartments are the groups of consumers associated with each particular plant species, called component communities. A food web consisting of such communities is stable, because a disturbance associated with fluctuations in species density is confined largely to that species' component community. Generalist consumers constitute another stabilizing factor. A generalist is able to switch from one food source to another one which is more abundant. This switching tends to keep a food web stable, since it allows the abundant species to be kept under control and lets the less common one recover. A process that often occurs in food webs is the top-down control of lower trophic levels by apex predators. The top-level predators play a key role in controlling the population of prey and, as a consequence, they limit the degree to which the prey endanger primary producers. Many population collapses may be traced back to altered top-down forcing regimes associated with the loss of native apex predators or the introduction of exotic species.

Detritus, consisting mostly of bodies or fragments of dead organisms, has long been recognized as one of the important factors in ecology. However, the theories of food webs and trophic dynamics have largely neglected detritus-based chains and have focused almost exclusively on grazing food chains. The "green-world" view of these works has been severely criticized, and it is expected that only merging this approach with one that also accounts for detritus could yield a satisfactory ecological theory. The existing theoretical studies indicate that the flow of energy from detritus to the living chain may increase the extinction cascades. There are also arguments that consumption of prey from the detritus chain may weaken the top-down regulation of stability. Thus, many fundamental questions asked in ecology, concerning the structure of food webs, the length of food chains, and the size and direction of extinction cascades, may have a different form and interpretation when detritus is taken into account.
Food webs consist of species using dispersal as a means of both procuring key resources and avoiding natural enemies. Therefore they are inherently spatial entities, a fact which is also neglected in the vast majority of studies, leading to a rather poor understanding of spatial food web dynamics. A ubiquitous feature of different spatial systems in nature are waves traveling through them. Those waves are of great importance, because they control the speed of many dynamical processes including chemical reactions, epidemic outbreaks and biological evolution. Although great effort has been expended on the description of such waves, a rigorous description is still lacking, mainly due to their sensitivity to different kinds of fluctuations. Thus, any contribution that provides insight into the mechanisms governing the traveling waves is valuable for our understanding of complex systems with spatial structure.

In our recent study a three species food web model with a detritus path was analysed by means of Monte Carlo simulations. Our findings indicate that under certain conditions complex spatio-temporal patterns in the form of density waves appear in the food web. The analysis of those waves is the main goal of this work. The paper is organized as follows. In Section [model] the model is introduced. Simulation results are discussed in Section [results]. Finally, in Section [conclusions] conclusions are drawn.

Fig. [fig:food web 2] illustrates the hypothetical system we are going to investigate. The food web consists of three trophic levels. The basal level species will be called _resources_ (R) within our model. It corresponds to primary producers or autotrophs in real ecosystems, i.e. to organisms able to take energy from the environment (sunlight or inorganic chemicals) and convert it into energy-rich molecules such as carbohydrates. The species at the intermediate level in Fig. [fig:food web 2] will be called _consumers_ (C). It relates to herbivores in real systems, which principally feed on primary producers. The consumers themselves constitute food for the top level species, called _predators_ (P). In general, the predators correspond to carnivores in real systems. The remains of the consumers and predators form detritus (D), which provides nutrient for the resources.

[Fig. [fig:food web 2]: consumers feed on the resources and are themselves food for predators. Detritus consisting of dead consumers and/or predators provides nutrient for resources.]

The model food web was built under the tacit assumption that the conversion of dead fragments into nutrient occurs immediately and without any external help. In reality, dead organisms are broken down and converted into useful chemical products by decomposers. For the sake of simplicity we will neglect them in the present work and address their role for the viability of the whole food web in a forthcoming paper. It should be noted that the model shown in Fig. [fig:food web 2] resembles some real marine food webs, with phytoplankton being the primary producer, zooplankton and fish occupying the higher trophic levels, and bacteria playing the role of decomposers. We are going to investigate the model by making use of an agent-based Monte Carlo simulation.
To this end we put individuals of each species on a square lattice. Each individual is characterized by two parameters: a death rate d and a birth rate b. The death rate determines the probability that an organism dies in a given time step if it is not able to feed. The birth rate stands for the ability of an individual to convert food reserves into breeding success. Theoretically, at each trophic level there may live different species. We will assume, however, that each level is occupied by only one species. Moreover, to keep the model as simple as possible, we do not differentiate between organisms of a given species. In other words, each individual of the species R (or C, P) has exactly the same pair of parameters. Once the individuals are put randomly on the lattice, the system evolves according to the following rules (a schematic implementation is sketched at the end of this section):

1. While populating the nodes we apply an exclusion principle stating that each lattice node may be occupied by at most one agent of a given type. Multiple occupancy of a node by individuals of different species is allowed.
2. If a C appears at a node populated by an R, then R is eaten and removed from the lattice. This may correspond to a herbivore eating a plant. Next, a progeny of C is created with a probability b_C at a randomly chosen node in its Moore neighborhood, provided there is no other C at the new node yet. The rule reflects the fact that in real life there is usually a close correspondence between the food reserves of an individual and its breeding success.
3. If a P appears at a node occupied by a C, an analogous situation takes place. P feeds on C and then produces an offspring with a probability b_P.
4. If there is no R at a node occupied by a C, then the consumer dies with a probability d_C. Its body stays at the node as detritus until used by an R.
5. Similarly, if there is no C at a node populated by a P, the predator dies with a probability d_P and turns into a detritus portion.
6. If an R meets a D at a node, it feeds and then produces offspring at empty nodes in its Moore neighborhood. The eaten detritus is removed from the lattice. While the upper level species C and P are allowed to produce at most one progeny after each breeding success, for the basal species R we assume that it may populate all empty sites (i.e. 8 nodes at most) in the neighborhood after breeding, each of them with the probability b_R. This assumption complies with the fact that the lower the trophic level, the higher its productivity.
7. If there is no D at a node occupied by an R, it dies with a probability d_R and is removed from the lattice. Note that there is no conversion into a D in this case.

[Table [look-up]: look-up table of our model; its entries stand for an empty node, a resource, a consumer, a predator and a dead individual (consumer or predator), respectively. The values in parentheses represent a possible outcome of an action in the neighborhood of a given site.]

As follows from the above rules, a single node of the lattice may be in one of 16 states. Depending on its actual state, different actions may be performed according to the rules. Possible states and outcomes of those actions are summarized in Table [look-up]. Looking at this table one can immediately see that the system exhibits a very rich dynamics. That is why we made some further assumptions. First of all, we take only the occurrence of D at a node into account, not its actual quantity.
As a consequence, if a consumer or a predator dies at a node already containing a D, the detritus content of that node remains one. In other words, we simply assume that the resources are able to absorb all the detritus they find, so its exact quantity does not really matter. A further assumption is that dead resources do not contribute to the nutrient pool.

A living individual in our model does not move on the lattice; it remains localized. The only way of invading new lattice sites is via proliferation. In the simplest version of the model the directions of proliferation are chosen at random. However, it is possible to put some intelligence into the behavior of the species. The impact of different proliferation strategies was already discussed in Ref. . In that work the following strategies have been taken into account:

* a mixture of randomness and exclusion, i.e. the aforementioned basic rules;
* consumers put their progeny only on a site occupied by R, i.e. the direction of their invasion is driven by food availability; if there is no R in the neighborhood of a C, no offspring is produced; the other species follow the basic strategy;
* consumers put their progeny on a site not occupied by P; the other species follow the basic strategy;
* a mixture of the above strategies, i.e. consumers put their progeny on a site occupied by R, provided it is not occupied by P at the same time; the other species follow the basic strategy;
* predators put their progeny only on a site occupied by C; the other species follow the basic strategy;
* predators put their progeny only on a site occupied by C, whereas consumers avoid sites occupied by P; the resources follow the basic strategy.

If not stated otherwise, the results presented in the next sections of this paper have been obtained with the basic strategy. However, we will refer to the other strategies at many points for the sake of comparison.

With the assumptions mentioned in the previous section, the model has seven important parameters: the birth and death rates of each species and the linear size L of the lattice. To reduce the number of parameters we arbitrarily fix all death rates at 0.01. We set the linear size of the lattice equal to 200 throughout the simulations. It was already shown in Ref. that for system sizes larger than 200 the viability of the food web depends only weakly on L. Thus, L = 200 constitutes a reasonable compromise between the computational effort and the quality of the results. All results presented below were obtained in Monte Carlo simulations with the Moore neighborhood (i.e. 8 neighboring sites) on the square lattice, because it turned out to be a good compromise between the richness of the resulting dynamics and the computational requirements. Among other choices we have tried the von Neumann neighborhood with only 4 neighboring sites and extended Moore-like neighborhoods with 12 and 20 sites. While the von Neumann neighborhood led to an absorbing state (see Section [asymptotics] for more details) independently of the actual model parameters, the extended neighborhoods yielded results which were qualitatively the same as in the less computationally demanding Moore case. Most simulations were performed up to a fixed maximum number of Monte Carlo steps (MCS). This particular value turned out to be a reasonable choice as well, because increasing it did not significantly modify the results. In other words, it is highly unlikely in our model that a system which is still alive at the end of this interval will die out afterwards. If required (e.g. for the phase diagrams in Sec. [phase diagrams]), we averaged the results over 100 independent runs in order to get good statistics.
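To make the update rules concrete, the following minimal Python sketch implements one Monte Carlo step of the basic (random) proliferation strategy. It is an illustration, not the authors' code: the boolean occupancy layers, the subscripted rate names (b_R, b_C, b_P, d_R, d_C, d_P), the periodic boundary handling and the order in which the rules are evaluated at a node are simplifying assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 200                                  # linear lattice size used in the paper
b_R, b_C, b_P = 0.5, 0.7, 0.2            # illustrative birth rates (not values from the paper)
d_R, d_C, d_P = 0.01, 0.01, 0.01         # death rates, all fixed at 0.01 in the paper

# Occupancy layers: at most one individual of each type per node (exclusion principle),
# but different species may share a node.
R = rng.random((L, L)) < 0.3
C = rng.random((L, L)) < 0.1
P = rng.random((L, L)) < 0.05
D = np.zeros((L, L), dtype=bool)         # detritus: presence only, not quantity

MOORE = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def update_node(i, j):
    """Apply the feeding/breeding/death rules of the basic strategy to node (i, j).

    Periodic boundaries are assumed here for simplicity."""
    neigh = [((i + di) % L, (j + dj) % L) for di, dj in MOORE]
    rng.shuffle(neigh)
    if C[i, j] and R[i, j]:                       # consumer eats the resource ...
        R[i, j] = False
        if rng.random() < b_C:                    # ... and breeds into one free neighbouring node
            for a, b in neigh:
                if not C[a, b]:
                    C[a, b] = True
                    break
    elif C[i, j] and rng.random() < d_C:          # starving consumer dies and becomes detritus
        C[i, j] = False
        D[i, j] = True
    if P[i, j] and C[i, j]:                       # predator eats the consumer, analogous breeding
        C[i, j] = False
        if rng.random() < b_P:
            for a, b in neigh:
                if not P[a, b]:
                    P[a, b] = True
                    break
    elif P[i, j] and rng.random() < d_P:          # starving predator dies and becomes detritus
        P[i, j] = False
        D[i, j] = True
    if R[i, j] and D[i, j]:                       # resource absorbs the detritus and may seed
        D[i, j] = False                           # every empty neighbour with probability b_R
        for a, b in neigh:
            if not R[a, b] and rng.random() < b_R:
                R[a, b] = True
    elif R[i, j] and rng.random() < d_R:          # starving resource dies without leaving detritus
        R[i, j] = False

def mc_step():
    """One Monte Carlo step = L*L randomly chosen node updates."""
    for _ in range(L * L):
        update_node(rng.integers(L), rng.integers(L))
```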
Since the asymptotic states of the system have already been discussed in Refs. , we give here only a brief summary of the most important findings. In the case of the basic strategy and of the two strategies in which consumers or predators follow their food (see Sec. [model] for definitions), the system ends up in one of two distinct asymptotic states, depending on the particular values of the model parameters: in an absorbing one with all species extinct, or in a coexisting one, in which all species survive until the end of a simulation. Theoretically, one could also expect a state with the species R and C having non-zero concentrations and P being extinct. Such a state was not observed in the simulations. Thus the predators seem to be indispensable for the survival of the system.

The three other strategies considered here, i.e. those in which the consumers avoid nodes occupied by predators, end up in an absorbing state independently of the actual model parameters. The reason is as follows. The presence of the predators in the system is required to control the number of consumers. Otherwise the consumer population grows rapidly and kills all available resources, driving the entire system to extinction. According to Table [look-up], only nodes occupied by both a consumer and a predator can lead to the migration of the predators, which is needed to maintain their population. In general, there are three sources of those states: (1) states resulting from the initial conditions, (2) predators which put their progeny on nodes occupied by C's and (3) consumers which put their offspring on nodes occupied by P's. In the strategies that can end up in a coexisting state, all those sources are present. However, in the strategies in which consumers avoid nodes occupied by predators, the third source is missing. Although predators from the initial states (source 1) are able to proliferate or even migrate farther if there are some consumers in their neighborhoods (source 2), there is no way for them to proceed with proliferation after all consumers in their vicinity have been eaten up, because no node will be populated by a new C. As a consequence, they die out after a while.

A closer look at the absorbing states reveals that there are two different ways for the system to reach them. Depending on the parameters of the model, in some cases the concentrations of the species decrease almost monotonically towards zero. The other possibility is that well pronounced peaks in the species densities appear before extinction. Examples of each asymptotic state in the case of the basic strategy are shown in Fig. [fig:asymptotics]. The curves were obtained from single runs. The parameters of the simulations may be found in the plots and their captions.

[Fig. [fig:asymptotics]: time evolution for the basic strategy. In the alive state (top plot) all species survive till the end of the simulation. In the absorbing state (bottom plots) the entire system dies out. There are two distinct ways for the system to reach the absorbing state: an almost monotonic decrease of the species densities (bottom left) and well pronounced peaks of the densities before the extinction (bottom right). Birth rates used in the simulations are indicated in the plots. Death rates were all set to 0.01 and the linear size of the lattice was equal to 200.]
We will show that the peaks seen in the bottom right plot of Fig. [fig:asymptotics] correspond to density waves traveling through the system. The analysis of those waves is the main goal of this paper.

In Fig. [fig:asymptotics 2] the time evolution of two of the densities from the bottom right plot of Fig. [fig:asymptotics] is presented again in a smaller time window. As far as the resources are concerned, we observe rapid explosions of their population followed by almost equally rapid declines. This behavior bears a resemblance to the characteristics of an excitable medium. Moreover, a similar time evolution may be found in processes that take place in plankton populations and are known as "spring bloom" and "red tide" phenomena.

[Fig. [fig:asymptotics 2]: see the bottom right plot in Fig. [fig:asymptotics] for more details.]

Let us now have a closer look at the system at the microscopic level. Snapshots of the system at different MC steps are shown in Fig. [fig:patterns].
In the first snapshot, taken just before the last peak in the density of R shown in Fig. [fig:asymptotics 2], there is almost no activity in the system. Most of the nodes are in one of the two low-activity states (yellow and grey in Fig. [fig:patterns]), and there are only a few resources and predators. However, there is a small cluster of R in the top center part of the lattice. This cluster grows, splits into two parts and moves away from its origin, as may be seen in the later snapshots. Note that it is followed by an even bigger cluster of C's. Both clusters continue to grow and spread over the entire system. This fact is reflected by the peaks in the densities seen in Fig. [fig:asymptotics 2]. Finally the resources hit the boundaries of the system (or the front of another wave) and have no place to escape from the C's any further. They are then diminished very quickly by the consumers' wave. Subsequently the consumers partially die out due to the lack of food, and the densities of both species decline rapidly.

[Fig. [fig:patterns]: snapshots of the system at different MC steps. Initially most of the nodes are in a yellow or grey state and there is only a small cluster of R (red) in the top center part of the lattice. This cluster grows, splits into two parts and moves away from its origin. It is followed by an even bigger cluster of C's. Both clusters continue to grow and spread over the entire system. Finally the resources hit the boundaries of the system (or the front of another wave), have no place to escape from the C's and are diminished very quickly by the consumers' wave. Then the consumers partially die out due to the lack of food and most of the nodes return to the two low-activity states.]

Since the traveling waves presented above were observed only for some particular values of the simulation parameters, i.e. the birth rates of the species at the different trophic levels, it is to be expected that special conditions are necessary for an R wave to emerge (see Fig. [fig:rwaves]). First, there must be an RP node, i.e. a node occupied by both a resource and a predator, so that the R is protected from being eaten by a C for some time interval. If the P dies due to the lack of food, it provides nutrient for the R, which then proliferates to the neighboring sites. Second, if the origin node is surrounded by D's, the progenies are able to feed immediately and produce offspring themselves.
In this way the initial cluster seen in the top left snapshot in Fig. [fig:patterns] is created. Third, the overall concentration of D must be high in order to feed the wave throughout the entire process.

[Fig. [fig:rwaves]: R blooms. A resource on an RP node is protected by the predator from being eaten by a consumer. If the P dies, it provides food for the R and lets it proliferate. If the origin node is surrounded by detritus, the newborns get enough food to reproduce and an avalanche starts.]

As may be seen from Fig. [fig:patterns], the R wave is followed by a C bloom. An explanation for this phenomenon may be found in Fig. [fig:cwaves]. The avalanche is triggered by the front of an R wave hitting a node occupied by a consumer. Since in all proliferation strategies under consideration the resources follow a random behavior pattern, in the situation presented in Fig. [fig:cwaves] an offspring will be put on the node occupied by the C as well. As a result, the consumer gets food and starts to proliferate. Since there is plenty of food brought by the R wave, the new consumers start to invade the lattice and a C wave emerges.

[Fig. [fig:cwaves]: C blooms. If an R wave hits a node populated by a C, the consumer gets food and starts to produce offspring. Since there is plenty of food for the newborns brought by the R wave, they start to invade the lattice and a C wave emerges.]

In the previous section it has already been mentioned that an RP node with detritus in its vicinity is necessary to feed the R wave front (see Fig. [fig:rwaves]). Let us now investigate these conditions in more detail. We begin by checking the frequency of RP events. As follows from Fig. [fig:rp freq], although their frequency decreases slightly in the course of time, such events occur throughout the low density phase, not only just before the emergence of a new wave. Thus, an RP node surrounded by detritus is not a sufficient condition for the wave.

[Fig. [fig:rp freq]: RP events superimposed on the resource density.]
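Since the analysis repeatedly refers to RP events, a small helper for counting them may be useful; it assumes the boolean occupancy layers R and P introduced in the sketch above, which are our own construction and not part of the original code.

```python
import numpy as np

def rp_event_density(R, P):
    """Fraction of lattice nodes occupied simultaneously by a resource and a predator (RP nodes)."""
    return np.count_nonzero(R & P) / R.size
```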
In Fig. [fig:d cluster dist], distributions of detritus cluster sizes at two different MC steps are presented. Both distributions were measured in the low density state of the resources. However, while the left plot falls into the time interval just after the density decline, the right plot describes the phase just before the formation of a new wave. We see that the two distributions differ significantly. At the beginning of the low density phase there are plenty of small detritus clusters. If an RP node is surrounded by such a small cluster, the emerging wave is suppressed sooner or later due to the lack of food for the newborns.

[Fig. [fig:d cluster dist]: distributions of detritus clusters at two different MC steps. In the left plot there are many small clusters on the lattice. The cluster distribution in the right plot is different: there is one big cluster comparable with the size of the lattice and only a few small ones. Bottom: snapshots of the clusters corresponding to the above distributions.]

The situation changes upon entering the state corresponding to the right plot in Fig. [fig:d cluster dist]. There are just a few small clusters and one big cluster with a size comparable to the size of the lattice. If an R wave starts to form at the boundary or in the bulk of the big cluster, it can then travel through the entire system. Of course, the cluster itself will be destroyed by the wave and it will take a while to rebuild it. That is why not all RP events lead to a density bloom and why we observe low density states in the system.

Our findings so far indicate that there could be a connection between the emergence of density waves and the percolation of detritus on the lattice. In order to verify this hypothesis we checked at every step of our simulations whether there exists a cluster of detritus spanning the whole lattice. The results are shown in Fig. [fig:r vs perc]. We see that there are indeed percolation periods (indicated by the rectangles) separated by time intervals in which no spanning clusters occur. Moreover, the peaks in the resource density occur at the ends of the percolation intervals. The reason is simply that after the emergence of a spanning cluster it takes some time for the wave to form and then to travel.

[Fig. [fig:r vs perc]: resource density vs percolation intervals of the detritus. To plot the intervals, we assigned the value of 10 to each time step in which a detritus spanning cluster was detected, and 0 otherwise.]
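The detritus cluster statistics and the spanning-cluster (percolation) test described above can be obtained, for instance, with scipy.ndimage. The sketch below assumes the detritus configuration is stored as a boolean array D and uses nearest-neighbour connectivity; both are choices made here rather than details stated in the text.

```python
import numpy as np
from scipy import ndimage

def detritus_clusters(D):
    """Label connected detritus clusters and test for a spanning (percolating) cluster.

    D is a boolean (L, L) occupancy array for the detritus.
    Returns the array of cluster sizes and a flag telling whether any cluster
    connects two opposite edges of the lattice."""
    labels, n = ndimage.label(D)                       # nearest-neighbour connected components
    sizes = ndimage.sum(D, labels, index=np.arange(1, n + 1))
    top, bottom = set(labels[0, :]) - {0}, set(labels[-1, :]) - {0}
    left, right = set(labels[:, 0]) - {0}, set(labels[:, -1]) - {0}
    spanning = bool(top & bottom) or bool(left & right)
    return sizes, spanning

# Example: cluster-size histogram and percolation flag for a random configuration.
D = np.random.default_rng(1).random((200, 200)) < 0.6
sizes, spans = detritus_clusters(D)
```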
It should be mentioned that no spanning clusters were observed in the case of a coexistence state or of an almost monotonic decline (see Section [asymptotics] for details). In both cases the cluster distributions at every time step were similar to the left plot of Fig. [fig:d cluster dist]. It seems that the density blooms are possible only if percolation of the detritus occurs in the system.

The percolating detritus clusters shown in Fig. [fig:r vs perc] may be interpreted as an accumulation of nutrients in the system. In ecology, the response of an ecosystem to increased levels of nutrients is known as eutrophication. A high level of nutrients stimulates the primary production, causing a quick population increase (i.e. a bloom) of species such as algae in aquatic systems. These blooms may have many ecological effects, among others a decreased biodiversity, changes in species composition and dominance, and toxicity effects.

To collect more information on the behavior of the system we performed a series of simulations with different sets of the model parameters. All birth rates of the species were varied from 0.1 to 0.9 with step 0.1. For each parameter set we performed 100 independent runs. A set was tagged as an alive one if at least one out of those 100 runs ended up in a coexisting state. Similarly, a set was labeled as a wave one if the density waves were observed in at least one of the runs. The collection of all alive sets constitutes the alive phase. The remaining sets form the dead phase, which is divided into two subphases: the (almost) monotonic one and the wave one (see Section [asymptotics] for more details). From our experience it follows that the transitions between the phases are very sharp, i.e. if for a particular set of parameters one run ended in a coexistence state, then most of the 100 runs led to the coexistence state as well.
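A sketch of the parameter sweep used to build such a discrete phase diagram is given below. The helper classify_run, which should wrap a single lattice simulation and report its outcome, is hypothetical and has to be supplied by the reader (for instance on top of the Monte Carlo sketch given earlier).

```python
import itertools

def build_phase_diagram(classify_run, n_runs=100):
    """Scan all birth-rate triples (b_R, b_C, b_P) in {0.1, ..., 0.9}^3.

    `classify_run(b_R, b_C, b_P)` is assumed to return 'alive', 'wave' or 'dead'
    for a single run; it is not defined here."""
    rates = [round(0.1 * k, 1) for k in range(1, 10)]
    phase = {}
    for b in itertools.product(rates, rates, rates):
        outcomes = [classify_run(*b) for _ in range(n_runs)]
        phase[b] = {
            "alive": "alive" in outcomes,   # tagged alive if at least one run coexists
            "wave": "wave" in outcomes,     # tagged wave if waves appeared in at least one run
        }
    return phase
```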
From the results of the above simulations one can generate a discrete 3D phase diagram in the space of the three birth rates. An example of such a diagram is shown in Fig. [fig:phase diag 3d (2)]. However, since those diagrams are not easily readable, we will analyse their 2D sections at fixed resource birth rates instead.

[Fig. [fig:phase diag 3d (2)]: discrete 3D phase diagram in the space of the birth rates. Circles correspond to the wave states, crosses to the alive states. All birth rates take values from 0.1 to 0.9, with step 0.1.]

In Fig. [fig:phase diag 2d], 2D phase diagrams in the consumer-predator birth rate plane at a fixed resource birth rate are shown for the different proliferation strategies. For other values of the resource birth rate we would get similar diagrams with only small changes in the areas of the different phases. This is due to the fact that the viability of the system depends only weakly on the birth rate of the resources.

[Fig. [fig:phase diag 2d]: 2D phase diagrams at a fixed resource birth rate for the different proliferation strategies. Birth rates take values from 0.1 to 0.9, with step 0.1.]

We see that for each proliferation strategy there is a critical value of the consumers' birth rate above which the density waves appear in the system. In other words, only if the birth rate of the consumers is high enough are they able to follow the resource wave front according to the mechanism shown in Fig. [fig:cwaves]. Otherwise they invade the lattice too slowly and the resources die out before being caught up by the consumers. In this case we observe, after an initial density bloom, an almost monotonic decline of the species densities towards the absorbing state (bottom left plot of Fig. [fig:asymptotics]). Moreover, in the case of the strategies that support the alive state as well, we also observe a critical value of the predators' birth rate below which the density blooms occur. Thus the emergence of the waves requires the birth rate of the predators to be small enough to allow an almost uncontrolled bloom of the consumers. If it is higher than the critical value, the predators are able to suppress the quick increase of the consumer density and the system arrives at a coexistence state. These results corroborate the existence of "top-down" interactions that impart stability to food webs.

In order to provide more information on the different proliferation strategies, we have counted both the alive and the wave states in discrete 3D phase diagrams such as the one shown in Fig. [fig:phase diag 3d (2)]. The results are shown in Fig. [fig:total wave and alive]. Again, we see that the proliferation strategies split into two groups: those which support only the two absorbing states (the strategies in which consumers avoid predator-occupied sites) and those which can end up in a coexisting state as well. As far as the first group is concerned, the strategies are very similar to each other, since the volume of the wave phase depends only little on the particular strategy. Among the strategies building up the other group, two are strongly similar to each other: while the volume of the alive phase is slightly bigger for one of them, we observe more wave states in the phase diagram of the other. The remaining strategy differs from the other two in the size of the phases. However, the differences are smaller in the case of the wave states.

A simple spatial model of a three level food web with a closed nutrient cycle has been investigated. The results complement our previous findings on the stability of such a system. The time evolution of the model food web reveals two asymptotic states (Fig. [fig:asymptotics]): an absorbing one with all species extinct, and a coexisting one, in which the concentrations of all species are non-zero. Theoretically, one could also expect a state with the species R and C having non-zero concentrations and P being extinct. Such a state was not observed in the simulations, because the predators are indispensable for the survival of the system. We found two possible ways for the system to reach the absorbing state, depending on the particular values of the model parameters. In some cases the densities increase very quickly at the beginning of a simulation and then decline slowly and almost monotonically (small fluctuations disregarded). In others, well pronounced peaks in the species densities appear regularly before the extinction. Those peaks correspond to density waves traveling through the system, a phenomenon which is often observed in many complex systems in nature.
Understanding the mechanism that triggers those waves was the focus of the present work. We have shown that several conditions have to be met for the waves to emerge:

1. An RP node surrounded by detritus is needed to initiate the resource wave.
2. A spanning cluster of the detritus on the lattice is required to feed the front of the resource wave.
3. The consumers must be able to reproduce quickly enough to follow the resource wave; in other words, their birth rate must be higher than a critical value.
4. The birth rate of the predators must be lower than a critical value, to keep the number of predators low and to allow an almost uncontrolled growth of the consumers, followed by their quick decline due to natural causes.

The existence of a critical value of the predators' birth rate, above which the food web enters the coexistence phase (see Fig. [fig:phase diag 2d]), is consistent with the hypothesis of top-down interactions being essential for the stability of food webs. Indeed, the emergence of the waves requires the birth rate of the predators to be small enough to allow an almost uncontrolled bloom of the consumers. If it is higher than the critical value, the predators are able to suppress the quick increase of the consumer density and the system survives.

The coupling between percolation and spatio-temporal patterns has already been assumed or observed in many complex systems. Since the percolation of the detritus in our model corresponds to an accumulation of nutrients, the processes we observe in the model bear some resemblance to the eutrophication phenomena occurring in both aquatic and terrestrial ecosystems. Within our simple model this accumulation induces density blooms which drive the entire food web to extinction. In real ecosystems the outcome of eutrophication is usually not lethal for the system as a whole. Nevertheless, its ecological impact is severe, since it may result in loss of species, decreased biodiversity, changes in species composition and dominance, or toxicity effects. Identifying percolation as one of the triggers of the density blooms may have a practical impact on biological control. From the theory of percolation in disordered media it follows that it is enough to destroy some critical fraction of "transmission-promoting" sites to suppress the propagation of waves and their negative effects on the system. However, it should be checked which asymptotic state the system reaches after such an intervention.

References:
Ch. S. Elton, Animal Ecology, Sidgwick and Jackson, London (1927). Reprinted several times.
P. J. Morin and S. P. Lawler, Ann. Rev. Ecol. Syst. 26, 505 (1995).
R. J. Williams and N. D. Martinez, Nature 404, 180-183 (2000).
B. Drossel, Adv. Phys. 50, 209 (2001).
L. A. Nunes Amaral and M. Meyer, Phys. Rev. Lett. 82, 652 (1999).
A. Pękalski, J. Szwabiński, I. Bena and M. Droz, Phys. Rev. E 77, 031917 (2008).
D. L. DeAngelis, P. J. Mulholland, A. V. Palumbo, A. D. Steinman, M. A. Huston and J. W. Elwood, Ann. Rev. Ecol. Syst. 20, 71 (1989).
R. MacArthur, Ecology 36, 533 (1955).
R. M. May, Nature 238, 413 (1972).
D. B. Stouffer and J. Bascompte, PNAS 108, 3648-3652 (2011).
E. Thébault and C. Fontaine, Science 329, 853-856 (2010).
R. M. Thompson, Ecology 88, 612-617 (2007).
A.-M. Neutel, J. A. P. Heesterbeek, J. van de Koppel, G. Hoenderboom, A. Vos, C. Kaldeway, F. Berendse and P. C. de Ruiter, Nature 449, 599-602 (2007).
J. A. Estes et al., Science 333, 301-306 (2011).
J. C. Moore, E. L. Berlow, D. C. Coleman, P. C. de Ruiter, Q. Dong, A. Hastings, N. C. Johnson, K. C. McCann, K. Melville, P. J. Morin, K. Nadelhoffer, A. D. Rosemond, D. M. Post, J. L. Sabo, K. M. Scow, M. J. Vanni and D. H. Wall, Ecology Lett. 7, 584 (2004).
G. Polis and D. R. Strong, Am. Nat. 147, 813 (1996).
G. A. Polis, W. B. Anderson and R. D. Holt, Ann. Rev. Ecol. Syst. 28, 289 (1997).
K. McCann, Nature 405, 228 (2000).
P. Amarasekare, Annu. Rev. Ecol. Evol. Syst. 39, 479-500 (2008).
Y. Kuramoto, Chemical Oscillations, Waves, and Turbulence, Courier Dover Publications (2003).
B. Grenfell, O. Bjørnstad and J. Kappey, Nature 414, 716-723 (2001).
R. Snyder, Ecology 84, 1333-1339 (2003).
O. Hallatschek, PNAS 108, 1783-1787 (2011).
J. Szwabiński, A. Pękalski, I. Bena and M. Droz, Physica A 389, 2545-2556 (2010).
J. Szwabiński, Physica A 391, 5479-5489 (2012).
D. Mackenzie, http://www.fisherycrisis.com/fisheggs.html, accessed November 6th, 2012.
H. T. Odum, Ecological Monographs 27, 55-112 (1957).
J. D. Murray, Mathematical Biology, Springer Verlag, Berlin (1990).
J. E. Truscott and J. Brindley, Bulletin of Mathematical Biology 56, 981 (1994).
J. M. Malmaeus and L. Hakanson, Ecological Modelling 171, 35-63 (2004).
L. Horrigan, R. S. Lawrence and P. Walker, Environmental Health Perspectives 110, 445-456.
M. D. Bertness, P. J. Ewanchuk and B. R. Silliman, PNAS 99, 1395-1398.
D. M. Anderson, Scientific American 271, 62-68 (1994).
K. Magori, W. I. Bajwa, S. Bowden and J. M. Drake, PLoS Comput. Biol. 7, e1002104 (2011).
B. T. Grenfell, O. N. Bjørnstad and J. Kappey, Nature 414, 716-723 (2001).
G. Kondrat and K. Sznajd-Weron, Phys. Rev. E 79, 011119 (2009).
A. Lipowski, Physica A 268, 6 (1999).
D. Stauffer and A. Aharony, Introduction to Percolation Theory, CRC Press (1994).
A spatial three level food web model with a closed nutrient cycle is presented and analyzed via Monte Carlo simulations. The time evolution of the model reveals two asymptotic states: an absorbing one with all species extinct, and a coexisting one, in which the concentrations of all species are non-zero. There are two possible ways for the system to reach the absorbing state. In some cases the densities increase very quickly at the beginning of a simulation and then decline slowly and almost monotonically. In others, well pronounced peaks in the species densities appear regularly before the extinction. Those peaks correspond to density outbursts (waves) traveling through the system. We investigate the mechanisms leading to the waves. In particular, we show that the percolation of the detritus (i.e. the accumulation of nutrients) is necessary for the emergence of the waves. Moreover, our results corroborate the hypothesis that top-level predators play an essential role in maintaining the stability of a food web (top-down control).

Keywords: Monte Carlo simulations, food webs, food web viability, predator-prey systems, nutrient cycle, density outbursts, traveling waves, percolation
Over the last years agent-based modeling has become a powerful simulation technique for a number of applications, ranging from ecology to economics and financial systems, and from traffic and supply chain networks to biological and social systems. Agents are treated as unique and discrete entities, allowing a straightforward way to incorporate detailed interactions between individuals via a set of microscopic rules. Modelers use the fine-scale evolution at the individual agent level to study the macroscopic, population level dynamical behavior, often referred to as _emergent_ behavior. It would of course be desirable (and computationally much more efficient) to simulate this emergent behavior through a continuum macroscopic model. However, accurately accounting for the consequences of individual interactions at the macroscopic level can be highly nontrivial. As a consequence, accurate macroscopic equations are typically unavailable, even in cases when a clear separation of time scales strongly suggests that such equations _should_ exist.

In this work, we consider a financial market agent-based model, in which a stochastic process describes the "motion" of an individual agent along a spatial domain representing the agent's propensity to buy or sell. This stochastic process needs to be simulated for a large number of (interacting) agents over long times. Doing this over the entire spatiotemporal domain of interest to observe the relevant macroscopic variables (e.g., agent density) incurs a prohibitive computational cost. The recently developed _equation-free framework_ can be applied to exploit the time-scale separation between the (fine-scale) individual dynamics and the (coarse-scale) population-level behavior by performing the expensive agent-based simulations in a grid of small patches, which cover only a fraction of the spatiotemporal domain. Exploiting smoothness, the solution for the entire region is approximated by repeated interpolation/extrapolation of the macroscopic variables recorded within these patches. This framework is built around the central idea of a coarse time-stepper, which advances the coarse variables over a time interval of size δt. It consists of the following steps: (1) _lifting_, i.e., the creation of appropriate initial conditions for the microscopic model; (2) fine-scale _evolution_; and (3) _restriction_, i.e., the estimation of macroscopic observables from the fine-scale solution. A generic sketch of this construction is given below.
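The following is a generic sketch of the coarse time-stepper and of coarse projective integration; the callables lift, evolve and restrict stand for the problem-specific operators (e.g. creating agent configurations, running the agent-based code and averaging densities) and are assumptions of this illustration, not part of any published framework code.

```python
import numpy as np

def coarse_time_stepper(U, delta_t, lift, evolve, restrict):
    """One coarse step of size delta_t: lift -> fine-scale evolution -> restrict.

    U is the array of coarse variables (e.g. average densities)."""
    u_fine = lift(U)                      # (1) create consistent microscopic initial conditions
    u_fine = evolve(u_fine, delta_t)      # (2) run the microscopic simulator for delta_t
    return restrict(u_fine)               # (3) estimate the macroscopic observables

def projective_step(U, delta_t, n_inner, Delta_T, lift, evolve, restrict):
    """Coarse projective (forward Euler) integration: take a few coarse steps,
    estimate the coarse time derivative and extrapolate over the larger step Delta_T."""
    states = [np.asarray(U, dtype=float)]
    for _ in range(n_inner):
        states.append(coarse_time_stepper(states[-1], delta_t, lift, evolve, restrict))
    dUdt = (states[-1] - states[-2]) / delta_t     # estimated macroscopic time derivative
    return states[-1] + Delta_T * dUdt             # projective extrapolation in time
```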
This coarse time-stepper can subsequently be used as input for time-stepper-based "wrapper" algorithms performing macroscopic numerical analysis tasks. These include, for example, time-stepper based bifurcation codes to perform coarse bifurcation analysis for the unavailable macroscopic equation, and other system level tasks such as rare event analysis, control and optimization.

The patch dynamics scheme is a combination of two sub-schemes: the gap-tooth scheme (which interpolates macroscopic properties in space) and the coarse projective integration scheme (which extrapolates macroscopic properties in time). First, in the gap-tooth scheme, microscopic simulations are only performed in a number of small subdomains of the spatial domain (intervals in the case of one space dimension); in between these patches lie what we call gaps. We continue the discussion for the one-space-dimension paradigm that fits our application. A coarse time-δt map is then constructed as follows. We first choose a number of macroscopic grid points and a small interval ("tooth") around each grid point; we initialize the fine scale consistently with a macroscopic initial condition profile; apply the microscopic solver within each interval, using appropriate boundary conditions, for a time δt; and, finally, obtain macroscopic information at time δt (e.g., by computing the average density in each tooth). The initial conditions for each of these teeth are obtained via interpolation over neighboring teeth; this can be seen to be equivalent to constructing a macroscopic finite difference approximation. In the gap-tooth scheme this procedure is then repeated. We refer to the literature for an illustration of this scheme with particle-based simulations of the viscous Burgers equation. Second, exploiting the smoothness of the coarse variable evolution in the _temporal_ domain, one can combine the gap-tooth scheme with a projective integration scheme. In this context, we perform a number of gap-tooth steps of size δt to obtain an estimate of the _time derivative_ of the unavailable macroscopic equation. Based on this estimate, a projective step is subsequently taken over a larger time ΔT. This combination has been termed _patch dynamics_.

In previous work, the gap-tooth and patch dynamics schemes have been applied to study a diffusion homogenization problem, as well as a biological dispersal model. For the kind of agent-based problems we study in this work, a conservation law needs to be satisfied. For this reason, instead of basing the gap-tooth scheme on a finite difference approximation, we design a finite volume-based patch dynamics scheme. We first apply this scheme to an approximate continuum model of the agent-based dynamics to illustrate the scheme, as well as to facilitate our error analysis. In this case, our "inner" microscopic model is a fine discretization of this continuum model, and our microscopic simulator is a classical finite volume scheme within each tooth. In general, a given microscopic code only allows us to run it with a set of predefined boundary conditions. It is highly non-trivial to impose macroscopically inspired boundary conditions on such microscopic codes; see, e.g., the literature for a control-based strategy.
As in earlier gap-tooth work, the boundary conditions on the teeth are imposed _via the use of buffer regions_ surrounding the teeth, in order to "protect" the tooth inside each simulation unit from boundary artifacts. We call a "simulation unit" the union of a tooth and its edge buffers. After several gap-tooth steps, we project the macroscopic properties forward in time over a bigger step. Based on the projected macroscopic properties, we again construct consistent microscopic initial conditions for the simulation units, and the cycle repeats. We study the performance of this patch dynamics scheme, and compute its order of accuracy both analytically and numerically. Subsequently, we apply the scheme to the agent-based model, and observe that, typically, the expensive microscopic simulations are required in only a small fraction of the spatial and temporal domains.

These micro/macro ideas also form the basis of the heterogeneous multiscale method. There, a macro-scale solver is combined with an estimator for quantities that are unknown because the macroscopic equation is not available. This estimator subsequently uses appropriately constrained runs of the microscopic model. For a recent overview of results obtained in that framework, we refer to the literature.

The remainder of this paper is organized as follows: in Section 2 we introduce an agent-based financial market model based on the work of Omurtag and Sirovich and the associated continuum approximation they derive. In Section 3 we describe the "inner" finite volume scheme for the continuum model. In Section 4 we describe the patch dynamics scheme. In Section 5 we first present patch dynamics results using the discretization introduced in Section 3 as the inner solver, and then analyze the order of consistency of this scheme both analytically and numerically. The main result, the agent-based patch dynamics computations, is also presented in Section 5. We conclude with a brief summary and discussion in Section 6.

We consider a financial market model initially described by Omurtag and Sirovich. This model simulates the actions of buying and selling by a large population of interacting individuals in the presence of mimesis. In this model, the i-th agent's propensity to buy or sell is indicated by its state x_i, which evolves according to two coupled processes. The first process is the exponential decay, at a constant rate γ, of x_i towards the neutral state x = 0. This decay implies that each agent gradually forgets its current state and tends to eventually become neutral in its preference for buying or selling. The second process is a stochastic jump process which represents the effect of incoming information on each agent's state. It incorporates three factors: (1) the arrival times of incoming information; (2) the type of this information (i.e., "good" or "bad"); and (3) the quantitative effect of this information on x_i. The arrival of incoming information is assumed to follow a Poisson process with mean arrival frequencies ν^+ and ν^-, where ν^+ denotes the mean arrival frequency of good news and ν^- the one of bad news. The mean arrival frequencies are given by

ν^+(t) = a^+ + g B(t),   ν^-(t) = a^- + g S(t),
where the parameters a^+ and a^- represent the contribution of _external_ information each individual receives from its environment (e.g., mass media news or opinions of stock market consultants) and are assumed time-independent. The quantities B(t) and S(t) are the _buy rates_ and _sell rates_, respectively, defined as the number of buys or sells happening in the market per unit time over a small finite time horizon (which we call the _reporting horizon_), normalized by the total number of agents in the market. The parameter g is a feedback constant which quantifies the extent to which the individuals' propensity to buy or sell is determined by the buy and sell rates. Note that the buy and sell rates are collective properties of the population (i.e., of all the agents in the market), which implies that the second term of the arrival frequencies, g B(t) or g S(t), embodies how each individual agent's state is affected by the collective behavior of the entire population at that moment in time.

Each arriving "quantum" of information has probability ν^+/(ν^+ + ν^-) to be good news and ν^-/(ν^+ + ν^-) to be bad news. When good news arrives for agent i, the value of x_i is increased instantaneously - it jumps - by a fixed amount ε; similarly, when bad news arrives, x_i decreases instantaneously, changing by -ε. If, after a positive jump, the value of x_i exceeds the right boundary (i.e., x_i > 1), then a "buy" is considered to have occurred and the number of buys for that time interval is increased by one. Similarly, after x_i crosses the left boundary (i.e., x_i < -1), the number of sells is increased by one. In either case, x_i is set back to the neutral state (i.e., x = 0, with a small random offset to prevent discontinuities) after the number of buys or sells is updated. In this way, each individual agent's decision affects the population's collective behavior. This discrete jump process, combined with the previously described exponential decay, forms the evolutionary rule for each agent's state, which can be summarized by the stochastic differential equation

dx_i/dt = -γ x_i + ε [dN_i^+(t)/dt - dN_i^-(t)/dt],

where N_i^+(t) and N_i^-(t) are Poisson counting processes with rates ν^+ and ν^-, respectively.

It is possible to derive a concise approximate description of the dynamics of a large assembly of agents by keeping track of only the density ρ(x,t) of agents at each state, rather than the individual states of every agent in the population. The resulting approximate evolution equation for ρ, averaged over a large number of replicas of the population, is given by

∂ρ/∂t = ∂/∂x (γ x ρ) + ν^+ [ρ(x-ε,t) - ρ(x,t)] + ν^- [ρ(x+ε,t) - ρ(x,t)] + [B(t) + S(t)] δ(x).

For small ε, by expanding about x in terms of ε and truncating terms of higher than second order in ε, one obtains a Fokker-Planck-type approximation,

∂ρ/∂t = ∂/∂x [γ x ρ - ε (ν^+ - ν^-) ρ] + (ε²/2)(ν^+ + ν^-) ∂²ρ/∂x² + [B(t) + S(t)] δ(x).

Because agents leave the domain at x = ±1 and are restored at the origin, we have the absorbing boundary conditions ρ(±1, t) = 0 and a Dirac birth term at the origin. The buy and sell rates are defined as the outgoing fluxes at the boundaries x = 1 and x = -1, respectively. The Fokker-Planck-type equation, with its non-local reinjection term, is the approximate continuum model in our study.
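A direct simulation of this agent model might look as follows. The parameter values, the use of the integration step dt as the reporting horizon for the buy and sell rates, and the Euler-style treatment of the decay are illustrative choices made here, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                          # number of agents
gamma, eps, g = 1.0, 0.1, 20.0      # decay rate, jump size, feedback constant (illustrative)
a_plus = a_minus = 1.0              # external news arrival rates (illustrative)
dt, T = 1e-3, 10.0

x = np.zeros(N)                     # agents' propensities, all neutral initially
buy_rate = sell_rate = 0.0          # buy/sell rates over the previous reporting horizon

for step in range(int(T / dt)):
    nu_plus = a_plus + g * buy_rate         # mean arrival frequency of good news
    nu_minus = a_minus + g * sell_rate      # mean arrival frequency of bad news
    # Poisson arrivals of good/bad news during this small time step, one draw per agent.
    good = rng.poisson(nu_plus * dt, N)
    bad = rng.poisson(nu_minus * dt, N)
    x += -gamma * x * dt + eps * (good - bad)   # exponential decay plus instantaneous jumps
    buys = x >= 1.0
    sells = x <= -1.0
    # Reinject deciding agents near the neutral state with a small random offset.
    n_dec = np.count_nonzero(buys | sells)
    x[buys | sells] = 1e-3 * rng.standard_normal(n_dec)
    buy_rate = np.count_nonzero(buys) / (N * dt)    # rates per unit time, normalised by N
    sell_rate = np.count_nonzero(sells) / (N * dt)
```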
returning to the finite volume scheme : except for the central grid cell , which will be treated separately because of the particle reinjection , can be rewritten as define the function as the flux term so that the integral form ( in space ) of gives integrating in time from to and dividing by gives eq . ( [ complete : eqn ] ) . based on , can be rewritten as where is the average density defined in eq . ( [ q : eqn ] ) , and and denote the average fluxes ( i.e. fluxes averaged over the time step ) across cell edges , that is , $\bar{f}_{i-1}^{\,n}=\frac{1}{\Delta t}\int_{t_n}^{t_{n+1}}f(x_{i-1},t)\,dt$ and $\bar{f}_{i}^{\,n}=\frac{1}{\Delta t}\int_{t_n}^{t_{n+1}}f(x_{i},t)\,dt$ . based on ( [ f : eqn ] ) , the average fluxes over the time step are approximated by . for a detailed discussion about the construction of diffusion fluxes for finite volume schemes , see . the fluxes at the outer two boundaries are expressed as ( recall that and ) f_0^n = -r^- -^2 and f_n^n = r^+ ^2 . we use an odd number of grid cells for this scheme . since the central grid cell centered at contains the source point at , two additional terms need to be added to for this cell . clearly , the above scheme is conservative by construction . in section 5 , this scheme is used as the `` inner '' solver to mimic the agent - based simulator . moreover , the associated patch dynamics scheme , introduced in the next section , will be inspired by this scheme , and will also be conservative at the agent level . as mentioned previously , the patch dynamics scheme combines gap - tooth and projective integration schemes . the gap - tooth scheme is illustrated in . we have divided the one dimensional space into two kinds of unequal - sized grid cells : the narrower cells , which we call teeth , and the wider cells , which we call gaps . we intend to solve the microscopic model only within the teeth ( plus some buffer regions to handle the teeth boundary conditions ) . however , we keep track of the average densities in _ both _ the teeth and the gaps to obtain a conservative scheme . as in the finite volume scheme , we want to update the average cell densities based on the fluxes at the edges of the cells , where and denote the tooth and gap size respectively . however , in contrast to the finite volume scheme , the fluxes and are not computed based on a known macroscopic equation ( which is usually not available for agent - based models ) . instead , our goal is to compute these fluxes based on the microscopic simulations inside a small fraction of the spatial domain - the `` simulation unit '' ( shown in fig . [ fig : lifting_2 ] ) . these separated simulation units are the only locations where microscopic simulations take place . the numerical simulation of the microscopic model in each simulation unit provides information on the evolution of the global state at the spatial location of the simulation unit , as if we were running the microscopic simulations in the entire spatial domain . therefore , it is crucial to choose appropriate boundary conditions , so that the solutions inside the teeth evolve as if the simulations were performed over the entire domain . for some cases it is possible to implement macroscopically - inspired constraints on the microscopic model as boundary conditions . however , this is not always the case .
to overcome this difficulty of imposing microscopic boundary conditions , as discussed in , we use a larger box - the simulation unit - to run the microscopic simulations , but still compute the flux used to update the macroscopic state at the teeth boundaries . each simulation unit consists of one tooth and its edge buffers ; the simulation units on the outer boundaries of the spatial domain are treated slightly differently , as discussed later . the purpose of the additional computational domains , the buffers , is to `` protect '' the teeth simulations from boundary artifacts . this can be accomplished over short enough time intervals , provided the buffers are large enough . as shows , we divide the one - dimensional spatial domain into an odd number , , of grid cells . the size of each cell is . we then put teeth ( narrow bins ) at the edges of these cells and put gaps ( wide bins ) between the teeth . there are teeth and gaps . the average densities for the teeth at time are denoted as , and the densities for the gaps are denoted as . we want to update these macroscopic properties at time based on the microscopic simulations inside the simulation units during the time step from to . the patch dynamics algorithm to proceed from to is given below :

1 . _ lifting _ . at time , create initial conditions inside each simulation unit for the microscopic simulator , consistent with the spatial profile of the macroscopic properties - the average densities inside the teeth and gaps .

2 . _ simulation _ . based on the microscopic initial condition constructed in step 1 , compute and by running the microscopic simulator inside the simulation units from time to .

3 . _ restriction _ . obtain the spatially averaged densities inside the teeth and gaps based on eq . ( [ patch1:eqn ] ) - ( [ patch2:eqn ] ) .

4 . _ projective step _ . estimate the macroscopic time derivative at time , e.g. , as , and use it in a time integration method of choice , e.g. , forward euler .

except for the two outer teeth that receive special treatment , we create the microscopic initial condition inside the simulation unit by constructing a local quadratic polynomial approximation , , to the density that matches the average density in a tooth and its two neighboring gaps as follows . let be the width of the tooth ( so is the width of the gap ) and require as shown in fig . [ fig : lifting_2]a , the microscopic initial conditions are constructed such that the averaged function values inside the tooth ( region 3 ) , and inside the neighboring gaps ( regions 1 and 5 ) equal the average densities in these regions . at the two boundaries , as fig . [ fig : lifting_2]b and c show , there are gaps only on _ one _ side of the boundary teeth , so the microscopic initial conditions are created in a slightly different manner .
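before turning to the boundary teeth , the interior lifting step can be made concrete : it amounts to solving a 3x3 linear system for the coefficients of the quadratic whose interval averages match the three prescribed average densities . the tooth / gap geometry and the density values below are illustrative assumptions .

```python
import numpy as np

def lift_quadratic(left_gap, tooth, right_gap, u_left, u_tooth, u_right):
    """return the coefficients (c0, c1, c2) of p(x) = c0 + c1*x + c2*x**2
    whose exact average over each of the three intervals (given as (lo, hi)
    pairs) equals the prescribed average density of that region."""
    def avg_moments(lo, hi):
        w = hi - lo
        # exact averages of 1, x and x**2 over [lo, hi]
        return np.array([1.0,
                         (hi**2 - lo**2) / (2.0 * w),
                         (hi**3 - lo**3) / (3.0 * w)])
    a = np.vstack([avg_moments(*left_gap), avg_moments(*tooth), avg_moments(*right_gap)])
    return np.linalg.solve(a, np.array([u_left, u_tooth, u_right]))

# usage: a tooth of width h centred at x = 0, gaps of width d - h on either side
h, d = 0.02, 0.10
c = lift_quadratic((-d + h / 2, -h / 2), (-h / 2, h / 2), (h / 2, d - h / 2),
                   u_left=0.8, u_tooth=1.0, u_right=0.7)
print("lifted quadratic coefficients:", c)
```

the same polynomial is later used to assign average densities to the fine bins of the simulation unit .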
at the left boundary , a local quadratic polynomial is constructed such that similarly , at the right boundary a local quadratic polynomial is constructed such that note that the _ left edge _ of the leftmost tooth ( and similarly the right edge of the rightmost tooth ) is put at the two boundaries , instead of their respective centers .for this reason , the outermost gaps are of a slightly different size ( ) compared to the inner ones ( which are of equal size ) .as mentioned previously , for our initial experiment and its analysis a fine - grid finite volume scheme is first used as theinner " , microscopic simulator .we emphasize again that the purpose of first using a fine discretization of as our inner " microscopic solver is to facilitate a numerical study of the errors involved - something we can not do when dealing directly with the agent - based simulator .we divide each simulation unit into fine bins , where denotes the number of fine bins inside the tooth while denotes the number of fine bins inside each buffer .all these fine bins are of equal size . based on the previously constructed local quadratic polynomial we initialize the averaged densities in the fine bins as follows : where denotes the center of each fine bin and the average densities inside the fine bin of the simulation unit ( as the red cross marks in fig .[ fig : simulation ] and [ fig : simulationbc ] denote ) .[ fig : simulation ] illustrates the simulation step in the bulk of the spatial domain .starting from the previously constructed local quadratic polynomial , we first initialized the average densities inside the fine bins using . to evolve for one microscopic step , as eq .( [ overall : eqn ] ) and ( [ flux : eqn3]-[flux : eqn2 ] ) show , it requires , and . because of that , we are not able to evolve the solutions for the two bins _ at the boundaries _ ( the two bins denoted in grey in the upper - left part of fig . [ fig : simulation ] ) .therefore , after the first microscopic time step , these two bins are discarded .for the same reason , after another microscopic time step dt , two more ( now outer ) fine bins need to be discarded , and so on until all the bins in the buffer regions are discarded .there are fine bins in each buffer , so we can run the microscopic simulator for microscopic time steps . during the -th step ( ) , we save the fluxes computed at the left ( resp .right ) edge of the -th tooth , ( resp. ) .the simulations in the leftmost and rightmost simulation units are performed slightly differently . as fig .[ fig : simulationbc ] shows , a buffer is only used on one side of the tooth ( the side to the interior of the spatial domain ) . on the other side , the outer boundary of the spatial domain , the boundary condition needs to be enforced .this is achieved using a ghost bin ( colored pink in fig .[ fig : simulationbc ] ) with density equal in magnitude but opposite in sign to the density of the leftmost ( resp .rightmost ) inner bin .( see fig .[ fig : simulationbc ] for an illustration of the left boundary ; the right boundary is treated in a similar way . 
) at the end of the step , we update the _ total _ fluxes at the edges of each tooth for the coarse time interval as follows then , based on eqn .( [ patch1:eqn ] - [ patch2:eqn ] ) , the coarse variables - the average densities inside the teeth and gaps - are updated .is now used to estimate the macroscopic time derivative and is used to project the macroscopic solutions forward to obtain .we compare the performance of the patch - dynamics scheme with two other schemes : one is the fine grid finite volume scheme i.e. , our microscopic simulator _ applied to the entire region _ ; the other is a coarse grid finite volume scheme , which mimics the usually unavailable macroscopic solver .all three schemes are first applied to the continuum approximation of the agent - based model .[ fig : evolution ] shows several snapshots of the solutions of these three schemes along the time - path towards their steady states .starting from the same gaussian - like initial conditions ( fig .[ fig : evolution]a ) we evolve the solutions of the three systems for a long time ( fig . [ fig : evolution]b shows one snapshot of the transient solutions ) until they have reached their steady states ( fig .[ fig : evolution]d ) .as we can see from the figure , the solution of the patch - dynamics scheme agrees well with the other two schemes along the entire process .the steady state solutions of the three schemes also agree well with the analytical steady state solution of the problem ( fig .[ fig : evolution]d ) . in fig .[ fig : evolution]c and [ fig : evolution]e we plot the differences between the fine finite - volume scheme and the other two methods , patch dynamics and coarse finite volume scheme .the solution to the fine finite - volume scheme is an approximation to the true solution , so these differences can be viewed as a first approximation to the errors in the other two methods . also , since the patch dynamics uses the fine finite - volume scheme as its microscopic solver , the differences between it and patch dynamics are an indication of the error introduced by the patch dynamics scheme . as we can see from fig .[ fig : evolution]c and [ fig : evolution]e , during the transient phase the two schemes give comparable errors , while as we further evolve the system to steady state the patch dynamics scheme gives smaller errors .this is probably due to the fact that our patch dynamics scheme utilizes more microscopic level information .we now turn to the computational savings of patch dynamics . for the patch - dynamics scheme, we divide the spatial domain into big cells , giving 42 teeth and 41 gaps .we run microscopic simulations " in only of the spatial domain ( since the width of each simulation unit is equal to only of each big cell ) . inside each simulation unitwe put equal - sized fine bins inside each tooth and each buffer to run the microscopic simulations ( mimicked here by the fine grid finite volume scheme ) .we use as the time step for the microscopic simulator ( is the width of the fine " bin ) . after running the microscopic simulator for microscopic steps ( dt ) , or equivalently one coarse time step , we project macroscopic variables 90 coarse steps forward in time .this means that the microscopic simulations are performed in only of the temporal domain . for the fine finite volume scheme ,we use fine bins , and use the same time step size dt as the one used in the microscopic simulator for the patch dynamics scheme . 
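the outer ( projective forward euler ) update used in these runs can be summarized in a few lines . in the sketch below the microscopic evolver is a toy exponential relaxation standing in for the gap - tooth step , and the step sizes and the number of projected steps are illustrative choices , not the values used in the experiments above .

```python
import numpy as np

def projective_step(u, micro_evolve, dt_coarse, n_project):
    """one outer step of coarse projective (forward euler) integration:
    run the fine-scale simulator over one coarse step, estimate the chord
    slope of the coarse variables, then extrapolate n_project further
    coarse steps without any fine-scale simulation."""
    u1 = micro_evolve(u, dt_coarse)            # restricted result of the fine run
    dudt = (u1 - u) / dt_coarse                # estimated macroscopic time derivative
    return u1 + n_project * dt_coarse * dudt   # forward-euler projection

# toy "microscopic" evolver: exact relaxation toward zero
micro = lambda u, dt: u * np.exp(-dt)

u, t = np.ones(5), 0.0
dt_c, n_proj = 0.01, 9
for _ in range(20):
    u = projective_step(u, micro, dt_c, n_proj)
    t += (1 + n_proj) * dt_c
print(f"t = {t:.2f}: projective {u[0]:.4f} vs exact {np.exp(-t):.4f}")
```

with this configuration only one out of every ten coarse steps involves the fine - scale simulator , which is the source of the computational savings discussed above .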
for the coarse finite volume scheme we use the same number of big cells ( ) as the one used for the patch - dynamics scheme .the time step size of this scheme is set to be equal to the effective time step size of the patch dynamics scheme ( recall that , with ) .overall , in this illustrative example the patch dynamics scheme runs the microscopic simulations in only of the spatial domain and of the temporal domain .the patch - dynamics scheme uses a microscopic integrator that is not specified ( in this paper we have used an agent - based model as well as a fine - scale finite volume method to illustrate the method and to permit better analysis ) .the purpose of the gap - tooth part of the scheme is to estimate time derivatives of the macroscopic tooth and gap densities ( more precisely , the chords of their time - dependent solutions over a small interval ) , and these are input to an unspecified outer integrator or other macroscopic numerical analysis procedure ( in this paper we have used projective forward euler integration , but any scheme could be used ) .the errors thus arise from three sources : the microscopic integrator itself , the patch - dynamics scheme , and the outer integrator ( or whatever other numerical procedure we apply to the chord estimates ) .the final error is a non - linear combination of the errors from all three sources that depends on details of the problem being solved and the details of each method , making it very difficult to isolate each part of the process .hence it seems advisable to consider each step from a _ backward error _ perspective . in the backward error view , the microscopic integrator integrates a _ perturbed _ equation exactly .the size of the perturbation is a function of the microscopic integrator and the equation being integrated , but for our purposes we can ignore it and assume that the microscopic integrator is _ exact _ ( it is for the perturbed equation ) .then we can examine the size of the errors introduced by the gap - tooth scheme .the equations for estimating the teeth and gap chord slopes ( for other than the center and end teeth ) are : and note that if we compute the fluxes and exactly , these equations are exact . if , for example , we ran the microscopic integrator over all space and time , these equations would give the exact values of the average teeth and gap densities over time .however , we introduced the teeth to avoid running the microscopic simulation over all of space , so that after a finite time the buffer regions fail to protect " the teeth from the lack of simulation information from the bulk of the gaps . at that pointwe have to stop the microscopic integration , and lift from the macroscopic information using the process described in to get a new microscopic description . under the assumption that the microscopic integrator is correct , the error introduced by the patch dynamics scheme is simply _ a change of initial values _ at the start of the integration block . 
in the next subsection we will examine this error analytically and then report on some computational estimates of the errors in our finance model in the following subsection . suppose that the computed microscopic solution at is . for each tooth the lifting process creates a quadratic polynomial approximation , , such that the average densities of and agree in the tooth and its neighboring gaps . the microscopic integration then continues starting from the values rather than from . the effect is to add $r(x) = u(x) - v(x)$ to the initial values for the next integration interval . we analyze each tooth separately . let us set the spatial origin , , to be the tooth center and assume that the taylor series for is $v(x) = v_0 + v_1 x + v_2 x^2/2 + v_3 x^3/6 + \cdots$ , i.e. , is the -th spatial derivative of . it will be convenient to define and ( so the left and right gaps are ] . because of the average density condition in the tooth and its surrounding gaps we have where is given in . let us define as $u(x) = (r_0 + v_0) + (r_1 + v_1)x + (r_2 + v_2)x^2/2$ . hence $r(x) = u(x) - v(x) = r_0 + r_1 x + r_2 x^2/2 - v_3 x^3/6 - v_4 x^4/24 - \cdots$ . solving for in terms of we get . thus , over the region comprising the tooth and its neighboring gaps the largest error that can be introduced is $e = \max_x \left| \left[ (d^2 + h^2)x - 4x^3 \right] v_3 / 24 \right| + o(h^4 + d^4)$ , which is . how does the error in eq . [ maxerr ] affect the solution of the integration problem ? unfortunately it depends on the problem , but if the problem is such that modest perturbations do not cause serious difficulties ( for example , if the system after semi - discretization in space satisfies a lipschitz condition ) the final error is equal to the sum of the errors , each multiplied by bounded amplifications . thus , if the number of steps in the outer integrator is , the global error will be bounded . if the average time step size is the global error is over any finite interval , and if the average time step size is the global error is . if the solution tends to a stable stationary state , then the final error may reflect only the most recent lifting errors , and if many of the components are stiff , this may be true over much of the integration interval . to numerically validate that the order of consistency for the patch dynamics scheme is two , we compare solutions of the patch - dynamics scheme ( using a fine - grid finite volume scheme as the `` inner '' microscopic simulator ) with the solutions of a reference scheme , which is also a fine - grid finite volume scheme ( but over the entire physical domain ) . in this reference scheme , the spatial domain is divided into fine bins with bin width . the microscopic time step is set to be . as fig . [ fig : error]a shows , starting from the same gaussian - like initial condition , we evolve both the patch - dynamics scheme and the reference scheme for some time ( , or equivalently time steps in the reference scheme ) , and then compute the differences between the solutions of these two schemes in order to construct the subsequent log - log plot for the consistency order analysis . for the patch - dynamics scheme , two distinct limits have been numerically simulated . the first limit corresponds to cases where the tooth size becomes very small . in practice , the tooth size in this limit is set to be equal to the fine bin size in the reference scheme ( ) . the size of the buffers is set to be the same as the one of the teeth .
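the scaling predicted by this analysis can be checked directly on a smooth test profile , independently of the full scheme : replace the profile by the average - matching quadratic over a tooth and its two gaps , measure the largest residual for a sequence of cell sizes , and fit the slope of the log - log plot ( the same kind of fit used for the full consistency study below ) . everything in the snippet is an illustrative assumption - the test profile , the tooth - to - cell ratio , and the resulting numbers are not those of table [ tab : numlimits ] .

```python
import numpy as np

def lifting_residual(v, big_cell):
    """max |u(x) - v(x)| over a tooth (width big_cell / 10) centred at 0 and its
    two neighbouring gaps, where u is the quadratic whose interval averages
    match those of v; the averages are computed with a composite midpoint rule."""
    h = big_cell / 10.0
    intervals = [(-big_cell + h / 2, -h / 2), (-h / 2, h / 2), (h / 2, big_cell - h / 2)]
    def avg(f, lo, hi, m=800):
        xm = lo + (hi - lo) * (np.arange(m) + 0.5) / m
        return np.mean(f(xm))
    a = np.array([[avg(lambda x, k=k: x**k, lo, hi) for k in range(3)] for lo, hi in intervals])
    b = np.array([avg(v, lo, hi) for lo, hi in intervals])
    c = np.linalg.solve(a, b)
    xs = np.linspace(intervals[0][0], intervals[-1][1], 4001)
    return np.max(np.abs(c[0] + c[1] * xs + c[2] * xs**2 - v(xs)))

profile = lambda x: np.sin(3.0 * x + 0.4)          # smooth test profile (assumed)
cells = np.array([0.2, 0.1, 0.05, 0.025])
res = np.array([lifting_residual(profile, s) for s in cells])
slope = np.polyfit(np.log(cells), np.log(res), 1)[0]
print("residuals:", res)
print(f"fitted order of the lifting residual: {slope:.2f}  (close to 3 is expected)")
```

halving the cell size should cut the residual by roughly a factor of eight , in line with the leading v_3 term in the expansion above .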
by fixing the size of the teeth and buffers , we studied cases with different sizes of big cells and therefore different sizes of gaps . recall that , as fig . [ fig : lifting_2 ] shows , each big cell is of size , which is equal to the sum of the sizes of a pair of tooth and gap . in the second limit , instead of fixing the teeth size to have a very small value , we keep the ratio between the sizes of the simulation unit and the big cell fixed , while varying both of them simultaneously . the second limit is closely related to the envisioned practical simulation cases , because in practice we need to use teeth with some finite size , in order to capture meaningful macroscopic properties . for each limit we compute the differences between the average densities inside the teeth and the gaps for the patch dynamics scheme on the one hand , and the average densities inside the corresponding regions of the reference scheme on the other . we define the errors for the teeth and gaps as the l2 norm of these differences : where , and denote the average densities in the teeth and gaps for the patch - dynamics scheme , while and denote the average densities inside the corresponding regions of the reference scheme . the detailed results of the different cases in these two limits are shown in table [ tab : numlimits ] . to estimate the order of consistency , we generated the log(error ) vs. log( ) plot and performed linear fitting . the fitted plots for the two limits are shown in fig . [ fig : error ] ( b ) and ( c ) . the slopes of the fitted lines , together with their respective confidence interval estimates , are shown in table [ tab : order ] . the numerical consistency order estimates are all close to ; the slight deviation is most likely due to the discontinuity at the origin and the boundary conditions . ( caption of table [ tab : numlimits ] : numerical results of the consistency order analysis for the two limits . n is the number of big cells in the patch - dynamics scheme . denotes the size for the big cells . the error terms are defined in eq . ( [ eqn : norm1 ] ) - ( [ eqn : norm2 ] ) . ) finally , we turn to the agent - based model ( discussed in section [ sec : agent ] ) for which the patch dynamics scheme has been designed . we divide the spatial domain into big cells , and set the ratio of the size of the simulation unit to the big cell ( ) to 0.2 . this means we are running the microscopic simulations in 20% of the spatial domain . our macroscopic observables ( our coarse variables ) are the average agent densities inside the teeth and gaps . inside each simulation unit the tooth and the two buffers are of equal size , and each of them has been subdivided into 10 fine bins . based on the lifting procedure discussed in section [ sec : lifting ] , microscopic agent densities are assigned to these fine bins . we then convert these density values into numbers of agents based on the total amount of agents available in the system ; in this case the total number of agents we use is . once the number of agents for each fine bin is assigned , we start the agent - based simulations . as illustrated in section [ sec : simulation ] , the coarse time step size - the time we can run the agent - based simulator before we stop for reconstruction - is closely related to the size of the buffers . this is because the buffers are gradually `` contaminated '' due to boundary artifacts ; we need to stop a little before these artifacts get propagated to the tooth . our coarse time step size is chosen to be .
within this time interval ,the chance for any agent to travel a distance of the size of a buffer or more is very low . in this wayagents outside each simulation unit will not affect the solution of the tooth ( although they do affect the solution of the buffers ) , and the tooth is therefore protected . along the simulation process , the fluxes of agents across the edges of the teeth are tracked . in section [ sec : simulation ] , the fluxes are updated based on equations of the artificial microscopic simulator ( a pde ) . in the agent - based case we track the fluxes by counting the number of agents going across those edges . due to the stochastic nature of the agent - based modeling , we also need to reduce the noise for the macroscopic variables .the agent - based simulations are therefore repeated inside each simulation unit for copies before the restriction procedure is applied . at the end of each coarse time step , we perform the restriction procedure to compute the averaged fluxes at the edges of the teeth . based on these fluxes we update the numbers of agents inside the teeth as well as the neighboring gaps , and compute the corresponding densities based on the numbers of agents .following the same procedure discussed in section [ sec : project ] , based on the macroscopic solutions ( the densities ) at the start and the end of each coarse time step , we estimate local time derivatives of the macroscopic variables and project them forward in time .what allows us to do this is the smoothness of these coarse variables in the time domain . in this case, we run the agent - based simulations over the duration of one coarse time step , and then jump " nine coarse time steps ahead . in other words, we are running the agent - based simulations over of the temporal domain .[ fig : agent ] shows four snapshots along a patch dynamics agent - based trajectory on its way to the eventual steady state .the solutions for the agent - based patch - dynamics simulations are plotted in blue ( teeth ) and red ( gaps ) respectively . to compare the performance ,the solutions of a fine - grid finite volume scheme for the continuum model are also plotted ( in blue curve)as a reference .the analytical steady state for the continuum model is shown in fig .[ fig : agent](d ) .overall , the solutions of the agent - based patch - dynamics simulations match well ( although not perfectly ) with the solutions of the continuum model .the deviations are most likely due to the approximations made to derive the continuum model from the agent - based one . 
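the flux restriction used here - counting agents that cross the tooth edges and averaging over independent replicas - can be sketched as follows . the per - step agent dynamics in the snippet is a plain random walk standing in for the market model , and the sizes , step counts and replica numbers are all illustrative assumptions .

```python
import numpy as np

def tooth_edge_fluxes(x0, left_edge, right_edge, dt, n_micro, n_copies, step_std, rng):
    """estimate time-averaged particle fluxes across the two tooth edges by
    counting signed crossings over one coarse step, averaged over replicas.
    a positive flux points in the +x direction; the agent dynamics is a
    placeholder random walk."""
    net_left = net_right = 0.0
    for _ in range(n_copies):
        x = x0.copy()
        for _ in range(n_micro):
            x_new = x + step_std * rng.standard_normal(x.size)
            net_left += np.sum((x < left_edge) & (x_new >= left_edge)) \
                        - np.sum((x >= left_edge) & (x_new < left_edge))
            net_right += np.sum((x < right_edge) & (x_new >= right_edge)) \
                         - np.sum((x >= right_edge) & (x_new < right_edge))
            x = x_new
    horizon = n_micro * dt
    return net_left / (n_copies * horizon), net_right / (n_copies * horizon)

rng = np.random.default_rng(1)
agents = rng.uniform(-0.05, 0.05, 2000)          # agents lifted into one simulation unit
fl, fr = tooth_edge_fluxes(agents, -0.01, 0.01, dt=1e-4, n_micro=50,
                           n_copies=20, step_std=5e-3, rng=rng)
print(f"averaged flux across left edge {fl:.1f}, right edge {fr:.1f} agents per unit time")
```

these averaged fluxes are then used in the conservative update of the teeth and gap densities , exactly as in the pde - based version of the scheme .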
in fig .[ fig : agent](d ) we can see that the steady state of the patch - dynamics agent based simulations matches the stationary state of the full scale agent - based simulations better than the analytical steady state of the continuum model matches it .we reiterate that the patch - dynamics agent - based simulations require the expensive " agent - based computations to be performed in only of the temporal domain and of the spatial domain compared to the full scale agent - based simulations .we described a patch dynamics scheme for conservative agent - based problems .this scheme approximates an unavailable effective equation over macroscopic time and length scales , when only a microscopic agent - based simulator is given .it only uses appropriately initialized simulations of the agent - based model over small subsets ( patches ) of the spatiotemporal domain , thus significantly reducing the computational cost .because this scheme mimics a finite volume scheme for the underlying effective equation , it is conservative by construction .since it is often not possible to impose macroscopically inspired boundary conditions on a microscopic agent - based simulation , we have used buffer regions around the patches , which temporarily shield the internal region of the patches from boundary artifacts .we have explored numerical characteristics of the scheme based on the continuum approximation ( a fokker - planck - type evolution equation ) of the agent - based model , and demonstrated its effectiveness for agent - based computations involving a financial market agent - based model . in this paperthe agent - based model was mainly used for illustration purposes , to demonstrate the effectiveness of the approach .several factors that can affect the performance of the patch dynamics scheme still need to be explored in more detail .one of the most interesting ones to study should be the effect of the noise ( in the teeth and the gaps ) on the estimation of relevant quantities , and through them to the overal performance of this scheme .a. armaou , c. i. siettos , and i. g. kevrekidis .time - steppers and coarse control of distributed microscopic processes . _ international journal of robust and nonlinear control _ , 140 ( 2):0 89111 , 2004 .a. bindal , m. g. ierapetritou , s. balakrishnan , a. armaou , a. g. makeev , and i. g. kevrekidis .equation - free , coarse - grained computational optimization using timesteppers . _ chemical engineering science _ , 610 ( 2):0 779793 , 2006 .e. blanchart , n. marilleau , j. l. chotte , a. drogoul , e. perrier , and c. cambier .sworm : an agent - based model to simulate the effect of earthworms on soil structure ._ european journal of soil science _, 600 ( 1):0 1321 , 2009 .r. erban , i. g. kevrekidis , and h. othmer . an equation - free computational approach for extracting population - level behavior from individual - based models of biological dispersal _physica d _ , 20( 215):0 1 - 24 , 2006 .g. gassner , f. lorcher , and c. munz .a contribution to the construction of diffusion fluxes for finite volume and discontinous galerkin schemes ._ journal of computational physics _ ,2240 ( 2):0 10491063 , 2007 .m. a. janssen , l. n. alessa , m. barton , s. bergin , and a. lee . towards a community framework for agent - based modelling ._ jasss - the journal of artificial societies and social simulation _ , 110 ( 2 ) , 2008 .i. g. kevrekidis , c. w. gear , j. m. hyman , p. g. kevrekidis , o. runborg , and c. 
theodoropoulos .equation - free coarse - grained multiscale computation : enabling microscopic simulators to perform system - level analysis . _ comm .math . sciences _ , 10 ( 4):0 715762 , 2003 . r. rico - martinez , c. w. gear , and i. g. kevrekidis .coarse projective kmc integration : forward / reverse initial and boundary value problems ._ journal of computational physics _ , 1960 ( 2):0 474489 , 2004 .c. i. siettos , i. g. kevrekidis , and n. kazantzis. an equation - free approach to nonlinear control : coarse feedback linearization with pole - placement ._ international journal of bifurcation and chaos _ , 160 ( 7):0 20292041 , 2006 . c. i. siettos , r. rico - martinez , and i. g. kevrekidis. a systems - based approach to multiscale computation : equation - free detection of coarse - grained bifurcations ._ computers & chemical engineering _ , 300 ( 10 - 12):0 16321642 , 2006 . c. theodoropoulos , y. h. qian , and i. g. kevrekidis .`` coarse '' stability and bifurcation analysis using time - steppers : a reaction - diffusion example ._ proceedings of the national academy of sciences of the united states of america _ , 970 ( 18):0 98409843 , 2000 .d. tykhonov , c. jonker , s. meijer , and t. verwaart .agent - based simulation of the trust and tracing game for supply chains and networks ._ jasss - the journal of artificial societies and social simulation _ , 110 ( 3 ) , 2008 .
|
in recent years , individual - based / agent - based modeling has been applied to study a wide range of applications , ranging from engineering problems to phenomena in sociology , economics and biology . simulating such agent - based models over extended spatiotemporal domains can be prohibitively expensive due to stochasticity and the presence of multiple scales . nevertheless , many agent - based problems exhibit smooth behavior in space and time on a macroscopic scale , suggesting that a useful coarse - grained continuum model could be obtained . for such problems , the equation - free framework can significantly reduce the computational cost . patch dynamics is an essential component of this framework . this scheme is designed to perform numerical simulations of an unavailable macroscopic equation on macroscopic time and length scales ; it uses appropriately initialized simulations of the fine - scale agent - based model in a number of small `` patches '' , which cover only a fraction of the spatiotemporal domain . in this work , we construct a finite - volume - inspired _ conservative _ patch dynamics scheme and apply it to a financial market agent - based model based on the work of omurtag and sirovich . we first apply our patch dynamics scheme to a continuum approximation of the agent - based model , to study its performance and analyze its accuracy . we then apply the scheme to the agent - based model itself . our computational experiments indicate that here , typically , the patch dynamics - based simulation requires only of the full agent - based simulation in space , and need occur over only of the temporal domain . , , , , patch dynamics scheme , equation - free framework , agent - based modeling , mimetic market models
|
in addition to being nonlinear , many components in a signal processing or communication system have a dynamic range constraint .for example , light emitting diodes ( leds ) are dynamic range constrained devices that appear in intensity modulation ( i m ) and direct detection ( dd ) based optical wireless communication ( owc ) systems . to drive an led, the input electric signal must be positive and exceed the turn - on voltage of the device .on the other hand , the signal is also limited by the saturation point or maximum permissible value of the led .thus , the dynamic range constraint can be modeled as two - sided clipping .the same situation may happen in other applications such as digital audio processing .both nonlinearity and clipping result in distortions which may cause system performance degradation .sndr is a commonly used metric to quantify the distortion that is uncorrelated to the signal - .previous work in this area mainly concentrated on a family of amplitude - limited nonlinearities that is common in radio frequency ( rf ) system design involving nonlinear components such as power amplifiers ( pas ) and mixers .different from the previous work , our study discusses the class of nonlinearities with a two - sided dynamic range constraint that is more commonly found in optical and acoustic systems .authors in - illustrated the impact of led nonlinearity and clipping noise in owc systems .some predistortion strategies were proposed in - . however , to the best of our knowledge , the optimal nonlinear mapping under the two - sided dynamic range constraint has not been studied .there are two major differences from the amplitude - limited nonlinearity .first , the signal will be subject to turn - on clipping and saturation clipping to meet the dynamic range constraint .second , dc biasing must be used to shift the signal to an appropriate level to minimize distortion . in this paper, we will show that the ideal linearizer that maximizes the sndr is a double - sided limiter that has an affine response .the parameters of the response can be calculated from the distribution of the input signal and the noise power . in additional to deriving the sndr - optimal predistorter, we also relate a lower bound on channel capacity to the sndr , further motivating the sndr considerations .finally , we employ another common distortion metric , dynamic signal - to - noise ratio ( dsnr ) to provide an upper bound on the double - sided clipping channel .the remainder of this paper is organized as follows : section ii introduces the system model for dynamic range limited nonlinearity and the corresponding sndr definition . in section iii, we derive the optimal nonlinear mapping that maximizes the sndr and illustrate some examples . in section iv, we related the sndr to the capacity of the nonlinear channel . finally , section vii concludes the paper .the detailed proofs of this paper are deferred to the appendices .let us consider a system modeled by where is a real - valued signal with mean and variance ; is a zero - mean additive noise process with variance ; is a memoryless nonlinear mapping with dynamic range constraint . for notational simplicity , we omit the -dependence in the memoryless system and replace and by and . then we have an equivalent system modeled by where is a memoryless nonlinear mapping with dynamic range constraint and is a zero - mean signal with variance . 
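as a concrete picture of the dynamic range constraint , the toy model below clips a drive signal between a turn - on level and a saturation level - the two - sided clipping described above - and adds noise at the output . the turn - on and saturation values , the dc bias , the signal swing and the noise level are arbitrary choices for the illustration .

```python
import numpy as np

def dynamic_range_limited(x, v_on, v_sat):
    """two-sided clipping: the device ignores anything below the turn-on
    level and saturates above the maximum permissible level."""
    return np.clip(x, v_on, v_sat)

rng = np.random.default_rng(4)
v_on, v_sat = 0.5, 3.5                          # assumed turn-on and saturation levels
x = 2.0 + 1.0 * rng.standard_normal(100_000)    # drive signal with assumed dc bias and swing
y = dynamic_range_limited(x, v_on, v_sat) + 0.05 * rng.standard_normal(x.size)

clipped = np.mean((x < v_on) | (x > v_sat))
print(f"fraction of samples hitting either clipping level: {clipped:.3f}")
```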
according to bussgang s theorem , the nonlinear mapping in ( [ mapping ] ) can be decomposed as where is the distortion caused by and is a constant , selected so that is uncorrelated with , i.e. , =0 ] is the variance of and =e[g^2(\gamma)]-e^2[g(\gamma)] ] is decreased while the variance of is increased .thus , the }{var[g(\gamma)]-e^2[\gamma g(\gamma ) ] + \sigma_v^2/a^2} ] , and if ; or , and if . within the class of satisfying , the following maximizes the sndr expression in ( [ sndrg ] ) : for , or for , where the and are found by solving the following transcendental equations : with and is the probability density function ( pdf ) of .the optimal sndr is found as where and see the proofs of _ lemma 1 _ , _ lemma 2 _ and _ lemma 3_. _ theorem 1 _ establishes that the nonlinearity in the shape of fig . [ fig5:subfig ] is optimal .predistortion is a well - known linearization strategy in many applications such as rf amplifier linearization . for the dynamic range constrained nonlinearities like led electrical - to - optical conversion, predistortion has been proposed to mitigate the nonlinear effects .specifically , given a system nonlinearity , it is possible to apply a predistortion mapping the overall response is linear . according to_ theorem 1 _ , it is best to make equal to the function given in ( [ solution1 ] ) or ( [ solution2 ] ) if is normalized with dynamic range constraint .using the analytical tools presented above , we can answer the questions regarding the selection of the gain factor , dc biasing and the clipping regions on both sides , or equivalently , the sets and . _ theorem 1 _ shows that these optimal parameters ( in terms of sndr ) depend on the pdf of and the dynamic signal - to - noise ratio .thus , our work can serve as a guideline for the system design . in the next subsection, examples are given to illustrate the calculations of the optimal factors and .in the last subsection , we learned that the optimal factors and can be calculated by solving two transcendental equations ( [ eta_star ] ) and ( [ beta_star ] ) .however , there may not be closed - form expressions for the solutions . additionally , solving ( [ eta_star ] ) and ( [ beta_star ] ) may result in multiple solutions ,but we only keep the real - valued ones since all the signals here are real - valued . here, let us take into account a specific class of input signals whose distributions exhibit axial symmetry , such as uniform distribution and gaussian distribution . 
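for illustration , the sndr of a double - sided limiter can be estimated by monte carlo and scanned over the gain . the snippet assumes a normalized setting ( output dynamic range taken as [ 0 , 1 ] , zero - mean unit - variance gaussian input , mid - range bias for the symmetric input , as discussed next ) and uses the ratio form implied by the bussgang decomposition , with the distortion power taken as var[g(gamma)] - e^2[gamma g(gamma)] ; the dsnr value is an arbitrary choice .

```python
import numpy as np

def sndr_limiter(eta, beta, noise_var, n=200_000, seed=2):
    """monte-carlo sndr of g(gamma) = clip(eta*gamma + beta, 0, 1) for a
    zero-mean, unit-variance gaussian input; noise_var plays the role of
    sigma_v^2 / a^2 in the normalized model (the inverse dsnr)."""
    gam = np.random.default_rng(seed).standard_normal(n)
    g = np.clip(eta * gam + beta, 0.0, 1.0)
    alpha = np.mean(gam * g)                  # bussgang gain e[gamma * g(gamma)]
    distortion = np.var(g) - alpha**2         # power of the uncorrelated distortion
    return alpha**2 / (distortion + noise_var)

noise_var = 10.0 ** (-20.0 / 10.0)            # dsnr of 20 db (arbitrary)
etas = np.linspace(0.05, 1.0, 40)
vals = np.array([sndr_limiter(e, 0.5, noise_var) for e in etas])
best = etas[np.argmax(vals)]
print(f"best gain about {best:.3f}, sndr about {10 * np.log10(vals.max()):.2f} db")
```

repeating the scan for different dsnr values traces out the dependence of the optimal gain on dsnr discussed around fig . [ eta ] .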
when the distribution of the input signal is axial symmetric , the optimal clipping regions and are also symmetric .thus , , and .then the factors and can be calculated : we see that the dc biasing will be the midpoint of the dynamic range .when the gain factor , it can be further expressed as : when the gain factor , it can expressed as : there is still no closed - form expression for gain factor .next , as examples , let us consider the calculations for uniform distribution and gaussian distribution specifically .when the original signal is uniformly distributed in the interval ] with the pdf for the case with , it is straightforward to calculate substituting ( [ c1uu ] ) and ( [ c0uu ] ) into ( [ etastarequ ] ) , we obtain equation ( [ quadratic ] ) can be rewritten as a quadratic equation thus , we can obtain a closed - form solution for the optimal : we know that there should be two solutions for equation ( [ etastarequex1 ] ) .in fact , the other solution is , which means that both and are 0 .thus , the solution given by ( [ etasolex1 ] ) is the unique optimal selection for the gain factor .if is desired , the optimal solution is when the original signal is gaussian distributed , then the normalized signal has a standard gaussian distribution with the pdf for the case with , we have where is the error function with the definition substituting ( [ c1ug ] ) and ( [ c0ug ] ) into ( [ etastarequ ] ) and simplifying , we obtain here the optimal does not have a closed - form expression but can be easily calculated numerically .we can draw the similar conclusion for the case with .[ eta ] shows the optimal as a function of dsnr for the above examples . as a function of dsnr for _ example 1 _ and _ example 2 _ with .,width=336 ]next , we illustrate the sndr of two different nonlinear mappings . is the optimal solution chosen by _1_. is a fixed mapping given below : the corresponding sndr curves are shown in fig .this example illustrates that the nonlinearity yields a higher sndr as compared to the other nonlinearity , as expected according to _ theorem 1_. with different nonlinear mappings.,width=336 ]the capacity is given by where is the mutual information between and . to obtain the capacity of the dynamic range constrained channel ,we need to solve the following optimization problem : for a specific zero - mean noise with variance .moreover , it can be simplified as : which means that we need to find an input distribution in the interval ] .however , in most cases , we are most interested in the achievable data rate given a nonlinear channel mapping with any input and any noise . similar to the work in , we obtain a lower bound on the information rate : + 1}{\frac{a^2}{\sigma_v^2 } var[g(\gamma)]+1-\frac{a^2}{\sigma_v^2}e^2[\gamma g(\gamma)]}\right ) \notag \\ & = & h(x)-\frac{1}{2}\log(2\pi e \sigma_x^2)+\frac{1}{2}\log(1+\mathrm{sndr})\end{aligned}\ ] ] by referring to ( [ sndrg ] ) .since for any input distribution , by setting to be the pdf of a zero - mean gaussian r.v ., we obtain with the sndr evalutated for a gaussian . in this subsection, we find an upper bound for the capacity .similar to , supposing is the pdf of that maximizes the capacity , i.e. 
, .\end{aligned}\ ] ] we can write the capacity as next , we bound the entropy with the entropy of a gaussian , yielding }{\sigma_v^2}\right)+\frac{1}{2}\log(2\pi e \sigma_v^2)-h(v ) \notag\\ & \leq & \frac{1}{2 } \log \left(1+\frac{a^2}{4\sigma_v^2}\right)+\frac{1}{2}\log(2\pi e \sigma_v^2)-h(v)\end{aligned}\ ] ] where \leq \frac{1}{4} ] .specifically , if the noise is gaussian , we have the upper bound : since and \leq \frac{1}{4}a^2 $ ] , we must have is the defined dsnr which is the same as that in . since sndr is determined by dsnr and the distribution of signal , we plot the bounds as functions of dsnr for gaussian distributed signal , which is shown in fig . [ bound ] .we also compare the lower bounds given by two different nonlinear mappings and , which are introduced in the last section .this example illustrates that the nonlinearity chosen according to _ theorem 1 _ yields a tighter lower bound as compared to the other nonlinearity .in addition , we can see that the capacity of gaussian channel as determined by smith is between the lower bounds and upper bound that we have .the main contribution of this paper is the sndr optimization within the family of dynamic range constrained memoryless nonlinearities .we showed that , under the dynamic range constraint , the optimal nonlinear mapping that maximizes the sndr is a double - sided limiter with a particular gain and a particular bias level , which are determined based on the distribution of the input signal and the dsnr .in addition , we found that provides a lower bound on the nonlinear channel capacity , and serves as the upper bound .the results of this paper can be applied for optimal linearization of nonlinear components and efficient transmission of signals with double - sided clipping .since we are solving the optimization problem w.r.t . a function ,the functional derivative is introduced here . by using the dirac delta function as a test function, the notion of functional derivative is defined as : }}{\delta{g(\gamma_0 ) } } = \lim_{\epsilon \rightarrow 0 } { \frac{f[g(\gamma)+\epsilon \delta(\gamma-\gamma_0)]-f[g(\gamma)]}{\epsilon}}.\ ] ] just as the variable derivative operation , the linear property , product rule and chain rule hold for functional derivative .in addition , from ( [ functionalderivative ] ) , we infer that to maximize the sndr w.r.t , we need we infer that & = e[i_l(\gamma)g(\gamma ) ] + e[i_s(\gamma)g(\gamma ) ] + e[i_u(\gamma)g(\gamma)]\\ & = e[i_s(\gamma ) g(\gamma ) ] + e[i_u(\gamma)]\\ & = e[i_s(\gamma ) g(\gamma ) ] + c_0^u . \end{split}\ ] ] similarly , & = & e[i_s(\gamma)\gamma g(\gamma ) ] + c_1^u \label{egammag } , \\ e [ g^2(\gamma ) ] & = & e[i_s(\gamma)g^2(\gamma ) ] + c_0^u \label{egs}.\end{aligned}\ ] ] and are defined as in ( [ cdenotation ] ) .it follows easily that and substituting ( [ eg ] ) , ( [ egammag ] ) and ( [ egs ] ) into ( [ sndrg ] ) }{d[g(\gamma)]}\ ] ] where = e[i_s(\gamma ) \gamma g(\gamma ) ] + c_1^u,\ ] ] = q^2[g(\gamma)],\ ] ] = e[i_s(\gamma)g(\gamma ) ] + c_0^u,\ ] ] & = e[i_s(\gamma ) g^2(\gamma ) ] + c_0^u + \frac{\sigma_v^2}{a^2 } \\ & - q^2[g(\gamma)]- y^2[g(\gamma ) ] . \end{split}\ ] ] denote by the pdf of the random variable . 
then = \int i_s(\gamma ) g^2(\gamma)p(\gamma)d\gamma.\ ] ] taking the functional derivative w.r.t , we obtain }{\delta g(\gamma_0 ) } \notag \\ & = & \int i_s(\gamma ) 2g(\gamma)\delta(\gamma-\gamma_0)p(\gamma)d\gamma\\ & = & 2g(\gamma_0)p(\gamma_0).\end{aligned}\ ] ] similarly , }{\delta g(\gamma_0 ) } = \gamma_0p(\gamma_0),\ ] ] }{\delta g(\gamma_0 ) } = p(\gamma_0).\ ] ] therefore , }{\delta g(\gamma_0 ) } = 2q[g(\gamma)]\gamma_0p(\gamma_0),\ ] ] }{\delta g(\gamma_0 ) } = 2g(\gamma_0)p(\gamma_0 ) - 2q[g(\gamma)]\gamma_0p(\gamma_0 ) - 2y[g(\gamma)]p(\gamma_0).\ ] ] condition ( [ eq : condition ] ) requires }{\delta g(\gamma_0)}d[g(\gamma ) ] = \frac{\delta d[g(\gamma)]}{\delta g(\gamma_0)}n[g(\gamma)].\ ] ] substituting and simplifying , we obtain where + c_1^u}{e[i_s(\gamma ) g^2(\gamma ) ] + c_0^u - \beta^2 + \sigma_v^2/a^2},\ ] ] = e[i_s(\gamma ) g(\gamma ) ] + c_0^u\ ] ] as the solution for ( [ eq : condition ] ) . since ( [ ggamma0 ] ) holds , we must have substituting ( [ ggamma ] ) into ( [ eta0l1 ] ) and ( [ beta0l1 ] ) , we obtain where , and are given by ( [ cdenotation ] ) . solving for and ,we further simplify them to ( [ etan ] ) and ( [ beta ] ) . in summary , under the dynamic range constraint, the optimal that maximizes the sndr is given by ( [ ggamma ] ) , where and are given by ( [ etan ] ) and ( [ beta ] ) .comparing ( [ sdefinition ] ) with ( [ ggamma ] ) , we infer that on .therefore , the set must be a subset of if or if .the objective here is to determine the optimal such that the sndr is maximized . since for , we infer that & = & c_0^u + c_1^s/\eta + \beta c_0^s,\\ e [ g^2(\gamma ) ] & = & c_0^u + c_2^s/\eta^2 + 2\beta c_1^s/\eta + \beta^2c_0^s,\\ e[ \gamma g^2(\gamma ) ] & = & c_1^u + c_2^s/\eta + c_1^s\beta.\end{aligned}\ ] ] next , we would like to compare and to help us establish the optimal maximizing the sndr .however , it is a challenge to make the comparison directly since there are too many terms in the objective expression . here , we utilize a two - step comparison . additionally , _case 1 _ and _ case 2 _ also imply that the sndr can be increased if can be enlarged by occupying the subsets of and .thus , _ lemma 2 _ holds and the optimal is implied to be if or if .h. ochiai and h. imai , performance of the deliberate clipping with adaptive symbol selection for strictly band - limited ofdm systems , " _ ieee journal on select areas in commmunications _ ,2270 - 2277 , nov . 2000 .j. rinne and m. renfors , the behavior of orthogonal frequency division multiplexing signals in an amplitude limiting channel , " in proc . _ ieee international conference on communications _ , vol .381 - 385 , may 1994 .h. elgala , r. mesleh and h. haas , a study of led nonlinearity effects on optical wireless transmission using ofdm , " in proc . _ ifip international conference on wireless and optical communications networks _ , pp . 1 - 5 , apr .j vucic , c kottke , s nerreter , kd langer and j. w. walewski 513 mbit / s visible light communications link based on dmt - modulation of a white led , " _ journal of lightwave technology _ , vol .24 , pp . 3512 - 3518 , dec .z. yu , r. j. baxley and g. t. zhou , evm and achievable data rate analysis of clipped ofdm signals in visible light communication , " _ eurasip journal on wireless communications and networking _ , vol .2012 , no .321 , pp . 1 - 16 , oct .2012 .h. elgala , r. mesleh and h. 
haas , non - linearity effects and predistortion in optical ofdm wireless transmission using leds , " _ international journal of ultra wideband communications and systems _ , vol . 1 , no .143 - 150 , nov . 2009 .a. m. khalid , g. cossu , r. corsini , p. choudhury and e. ciaramella , 1-gb / s transmission over a phosphorescent white led by using rate - adaptive discrete multitone modulation , " _ ieee photonics journal _ , vol .1465 - 1473 , oct .g. stepniak , j. siuzdak and p. zwierko , compensation of a vlc phosphorescent white led nonlinearity by means of volterra dfe , " _ ieee photonics technology letters _25 , no . 16 , pp .1597 - 1600 , aug .
|
many components used in signal processing and communication applications , such as power amplifiers and analog - to - digital converters , are nonlinear and have a finite dynamic range . the nonlinearity associated with these devices distorts the input , which can degrade the overall system performance . signal - to - noise - and - distortion ratio ( sndr ) is a common metric to quantify the performance degradation . one way to mitigate nonlinear distortions is by maximizing the sndr . in this paper , we analyze how to maximize the sndr of the nonlinearities in optical wireless communication ( owc ) systems . specifically , we answer the question of how to optimally predistort a double - sided memory - less nonlinearity that has both a `` turn - on '' value and a maximum `` saturation '' value . we show that the sndr - maximizing response given the constraints is a double - sided limiter with a certain linear gain and a certain bias value . both the gain and the bias are functions of the probability density function ( pdf ) of the input signal and the noise power . we also find a lower bound of the nonlinear system capacity , which is given by the sdnr and an upper bound determined by dynamic signal - to - noise ratio ( dsnr ) . an application of the results herein is to design predistortion linearization of nonlinear devices like light emitting diodes ( leds ) . nonlinear distortion , dynamic range , clipping , predistortion , optical wireless communication .
|
relationships between information theory and statistical physics have been extensively recognized over the last few decades , and they are drawn from many different aspects .we mention here only a few of them .one such aspect is characterized by identifying structures of optimization problems pertaining to certain information theoretic settings as being analogous to parallel structures that arise in statistical physics , and then borrowing statistical mechanical insights , as well as powerful analysis techniques ( like the replica method ) from statistical physics to the dual information theoretic setting of interest . a very partial list of works along this line includes , , , , , , , ( and references therein ) , , , , , , , , , and .another aspect pertains to the philosophy and the application of the maximum entropy principle , which emerged in statistical mechanics in the nineteenth century and has been advocated during the previous century in a wide variety of more general contexts , by jaynes ,, , and by shore and johnson , as a general guiding principle to problems in information theory ( see , e.g. , ( * ? ? ?11 ) and references therein ) and other areas , such as signal processing , in particular , speech coding ( see , e.g. , ) spectrum estimation ( see , e.g. , ) , and others .yet another aspect is related to ideas and theories that underly the notion of ` trading ' between information bits and energy , or heat . in particular ,landauer s erasure principle is argued to provide a powerful link between information theory and physics and to suggest a physical theory of information ( comprehensive overviews are included in , e.g. , and ) . according to landauer s principle, the erasure of every bit of information increases the thermodynamic entropy of the world by , where is boltzmann s constant , and so , information is actually physical . finally , to shift gears more to the direction of this paper , we should mention the aspect of the interface between statistical physics and large deviations theory , a line of research advocated most prominently by ellis , , and developed also by oono , mcallester , and others .the main theme here evolves around the identification of chernoff bounds and more general large deviations rate functions with free energies ( along with their related partition functions ) , thermodynamical entropies , and the underlying maximum entropy / equilibrium principle associated with them . 
in particular , ellis book is devoted largely to the application of large deviations theory to the statistical physics pertaining to models of ferromagnetic spin arrays , like ising spin glasses and others , in order to explore phase transitions phenomena of spontaneous magnetization ( see also ) .this paper , which is mostly expository in character , lies in the intersection of information theory , large deviations theory , and statistical physics .in particular , we establish a simple identity between two quantities as they can both be interpreted as the rate function of a certain large deviations event that involves multiple distributions of sets of independent random variables ( as opposed to the usual , single set of i.i.d .random variables ) .the analysis of this large deviations event is of a general form that is frequently encountered in numerous applications in information theory ( cf .section 4 ) .its informal description is as follows : let be an arbitrary ( deterministic ) sequence whose components take on values in a finite set , and let be a sequence of random variables where each component is generated independently according to a distribution , . for a given function and a constant , we are interested in the large deviations analysis ( chernoff bound ) of the probability of the event assuming that the relative frequencies of the various symbols in stabilize as grows without bound , and assuming that is sufficiently small to make this a rare event for large . there are ( at least ) two ways to drive a chernoff bound on the probability of this event .the first is to treat the entire sequence of rv s , as a whole , and the second is to partition it according to the various symbols , i.e. , to consider the separate large deviations events of the partial sums , , , for all possible allocations of the total ` budget ' among the various .these two approaches lead to two ( seemingly ) different expressions of chernoff bounds , but since they are both exponentially tight , they must agree .as will be described and discussed in section 2 , the identity between these two chernoff bounds has a natural interpretation in statistical physics : it is viewed as a situation of thermal equilibrium ( maximum entropy ) in a system that consists of several subsystems ( which can be of different kinds ) , each of them with many particles . as will be shown in section 4 , the above described problem of large deviations analysis of the event ( [ event ] ) is encountered in many applications in information theory , such as rate distortion coding , channel capacity , hypothesis testing ( signal detection , in particular ) , and others .the above mentioned statistical mechanical interpretation then applies to all of them .accordingly , section 4 is devoted to expository descriptions of each of these applications , along with the underlying physics that is inspired by the proposed thermal equilibrium interpretation .the reader is assumed to have very elementary background in statistical physics .the remaining part of this paper is organized as follows . in section 2, we establish some notation conventions . 
in section 3 ,we assert and prove our main result , which is the identity between the above described chernoff bounds .finally , in section 4 , we explore the application examples .throughout this paper , scalar random variables ( rv s ) will be denoted by the capital letters , like ,, , and , their sample values will be denoted by the respective lower case letters , and their alphabets will be denoted by the respective calligraphic letters .a similar convention will apply to random vectors and their sample values , which will be denoted with same symbols superscripted by the dimension .thus , for example , will denote a random -vector , and is a specific vector value in , the -th cartesian power of .the notations and , where and are integers and , will designate segments and , respectively , where for , the subscript will be omitted ( as above ) .sequences without specifying indices are denoted by .sources and channels will be denoted generically by the letter or .specific letter probabilities corresponding to a source will be denoted by the corresponding lower case letter , e.g. , is the probability of a letter .a similar convention will be applied to a channel and the corresponding transition probabilities , e.g. , , , .the cardinality of a finite set will be denoted by .information theoretic quantities like entropies , and mutual informations will be denoted following the usual conventions of the information theory literature .notation pertaining to statistical physics will also follow , wherever possible , the customary conventions .i.e. , will denote boltzmann s constant ( joules per kelvin degree ) , absolute temperature ( in kelvin degrees ) , the inverse temperature ( in units of or ) , energy , the letter will be used to denote partition functions , etc .let and be finite is finite , is made mostly for the sake of convenience and simplicity .most of our results extend straightforwardly to the case of a continuous alphabet .the extension to a continuous alphabet is somewhat more subtle , however . 
]sets and let be a given function .let be a probability mass function on and let be a matrix of conditional probabilities from to .next , let us define for each , the partition function : and for a given in the range let .\ ] ] further , for a given constant in the range let .\ ] ] let denote the set of all vectors , where each component satisfies ( [ range ] ) , and where .our main result , in this section , is the following : the expression on the right hand side is , of course , more convenient to work with since it involves minimization w.r.t .one parameter only , as opposed to the left hand side , where there is a minimization over for every , as well as a maximization over the vector .while the proof of theorem 1 below is fairly short , in the appendix ( subsection a.1 ) , we outline an alternative proof which , although somewhat longer , provides some additional insight , we believe .as described briefly in the introduction , it is based on two different approaches to the analysis of the rate function , , pertaining to the probability of the event : where are rv s taking values in and drawn according to , and is a given deterministic vector whose components are in , with each appearing times ( ) , and the related relative frequency , is exactly .it should be noted that the proof in the appendix pertains to a slightly different definition of the set , where the individual upper bound to each is enlarged to .thus , is extended to a larger set , which will be denoted by in the appendix .but the maximum over is always attained within the original set ( as is actually shown in the proof below ) . _ proof ._ here we prove the identity of theorem 1 directly , without using large deviations analysis and chernoff bounds .we first prove that for every , we have and then , of course , as well .this follows from the following chain of inequalities : \nonumber\\ & = & \sum_{v\in{{\cal v}}}\min_{\beta\ge 0}[\beta p(v)e_v+p(v)\ln z_v(\beta)]\nonumber\\ & \le&\min_{\beta\ge 0}\left[\beta\sum_{v\in{{\cal v}}}p(v)e_v+ \sum_{v\in{{\cal v}}}p(v)\ln z_v(\beta)\right]\nonumber\\ & \le&\min_{\beta\ge 0}\left[\beta e+\sum_{v\in{{\cal v}}}p(v)\ln z_v(\beta)\right]\nonumber\\ & = & \bar{s}(e),\end{aligned}\ ] ] where in the second inequality we used the postulate that . in the other direction ,let be the achiever of , i.e. , is the solution to the equation : {\beta=\beta^*}.\ ] ] for each , let ] .obviously , the vector lies in , and {\beta=\beta^*}\nonumber\\ & = & -\left[\frac{\partial}{\partial\beta}\sum_vp(v ) \ln z_v(\beta)\right]_{\beta=\beta^*}\nonumber\\ & = & e.\end{aligned}\ ] ] thus , \nonumber\\ & = & \beta^*\sum_{v\in{{\cal v}}}p(v)e_v^*+\sum_vp(v)\ln z_v(\beta^*)\nonumber\\ & = & \beta^*e+\sum_vp(v)\ln z_v(\beta^*)\nonumber\\ & = & \bar{s}(e).\end{aligned}\ ] ] this completes the proof of theorem 1 . the function is similar to the well known partition function pertaining to the boltzmann distribution w.r.t . the hamiltonian ( energy function ) , except that each exponential term is weighted by , as opposed to the usual form , which is just . 
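To make Theorem 1, and the equal-temperature property used in its proof, concrete, the following is a minimal numerical sketch. It is our own construction and not part of the original development: the alphabets, the weights p(v), the conditional distributions q(u|v), the function f, and the admissible range of each per-subsystem energy (taken here as [min_u f(u,v), sum_u q(u|v) f(u,v)], so that the minimizing beta is nonnegative) are all illustrative choices. The sketch evaluates the right-hand side of the identity by a single one-dimensional minimization over beta, approximates the left-hand side by a grid search over energy allocations, and then checks that at the optimal allocation every subsystem attains its minimum at the same beta, i.e. at a common temperature.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# toy setting: |V| = 2 "subsystems", |U| = 4 symbols per subsystem
nV, nU = 2, 4
p = np.array([0.4, 0.6])                     # relative frequencies p(v)
q = rng.dirichlet(np.ones(nU), size=nV)      # conditional distributions q(u|v)
f = rng.uniform(0.0, 1.0, size=(nV, nU))     # the function f(u, v) (per-particle "energy")

def log_Z(v, beta):
    # weighted partition function: Z_v(beta) = sum_u q(u|v) exp(-beta f(u, v))
    return np.log(np.dot(q[v], np.exp(-beta * f[v])))

def S_v(v, E_v):
    # per-subsystem entropy: S_v(E_v) = min_{beta >= 0} [beta E_v + ln Z_v(beta)]
    return minimize_scalar(lambda b: b * E_v + log_Z(v, b),
                           bounds=(0.0, 500.0), method="bounded").fun

def pooled(beta, E):
    # objective of the right-hand side: beta E + sum_v p(v) ln Z_v(beta)
    return beta * E + sum(p[v] * log_Z(v, beta) for v in range(nV))

E_lo = np.array([f[v].min() for v in range(nV)])           # lower end of the range of E_v
E_hi = np.array([np.dot(q[v], f[v]) for v in range(nV)])   # mean of f(., v) under q(.|v)
E = 0.5 * (p @ E_lo + p @ E_hi)                            # a total energy below the mean

# right-hand side of the identity: a single minimization over one beta
res = minimize_scalar(lambda b: pooled(b, E), bounds=(0.0, 500.0), method="bounded")
S_bar, beta_star = res.fun, res.x

# left-hand side: maximize sum_v p(v) S_v(E_v) over allocations with sum_v p(v) E_v = E
lhs = -np.inf
for E0 in np.linspace(E_lo[0], E_hi[0], 600):
    E1 = (E - p[0] * E0) / p[1]
    if E_lo[1] <= E1 <= E_hi[1]:
        lhs = max(lhs, p[0] * S_v(0, E0) + p[1] * S_v(1, E1))
print(S_bar, lhs)            # the two sides agree up to the grid resolution

# equal-temperature check: E_v^* = -d ln Z_v / d beta at beta^* (the tilted mean of f)
def E_star(v, beta):
    w = q[v] * np.exp(-beta * f[v])
    return np.dot(w, f[v]) / w.sum()

Ev_star = np.array([E_star(v, beta_star) for v in range(nV)])
print(p @ Ev_star, E)        # the optimal allocation conserves the total energy
for v in range(nV):
    bv = minimize_scalar(lambda b: b * Ev_star[v] + log_Z(v, b),
                         bounds=(0.0, 500.0), method="bounded").x
    print(v, bv, beta_star)  # every subsystem attains its minimum at the same beta
```

Up to the grid resolution, the first two printed values agree, and the per-subsystem minimizers printed in the final loop all coincide with beta*, which is exactly the thermal-equilibrium reading of the identity discussed below.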
before describing the statistical mechanical interpretation of eq .( [ identity ] ) , we should note that defined in ( [ zvb ] ) can easily be related to the ordinary partition function , without weighting , as follows : suppose that are rational and hence can be represented as ratios of two positive integers , , where is common to all ( and ) .now , imagine that every value of actually represents a ` quantization ' of a more refined microstate ( call it a `` nanostate '' ) , , so that , where is a many to one function , for which the inverse image of every consists of many values of .suppose further that the hamiltonian depends on only via , i.e. , .then , the ( ordinary ) partition function related to is given by thus , the weighted partition function is , within a constant factor , the same as the ordinary partition function of .this factor cancels out when probabilities are calculated since it appears both in the numerator and the denominator .moreover , it affects neither the minimizing that achieves or , nor the derivatives of the log partition function .we now move on to our interpretation of eq .( [ identity ] ) from the viewpoint of elementary statistical physics : consider a physical system which consists of subsystems of particles .the total number of particles in the system is and the total amount of energy is joules . for each ,the subsystem indexed by ( subsystem , for short ) contains particles , each of which can lie in any microstate within a finite set of microstates ( or an underlying nanostate in a set ) , and it is characterized by an additive hamiltonian .the total amount of energy possessed by subsystem is given by joules .as long as the subsystems are in thermal isolation from each other , each one of them may have its own temperature , where is the achiever of the normalized ( per particle ) entropy associated with an average per particle energy , i.e. , .\ ] ] the above mentioned rate function of is then given by the negative maximum total per particle entropy , , where the maximum is over all energy allocations such that the total energy is conserved , i.e. , .this maximum is attained by the expression of the r.h.s .( [ identity ] ) , where there is _ only one _ temperature parameter , and hence it corresponds to _ thermal equilibrium_. in other words , the whole system then lies in the same temperature , where is the minimizer of .thus , the energy allocation among the various subsystems in equilibrium is such that their temperatures are the same ( cf .the above proof of theorem 1 ) .theorem 1 is then interpreted as expressing the second law of thermodynamics . at this point ,a few comments are in order : 1 .it should be pointed out that in the above physical interpretation , we have implicitly assumed that the particles within each subsystem are distinguishable , and so the partition function corresponding to a set of particles is given by the partition function of a single particle raised to the power of , without dividing by .this differs then from the indistinguishable case only by a constant factor ( as long as is indeed constant ) and hence the difference between the distinguishable and the indistinguishable cases is not essential for the most part of our discussion .as mentioned in the above paragraph , our conclusion is that . 
at first glance, this may seem peculiar as it appears that may be negative .however , one should keep in mind that is induced by a ( convex ) combination of weighted partition functions , rather than ordinary partition functions , like . referring to eq .( [ wpf ] ) , the ordinary notion of entropy as the normalized log number of ( nano)states with normalized energy , is given by \nonumber\\ & = & \min_{\beta\ge 0}\left[\beta e+\sum_vp(v)\ln z_v(\beta)\right]+\ln m\nonumber\\ & = & \bar{s}(e)+\ln m.\end{aligned}\ ] ] thus , which is always non negative .3 . the identity ( [ identity ] )can be thought of as a generalized concavity property of the entropy : had all the entropy functions been the same , this would have been the usual concavity property .what makes this equality less trivial and more interesting is that it continues to hold even when , for the various , are different from each other .4 . on the more technical level , since this paper draws analogies with physics , we should say a few words about physical units .the products , , , etc ., should all be pure numbers , of course . since , where is boltzmann s constant and is absolute temperature , and since has units of energy ( joules or ergs , etc . ), it is understood that , , and the like , should all have units of energy as well . in the applications described below ,whenever this is not the case , i.e. , the latter quantities are pure numbers rather than physical energies , we will sometimes reparametrize by , where is an arbitrary constant possessing units of energy ( e.g. , joule or erg ) , and we absorb in the hamiltonian , i.e. , redefine . thus , in this case , , where is the now the energy in units of , is redefined as .\ ] ] this kind of modification is not essential , but it may help to avoid confusion about units when the picture is viewed from the aspects of physics .equipped with the main result of the previous section and its statistical mechanical interpretation , we next introduce a few applications that fall within the framework considered . in all these applications , there is an underlying large deviations event of the type of eq .( [ ld ] ) , whose rate function is of interest .the above described viewpoint of statistical physics is then relevant in all these applications .let designate the vector of letter probabilities associated with a given discrete memoryless source ( dms ) , and for a given reproduction alphabet , let denote a single letter distortion measure .let denote the rate distortion function of the dms .one useful way to think of the rate distortion function is inspired by the classical random coding argument : let be drawn i.i.d . from the optimum random coding distribution and consider the event , where is a given source vector , typical to , i.e. , the composition of consists of occurrences of each .this is exactly an event of the type ( [ ld ] ) with , , , independently of , , and . i.e. , the hamiltonian is given by and the total energy is in units of .suppose that this probability is of the exponential order of .then , it takes about } ] and zero elsewhere .this random coding distribution is suboptimal , but it corresponds , and hence is well motivated , by many results in high resolution quantization using uniform quantizers ( see , e.g. , and references therein ) . 
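Before turning to this continuous uniform-quantizer example, it is instructive to evaluate the exponent of the event sum_i d(x_i, Y_i) <= nD in the simplest discrete setting described above. The sketch below is our own illustration, not taken from the original text: it uses a Bernoulli(p) source with Hamming distortion and the reproduction distribution q(1) = (p - D)/(1 - 2D), the standard optimal choice for this pair, evaluates the single-parameter Chernoff exponent min_{beta >= 0}[beta D + sum_x p(x) ln sum_y q(y) exp(-beta d(x,y))], and compares its negation with the closed-form rate-distortion function R(D) = h(p) - h(D) (in nats).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def h(x):
    # binary entropy in nats
    return -x * np.log(x) - (1 - x) * np.log(1 - x)

p, D = 0.3, 0.1                           # Bernoulli(p) source, Hamming distortion level D
P = np.array([1 - p, p])                  # source distribution
q1 = (p - D) / (1 - 2 * D)                # reproduction probability of the symbol 1
Q = np.array([1 - q1, q1])                # output distribution used for random coding
d = np.array([[0.0, 1.0],
              [1.0, 0.0]])                # Hamming distortion d(x, y)

def objective(beta):
    # beta * D + sum_x P(x) ln sum_y Q(y) exp(-beta d(x, y))
    return beta * D + sum(P[x] * np.log(np.dot(Q, np.exp(-beta * d[x]))) for x in range(2))

res = minimize_scalar(objective, bounds=(0.0, 100.0), method="bounded")
print(-res.fun)      # exponent of P( sum_i d(x_i, Y_i) <= nD ), with the sign reversed
print(h(p) - h(D))   # closed-form rate-distortion function of this source/distortion pair
```

The two printed numbers agree, in line with the statement above that, under the optimal random coding distribution, the probability of the event is of the exponential order of exp(-nR(D)).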
for every , the partition function is given by when is very small , is very large , and then the finite interval integral pertaining to can be approximated by an infinite one , provided that the support of is included is negligibly small . ] in the interval ] to a threshold , and decides in favor of if this threshold is exceeded , otherwise , it decides in favor of .the false alarm probability then is the probability of the event \le ne_0\ ] ] under .this , again , fits our scenario with the substitutions , , , , independently of , ] and . note that when is the uniform distribution over , the missed - detection event can also be interpreted as the probability of excess code length of an arithmetic lossless source code w.r.t . .another situation of hypothesis testing that is related to our study in a similar manner is one where the signal is always underlying the observations , but the decision to be made is associated with two hypotheses regarding the noise level , or the temperature . in this case , there is a certain hamiltonian for each , and we assume a boltzmann gibbs distribution parametrized by the temperature where note that here is an ordinary partition function , without weighting ( cf .( [ wpf ] ) ) .we shall also denote .\ ] ] as is induced by a convex combination of non - weighted partition functions , it has the significance of the normalized logarithm of the number of microstates with energy about .thus , , where is boltzmann s constant , is the thermodynamic entropy . given two values and ( say , ) , the hypotheses now are the following : * is distributed according to . * is distributed according to .it then follows that the error exponent under is given by \nonumber\\ & = & \frac{1}{k}\int_{e_0}^{e_2}\left[\frac{1}{t(e)}-\frac{1}{t_2}\right]\mbox{d}e \nonumber\\ & = & \frac{1}{k}\int_{t_0}^{t_2}\left(\frac{1}{t}-\frac{1}{t_2}\right ) \bar{c}(t)\mbox{d}t,\end{aligned}\ ] ] where is the temperature corresponding to energy , , , and is the average heat capacity per particle of the system , which is the weighted average of heat capacities of all subsystems , i.e. , where {\beta= 1/(kt)}.\ ] ] thus , which is interpreted as the weighted average of the relative contributions of all subsystems , which all lie in the same temperature . in a similar manner ,the rate function of the probability of error under is given by : \nonumber\\ & = & \frac{1}{k}\int_{e_1}^{e_0}\left[\frac{1}{t_1}- \frac{1}{t(e)}\right]\mbox{d}e\nonumber\\ & = & \frac{1}{k}\int_{t_1}^{t_0}\left(\frac{1}{t_1}- \frac{1}{t}\right)\bar{c}(t)\mbox{d}t.\end{aligned}\ ] ] the expression in the square brackets of the second line pertaining to has a simple graphical interpretation ( see fig .1 ) : it is the vertical distance ( corresponding to the vertical line ) between the curve and the line tangent to that curve at ( whose slope is ) .the two other expressions of , in the last chain of equalities , describe the error exponent in terms of slow heating from temperature to temperature .similar comments apply to ( cf .fig . 
1 ) .thus , the error exponents are linear functionals of the average heat capacity , , in the range of temperatures $ ] .the higher is the heat capacity , the better is the discrimination between the hypotheses .this is related to the fact that fisher information of the parameter is given by namely , again , a linear function of .however , while the fisher information depends only on one local value of ( as it measures the sensitivity of the likelihood function to the parameter in a local manner ) , the error exponents depend on in a cumulative manner , via the above integrals . the tradeoff between and is also obvious : by enlarging the threshold , or , correspondingly , , the range of integration pertaining to increases at the expense of the one of and vice versa . in the extreme case ,where , we get in this application example , we are back to the problem area of lossy data compression , but this time , it is about scalar ( symbol by symbol ) compression .this setup is motivated by earlier results about the optimality of time shared scalar quantizers within the class of causal source codes for memoryless sources , both under the average rate / distortion criteria and large deviations performance criteria . in particular , it was shown that under both criteria , optimum time sharing between at most two ( entropy coded ) scalar quantizers is as good as any causal source code for memoryless sources . here, we will focus on the large deviations performance criteria , namely , source coding exponents .consider a time varying scalar quantizer , acting on a dms , , drawn from , where is an arbitrary ( deterministic ) sequence of quantizers from a given finite set , where , being the reproduction alphabet corresponding to , .in other words , for every , , for a certain arbitrary sequence of ` states ' , ( known to the decoder ) with components in .the distortion incurred by such a time varying scalar quantizer , over units of time , is .the total code length is , where the per symbol length functions may correspond to either fixed rate coding , where for all , or any other length function satisfying the kraft inequality , . for the sake of simplicity of the exposition , let us assume fixed rate coding .we will denote by , , the number of times that occurs in , and is the corresponding relative frequency . in , among other results , the rate function of the excess distortion event was optimized across the class of all time varying scalar quantizers ( each one corresponding to a different sequence ) subject to a code length constraint , or equivalently , , for a given pair . in the notation of our generic model , here we have , , , independently of , , and . and , where , in order to work with non negative quantities . ] and the excess distortion exponent is of the same form as before ( see also ) . here, however , unlike the previous application examples , we have a degree of freedom to select the relative frequency of usage , , of each member of , i.e. , the time sharing protocol , but we also have the constraint . from the statistical physics point of view , these additional ingredients mean that we have a freedom to select the number of particles in each subsystem ( though the total number , , is still fixed ) , and the additional constraint , , which is actually equivalent to the equality constraint ( in the interesting region of pairs ) can be viewed as an additional conservation law with respect to some other constant of motion , in addition to the energy ( e.g. 
, the momentum ) , where in subsystem , the ( average ) value of the corresponding physical quantity per particle is .while in , we have considered the problem of maximizing the rate function ( the source coding exponent ) of the excess distortion event , a related objective ( although somewhat less well motivated , but still interesting ) is to minimize the rate function ( or maximize the probability ) of the small distortion event in this case , the optimum performance is given by ,\ ] ] where is the class of all probability distributions with . from the viewpoint of statistical physics , this corresponds to a situation where the various subsystems are allowed to interact , not only thermally , but also chemically , i.e. , an exchange of particles is enabled in addition to the exchange of energy , and the maximization over ( maximum entropy ) is achieved when the chemical potentials of the various subsystems reach a balance . as the maximization over subject to the constraint , for a given , is a linear programming problem with one constraint ( in addition to ) , then as was shown in , for each distortion level ( or energy ) , the optimum may be non zero for at most two members of only , which means that at most two subsystems are populated by particles in thermal and chemical equilibrium under the two conservation laws ( of and of ) . however , the choice of these two members of depends , in general , on , which in turn depends on the temperature .thus , when the system is heated gradually , certain _ phase transitions _ may occur , whenever there is a change in the choice of the two populated subsystems . finally , referring to comment no . 1 of section 3, we should point out that here , in contrast to our discussion thus far , the difference between the ensemble of distinguishable particles and indistinguishable particles becomes critical since the factors are no longer constant .had we assumed indistinguishability , the normalized log partition function would no longer be affine in , thus the maximization over would no longer be a linear programming problem , and the conclusion might have been different . in the source coding problem , the indistinguishable case corresponds to a situation where the sequence of states is chosen uniformly at random ( with the decoder being informed of the result of the random selection , of course ) . in this case , the chernoff bound corresponding to each composition of should be weighed by the probability of this composition , which is .now , each factor of can be absorbed in the corresponding partition function of subsystem , with the interpretation that in each subsystem the particles are now indistinguishable .the maximum over would now correspond to the dominant contribution in this weighted average of chernoff bounds .one can , of course , extend the discussion to any i.i.d .distribution on , thus introducing additional bias and preferring some compositions over others .in this subsection , we outline another proof of theorem 1 using a large deviations analysis approach .in particular , consider the large deviations event , as described in section 2 .assuming that the relative frequencies all stabilize as , let us compute the rate function of the probability of this event in two different methods , where one would yield the left hand side of ( [ identity ] ) and the other would give the right hand side of ( [ identity ] ) .in the first method , we partition the sequence according to its different letters . 
specifically , let where is the number of occurrences of the symbol along .let denote the set of all possible vector values that can be taken on by the vector .now , obviously , if and only if there exists a vector such that for all and .the `` if '' part follows from the `` only if '' part follows by setting for all .therefore , denoting ( where is defined as in section 2 ) , we have : and on the other hand , at this point , the only gap between the upper bound ( [ ub1 ] ) and the lower bound ( [ lb1 ] ) is the factor . the number of different values that can take does not exceed the number of different type classes of sequences of length over the alphabet , which is upper bounded by .thus , \right)\right\}\nonumber\\ & = & \exp\left\{|{{\cal v}}|\cdot(|{{\cal u}}|-1)\log\left ( \frac{n}{|{{\cal v}}|}+1\right)\right\}\nonumber\\ & = & \left(\frac{n}{|{{\cal v}}|}+1\right)^{|{{\cal v}}|\cdot(|{{\cal u}}|-1)},\end{aligned}\ ] ] and therefore is only polynomial in , and hence does not affect the exponential behavior .now , each one of the terms is bounded exponentially tightly by an individual chernoff bound , \right\},\ ] ] and so , the dominant term of their product is of the exponential order of =\max_{\tilde{e}\in{{\cal h}}_g(e)}\sum_vp(v)s_v(e_v).\ ] ] finally , as , the set becomes dense in the continuous set , and by simple continuity arguments , the maximum over tends to the maximum over .the other method to evaluate the rate function is as follows .let be a fixed positive integer that divides , and denote , ( assume that is chosen large enough that is well approximated by the closest integer with a very small relative error ) .now , re order the pairs ( periodically ) , according to the following rule : assuming , without loss of generality , that , the first symbol pairs of each of are such that , the next symbol pairs of each are such that , and so on . in other words , each , , , consists of the same relative frequencies as the entire sequence , .now , for the re ordered sequence of pairs , let us define , .obviously , are i.i.d . 
and therefore the probability of the large deviations event can be assessed exponentially tightly by the chernoff bound as follows : \right\}\nonumber\\ & = & \exp\left\{\frac{n}{\ell}\cdot\min_{\beta\ge 0 } \left[\beta\cdot\ell e+\ln\left(\prod_{v\in{{\cal v } } } \sum_{u^{\ell_v}}q(u^{\ell_v}|v^{\ell_v } ) \exp\left\{-\beta\sum_{i=1}^{\ell_v}f(u_i , v)\right\}\right)\right ] \right\}\nonumber\\ & = & \exp\left\{\frac{n}{\ell}\cdot\min_{\beta\ge 0 } \left[\beta\cdot\ell e+\ln\left(\prod_{v\in{{\cal v } } } \left[\sum_{u\in{{\cal u}}}q(u|v)e^{-\beta f(u , v)}\right]^{\ell_v}\right)\right ] \right\}\nonumber\\ & = & \exp\left\{\frac{n}{\ell}\cdot\min_{\beta\ge 0 } \left[\beta\cdot\ell e+\ell\cdot\sum_{v\in{{\cal v } } } p(v)\ln\left(\sum_{u\in{{\cal u}}}q(u|v)e^{-\beta f(u , v)}\right)\right ] \right\}\nonumber\\ & = & \exp\left\{n\cdot\min_{\beta\ge 0}\left[\beta e+\sum_{v\in{{\cal v } } } p(v)\ln\left(\sum_{u\in{{\cal u}}}q(u|v)e^{-\beta f(u , v)}\right)\right ] \right\}\nonumber\\ & = & e^{n\bar{s}(e)}.\end{aligned}\ ] ] since both approaches yield exponentially tight evaluations of , they must be equal .the exact derivation of eq .( [ equipartition ] ) for the finite interval integration , is as follows : \nonumber\\ & = & -\frac{\partial}{\partial \beta}\ln\left [ \beta^{-1/\theta}\cdot\int_{-\beta^{1/\theta}(a+x)}^{\beta^{1/\theta}(a - x ) } \exp\{-\epsilon_0|\beta^{1/\theta}({\hat{x}}-x)|^\theta\ } \mbox{d}(\beta^{1/\theta}({\hat{x}}-x))\right]\nonumber\\ & = & -\frac{\partial}{\partial \beta}\ln\left [ \beta^{-1/\theta}\cdot\int_{-\beta^{1/\theta}(a+x)}^{\beta^{1/\theta}(a - x ) } \exp\{-\epsilon_0|z|^\theta\ } \mbox{d}z\right]\nonumber\\ & = & -\frac{\partial}{\partial \beta}\ln \left(\beta^{-1/\theta}\right)-\frac{\partial}{\partial \beta}\ln \left[\int_{-\beta^{1/\theta}(a+x)}^{\beta^{1/\theta}(a - x ) } \exp\{-\epsilon_0|z|^\theta\}\mbox{d}z\right]\nonumber\\ & = & \frac{1}{\beta\theta}\left\{1-\frac{\beta^{1/\theta } [ ( a - x)\exp\{-\beta\epsilon_0|a - x|^\theta\ } + ( a+x)\exp\{-\beta\epsilon_0|a+x|^\theta\ } ] } { \int_{-\beta^{1/\theta}(a+x)}^{\beta^{1/\theta}(a - x ) } \exp\{-\epsilon_0|z|^\theta\}\mbox{d}z}\right\}.\end{aligned}\ ] ] when is very large , the denominator of the second term of the expression in the curly brackets of the right most side , goes to , which is a constant . now if , in addition , , then the numerator tends to zero as grows without bound .thus , the dominant term , for low temperatures , is .an exact closed form expression , for every finite , can be derived for the case , since in this case , the integral at the denominator has a simple expression .for example , setting , and in the above expression , yields : note that this expression is valid only in the range where it is monotonically increasing in .( beyond this point , the minimizing is no longer the point of zero derivative ) .r. s. ellis , `` the theory of large deviations and applications to statistical mechanics , '' lectures for international seminar on extreme events in complex dynamics , october 2006 .available on line at : [ http://www.math.umass.edu//pdf-files/dresden-lectures.pdf ] .r. m. gray , a. h. gray , g. rebolledo , and j. e. shore , `` rate distortionspeech coding with a minimum discrimination information distortion measure '' , _ ieee trans . inform .theory _ , vol .it27 , no . 6 , pp . 708721 ,november 1981 .d. guo and s. verd , `` multiuser detection and statistical physics , '' in _ communications , information and network security _ , v. bhargava , h. v. poor , v. 
tarokh , and s. yoon , eds . , chap . 13 , pp .229 - 277 , kluwer academic publishers , norwell , mass , usa , 2002 .a. lapidoth and s. shamai ( shitz ) , `` a lower bound on the bit - error rate resulting from mismatched viterbi decoding , '' technical report , cc pub no . 163 , department of electrical engineering , technion i.i.t . , august 1996 .j. e. shore and r. w. johnson , `` axiomatic derivation of the principle of maximum entropy and the principle of minimum cross - entropy , '' _ ieee trans .theory _ , vol .it26 , no . 1 , pp . 2637 , january 1980 .
An identity between two versions of the Chernoff bound on the probability of a certain large deviations event is established. This identity has an interpretation in statistical physics, namely, an isothermal equilibrium of a composite system that consists of multiple subsystems of particles. Several information-theoretic application examples, where the analysis of this large deviations probability naturally arises, are then described from the viewpoint of this statistical mechanical interpretation. This results in several relationships between information theory and statistical physics, which, we hope, the reader will find insightful. *Index terms:* large deviations theory, Chernoff bound, statistical physics, thermal equilibrium, equipartition, thermodynamics, phase transitions. Department of Electrical Engineering, Technion - Israel Institute of Technology, Haifa 32000, Israel
when analyzing collections of imaging data , a general goal is to quantify similarities and differences across images . in medical image analysis and computational anatomy , a common goal is to find patterns that can distinguish morphologies of healthy and diseased subjects aiding the understanding of the population epidemiology .such distinguishing patterns are typically investigated by comparing single observations to a representative member of the underlying population , and statistical analyses are performed relative to this representation . in the context of medical imaging , it has been customary to choose the template from the observed data as a common image of the population .however , such an approach has been shown to be highly dependent on the choice of the image .in more recent approaches , the templates are estimated using statistical methods that make use of the additional information provided by the observed data . in order to quantify the differences between images , the dominant modes of variation in the datamust be identified .two major types of variability in a collection of comparable images are _ intensity variation _ and variation in _ point - correspondences_. point - correspondence or _ warp _ variation can be viewed as shape variability of an individual observation with respect to the template .intensity variation is the variation that is left when the observations are compensated for the true warp variation .this typically includes noise artifacts like systematic error and sensor noise or anatomical variation such as tissue density or tissue texture .typically one would assume that the intensity variation consists of both independent noise and spatially correlated effects . in this work, we introduce a flexible class of mixed - effects models that explicitly model the template as a fixed effect and intensity and warping variation as random effects , see figure [ fig : layer ] .this simultaneous approach enables separation of the random variation effects in a data - driven fashion using alternating maximum - likelihood estimation and prediction .the resulting model will therefore choose the separation of intensity and warping effects that is most likely given the patterns of variation found in the data . from the model specification and estimates ,we are able to denoise observations through linear prediction in the model under the maximum likelihood estimates .estimation in the model is performed with successive linearization around the warp parameters enabling the use of linear mixed - effects predictors and avoiding the use of sampling techniques to account for nonlinear terms .we apply our method on datasets of face images and 2d brain mris to illustrate its ability to estimate templates for populations and predict warp and intensity effects .the paper is structured as follows . in section [ sec : background ] , we give an overview of previously introduced methods for analyzing image data with warp variation .section [ sec : mod ] covers the mixed - effects model including a description of the estimation procedure ( section [ sec : est ] ) and how to predict from the model ( section [ sec : pred ] ) . 
in section [ sec : invb ] , we give an example of how to model spatially correlated variations with a tied - down brownian sheet .we consider two applications of the mixed - effects model to real - life datasets in section [ sec : app ] and section [ simstud ] contains a simulation study that is used for comparing the precision of the model to more conventional approaches .the model introduced in this paper focuses on separately modelling the intensity and warp variation .image registration conventionally only focuses on identifying warp differences between pairs of images .the intensity variation is not included in the model and possible removal of this effect is considered as a pre - or postprocessing step .the warp differences are often found by solving a variational problem of the form see for example .here measures the dissimilarity between the fixed image and the warped image , is a regularization on the warp , and is a weight that is often chosen by ad - hoc methods .after registration , either the warp , captured in , or the intensity differences between and can be analyzed .several works have defined methods that incorporate registration as part of the defined models .the approach described in this paper will also regard registration as a part of the proposed model and adress the following three problems that arise in image analysis : ( a ) being able to estimate model parameters such as in a data - driven fashion ; ( b ) assuming a generative statistical model that gives explicit interpretation of the terms that corresponds to the dissimilarity and penalization ; and ( c ) being simultaneous in the estimation of population - wide effects such as the mean or template image and individual per - image effects , such as the warp and intensity effects .these features are of fundamental importance in image registration and many works have addressed combinations of them .the main difference of our approach to state - of - the - art statistical registration frameworks is that we propose a simultaneous random model for warp and intensity variation .as we will see , the combination of maximum likelihood estimation and the simultaneous random model for warp and intensity variation manifests itself in a trade - off where the uncertainty of both effects are taken into account simultaneously . as a result , when estimating fixed effects and predicting random effects in the model the most likely separation of the effects given the observed patterns of variation in the entire data material is used .methods for analyzing collections of image data , for example template estimation in medical imaging , with both intensity and warping effects can be divided into two categories , _ two - step methods _ and _ simultaneous methods_. 
two - step methods perform alignment as a preprocessing step before analyzing the aligned data .such methods can be problematic because the data is modified and the uncertainty related to the modification is ignored in the subsequent analysis .this means that the effect of intensity variation is generally underestimated , which can introduce bias in the analysis , see for the corresponding situation in 1d functional data analysis .simultaneous methods , on the other hand , seek to analyze the images in a single step that includes the alignment procedure .conventional simultaneous methods typically use data terms to measure dissimilarity .such dissimilarity measures are equivalent to the model assumption that the intensity variation in the image data consists solely of uncorrelated gaussian noise .this approach is commonly used in image registration with the sum of squared differences ( ssd ) dissimilarity measure , and in atlas estimation .since the data term is very fragile to systematic deviations from the model assumption , for example contrast differences , the method can perform poorly .one solution to make the data term more robust against systematic intensity variation and in general to insufficient information in the data term is to add a strong penalty on the variation of the warping functions .this approach is however an implicit solution to the problem , since the gained robustness is a side effect of regularizing another model component . as a consequence ,the effect on the estimates is very hard to quantify , and it is very hard to specify a suitable regularization for a specific type of intensity variation .this approach is , for example , taken in the variational formulations of the template estimation problem in .an elegant instance of this strategy is the bayesian model presented in where the warping functions are modeled as latent gaussian effects with an unknown covariance that is estimated in a data - driven fashion .conversely , systematic intensity variation can be sought to be removed prior to the analysis , in a reversed two - step method , for example by using bias - correction techniques for mri data .the presence of warp variation can however influence the estimation of the intensity effects .analysis of images with systematic intensity differences can be improved using data dissimilarity measures that are robust or invariant to such systematic differences .however , robustness and invariance come at a cost in accuracy . 
by choosing a specific kind of invariance in the dissimilarity measure, the model is given a pre - specified recipe for separating intensity and warping effects ; the warps should maximize the invariant part of the residual under the given model parameters .examples of classical robust data terms include -norm data terms , charbonnier data terms , and lorentzian data terms .robust data terms are often challenging to use , since they may not be differentiable ( -norms ) or may not be convex ( lorentzian data term ) .a wide variety of invariant data terms have been proposed , and are useful when the invariances represent a dominant mode of variation in the data .examples of classical data terms that are invariant to various linear and nonlinear photometric relationships are normalized cross - correlation , correlation - ratio and mutual information .another approach for achieving robust or invariant data terms is to transform the data that is used in the data term .a classical idea is to match discretely computed gradients or other discretized derivative quantities .a related idea is to construct invariant data terms based on discrete transformations .this type of approach has become increasingly popular in image matching in recent years .examples include the rank transform and the census transform , and more recently the complete rank transform . while both robust and invariant data terms have been shown to give very good results in a wide array of applications , they induce a fixed measure of variation that does not directly model variation in the data .thus , the general applicability of the method can come at the price of limited accuracy .several alternative approaches for analyzing warp and intensity simultaneously have been proposed . in warps between imagesare considered as combination of two transformation fields , one representing the image motion ( warp effect ) and one describing the change of image brightness ( intensity effect ) . based on this definition warp and intensity variation can be modeled simultaneously .an alternative approach is considered in , where an invariant metric is used , which enables analysis of the dissimilarity in point correspondences between images disregarding the intensity variation .these methods are not statistical in the sense that they do not seek to model the random structures of the variation of the image data .a statistical model is presented in , where parameters for texture , shape variation ( warp ) and rendering are estimated using maximizing - a - posteriori estimation . to overcome the mentioned limitations of conventional approaches, we propose to do statistical modeling of the sources of variation in data . by using a statistical model where we assume parametric covariance structures for the different types of observed variation, the variance parameters can be estimated from the data .the contribution of different types of variation is thus weighted differently in the data term . 
by using , for example , maximum - likelihood estimation ,the most likely form of the variation given the data is penalized the least .we emphasize that in contrast to previous mixed - effects models incorporating warp effects , the goal here is to simultaneously model warp and intensity effects .these effects impose randomness relative to a template , the fixed - effect , that is estimated during the inference process .the nonlinear mixed - effects models are a commonly used tool in statistics .these types of models can be computationally intensive to fit , and are rarely used for analyzing large data sizes such as image data .we formulate the proposed model as a nonlinear mixed - effects model and demonstrate how certain model choices can be used to make estimation in the model computationally feasible for large data sizes .the model incorporates random intensity and warping effects in a small - deformation setting : we do not require warping functions to produce diffeomorphisms .the geometric structure is therefore more straightforward than in for example the lddmm model . from a statistical perspective, the small - deformation setting is much easier to handle than the large - deformation setting where warping functions are restricted to produce diffeomorphisms . instead of requiring diffeomorphisms, we propose a class of models that will produce warping functions that in most cases do not fold .another advantage of the small - deformation setting is that we can model the warping effects as latent gaussian disparity vectors in the domain .such direct modeling allows one to compute a high - quality approximation of the likelihood function by linearizing the model around the modes of the nonlinear latent random variables .the linearized model can be handled using conventional methods for linear mixed - effects models which are very efficient compared to sampling - based estimation procedures . in the large - deformation setting, the metamorphosis model extends the lddmm framework for image registration to include intensity change in images .warp and intensity differences are modeled separately in metamorphosis with a riemannian structure measuring infinitesimal variation in both warp and intensity .while this separation has similarities to the statistical model presented here , we are not aware of any work which have considered likelihood - based estimation of variables in metamorphosis models .we consider spatial functional data defined on taking values in .let be functional observations on a regular lattice with points , that is , for , .consider the model in the image space for , and . 
here denotes the template and is a warping function matching a point in to a point in the template .moreover is the random spatially correlated intensity variation for which we assume that where the spatial correlation is determined by the covariance matrix .the term models independent noise .the template is a fixed - effect while , , and are random .we will consider warping functions of the form where is coordinate - wise bilinear spline interpolation of on a lattice spanned by .in other words , models discrete spatial displacements at the lattice anchor points .figure [ warpex ] shows an example of disparity vectors on a grid of anchor points and the corresponding warping function .the displacements are modeled as random effects , where is a covariance matrix , and , as a result , the warping functions can be considered nonlinear functional random effects .as is assumed to be normally distributed with mean zero , small displacements are favorited and hence the warp effect will be less prone to fold .the model is a spatial extension of the phase and amplitude varying population pattern ( pavpop ) model for curves .first , we will consider estimation of the template from the functional observations , and we will estimate the contributions of the different sources of variation . in the proposed model ,this is equivalent to estimating the covariance structure for the warping parameters , the covariance structure for the spatially correlated intensity variation , and the noise variance .the estimate of the template is found by considering model in the back - warped template space because every back - warped image represents on the observation lattice , a computationally attractive parametrization is to model using one parameter per observation point , and evaluate non - observation points using bilinear interpolation .this parametrization is attractive , because henderson s mixed - model equations suggests that the conditional estimate for given is the pointwise average if we ignore the slight change in covariance resulting from the back - warping of the random intensity effects . as this estimator depends on the warping parameters , the estimation of and the variance parameters has to be performed simultaneously with the prediction of the warping parameters .we note that , as in any linear model , the estimate of the template is generally quite robust against slight misspecifications of the covariance structure . 
and the idea of estimating the template conditional on the posterior warp is similar to the idea of using a hard em algorithm for computing the maximum likelihood estimator for .we use maximum - likelihood estimation to estimate variance parameters , that is , we need to minimize the negative log - likelihood function of model .note that contains nonlinear random effects due to the term where is a nonlinear transformation of .we handle the nonlinearity and approximate the likelihood by linearizing the model around the current predictions of the warping parameters : where denotes the jacobian matrix of with respect to and letting , the linearized model can be rewritten we notice that in this manner , can be approximated as a linear combination of normally distributed variables , hence the negative log - likelihood for the linearized model is given by where .the idea of linearizing nonlinear mixed - effects models in the nonlinear random effects is a solution that has been shown to be effective and which is implemented in standard software packages .the proposed model is , however , both more general and computationally demanding than what can be handled by conventional software packages .furthermore , we note that the linearization in a random effect as done in model is fundamentally different than the conventional linearization of a nonlinear dissimilarity measure such as in the variational problem . as we see from the linearized model , the density of is approximated by the density of a linear combination , , of multivariate gaussian variables .the likelihood function for the first - order taylor expansion in of the model is thus a laplace approximation of the true likelihood , and the quality of this approximation is approximately second order . as mentioned abovethe proposed model is computationally demanding .even the approximated likelihood function given in equation is not directly computable because of the large data sizes . in particular ,the computations related to determinants and inverses of the covariance matrix are infeasible unless we impose certain structures on these . in the following, we will assume that the covariance matrix for the spatially correlated intensity variation has full rank and sparse inverse .we stress that this assumption is merely made for computational convenience and that the proposed methodology is also valid for non - sparse precision matrices .the zeros in the precision matrix are equivalent to assuming conditional independences between the intensity variation in corresponding pixels given all other pixels .a variety of classical models have this structure , in particular ( higher - order ) gaussian markov random fields models have sparse precision matrices because of their markov property . to efficiently do computations with the covariances , we exploit the structure of the matrix . the first term is an update to the intensity covariance with a maximal rank of , the first term of the intensity covariance has a sparse inverse and the second term is of course sparse with a sparse inverse . using the woodbury matrix identity , we obtain which can be computed if we can efficiently compute the inverse of the potentially huge intensity covariance matrix .we can rewrite the inverse intensity covariance as thus we can write in a way that only involves operations on sparse matrices . to compute the inner product , we first form the matrix andcompute its cholesky decomposition using the ng - peyton method implemented in the ` spam ` r - package . 
by solving a low - rank linear system using the cholesky decomposition, we can thus compute .the inner product is then efficiently computed as where to compute the log determinant in the likelihood , one can use the matrix determinant lemma similarly to what was done above to split the computations into low - rank computations and computing the determinant of , for the models that we will consider , the latter computation is done by using the operator approximation proposed in which , for image data with sufficiently high resolution ( e.g. ) , gives a high - quality approximation of the determinant of the intensity covariance that can be computed in constant time . by taking the described strategy, we never need to form a dense matrix , and we can take advantage of the sparse and low - rank structures to reduce the computation time drastically . furthermore , the fact that we assume equal - size images allows us to only do a single cholesky factorization per likelihood computation , which is further accelerated by using the updating scheme described in .after the maximum - likelihood estimation of the template and the variance parameters , we have an estimate for the distribution of the warping parameters .we are therefore able to predict the warping functions that are most likely to have occurred given the observed data .this prediction parallels the conventional estimation of deformation functions in image registration .let be the density for the distribution of the warping functions given the data and define , in a similar manner .then , by applying , we see that the warping functions that are most likely to occur are the minimizers of the posterior given the updated predictions of the warping parameters , we update the estimate of the template and then minimize the likelihood to obtain updated estimates of the variances .this procedure is then repeated until convergence is obtained .the estimation algorithm is given in algorithm [ alg ] .the run time for the algorithm will be very different depending on the data in question .as an example we ran the model for 10 mri midsaggital slices ( for more details see section [ mri ] ) of size , with .we ran the algorithm on an intel xeon e5 - 2680 2.5ghz processor .the run time needed for full maximum likelihood estimation in this setup was 1 hour and 15 minutes using a single core .this run time is without parallization , but it is possible to apply parallization to make the algorithm go faster . the spatially correlated intensity variation can also be predicted .either as the best linear unbiased prediction ] . the covariance function , ^ 2\times [ 0,1]^2\to\mathbb{r} ] under homogeneous dirichlet boundary conditions .thus the conditional linear prediction of given by is equivalent to estimating the systematic part of the residual as a generalized smoothing spline with roughness penalty the tied - down brownian sheet can also be used to model the covariance between the displacement vectors . herethe displacement vectors given by the warping variables are modeled as discretely observed tied - down brownian sheets in each displacement coordinate . 
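To illustrate this parametrization of the displacement vectors, the following is a minimal sketch; it is our own construction, and the grid sizes, the variance scale, and the use of scipy are illustrative assumptions. It relies on the covariance of a Brownian sheet tied down to zero on the boundary of the unit square, which factorizes as a product of Brownian-bridge covariances, (min(s,s') - ss')(min(t,t') - tt'), per displacement coordinate. The sketch builds this covariance on an interior grid of anchor points, samples one sheet per displacement coordinate, and applies the induced warping function v(s) = s + E_w(s) to a synthetic template by bilinear interpolation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(1)

def tied_down_sheet_cov(points):
    # covariance (min(s,s') - s s') * (min(t,t') - t t') of a Brownian sheet
    # pinned to zero on the boundary of the unit square
    s, t = points[:, 0], points[:, 1]
    Ks = np.minimum.outer(s, s) - np.outer(s, s)
    Kt = np.minimum.outer(t, t) - np.outer(t, t)
    return Ks * Kt

# m x m interior grid of anchor points for the displacement vectors
m = 5
ax = np.linspace(0, 1, m + 2)[1:-1]
anchors = np.array([(si, ti) for si in ax for ti in ax])
K = tied_down_sheet_cov(anchors)

# sample disparity vectors: one tied-down sheet per displacement coordinate
gamma = 0.02                                   # standard-deviation scale (illustrative)
L = np.linalg.cholesky(K + 1e-10 * np.eye(len(K)))
w = gamma * np.column_stack([L @ rng.standard_normal(len(K)) for _ in range(2)])

# warping function v(s) = s + E_w(s): bilinear interpolation of the sampled
# displacements, with zero displacement on the boundary of the domain
n = 128
disp_grid = np.zeros((2, m + 2, m + 2))
disp_grid[:, 1:-1, 1:-1] = w.T.reshape(2, m, m)
px = np.linspace(0, 1, n)
X, Y = np.meshgrid(px, px, indexing="ij")
coords = np.stack([X * (m + 1), Y * (m + 1)])  # positions in anchor-grid units
Ex = map_coordinates(disp_grid[0], coords, order=1)
Ey = map_coordinates(disp_grid[1], coords, order=1)

# apply the warp to a synthetic template theta by evaluating theta(v(s)) bilinearly
theta = np.sin(8 * np.pi * X) * np.cos(8 * np.pi * Y)
warped = map_coordinates(theta, [(X + Ex) * (n - 1), (Y + Ey) * (n - 1)],
                         order=1, mode="nearest")
```

Because the sampled displacement field is pinned to zero on the boundary, the resulting warps leave the image border fixed and concentrate the largest deformations in the interior of the domain.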
as was the case for the intensity covariance, this model is a good match to image data since it allows the largest deformations around the middle of the image .furthermore , the fact that the model is tied down along the boundary means that we will predict the warping functions to be the identity along the boundary of the domain ^ 2 ] onto ^ 2 ] .we used 5 outer and 3 inner iterations in algorithm [ alg ] .the image value range was scaled to ] .the full predictions are very faithful to the observations , with only minor visible deviations around the eyes in the second and fifth row .this suggests that the chosen model for the spatially correlated intensity variation , the tied - down brownian sheet , is sufficiently versatile to model the systematic part of the residuals .p0.22p0.22p0.22p0.22 no alignment & procrustes free warp & & proposed & & & p0.15|p0.15|p0.15|p0.15|p0.15 procrustes & & proposed warped template prediction & proposed full prediction & observation & & & & & & & & & & & & & & & & & & & & the data considered in this section are based on 3d mr images from the adni database .we have based the example on images with 18 normal controls ( nc ) , 13 with alzheimer s disease ( ad ) and 19 who are mild cognitively impaired ( mci ) .the 3d images were initially affinely aligned with 12 degrees of freedom and normalized mutual information ( nmi ) as a similarity measure . after the registration , the mid - sagittal slices were chosen as observations .moreover the images were intensity normalized to ] .six samples are displayed in figure [ fig : eksbrains ] where differences in both contrast , placement and shape of the brains are apparent .+ for the given data , we used 25 displacement vectors on an equidistant interior grid in ^ 2 ] ) .the difference in regularization of the warps is shown in figure [ fig : dis ] , where the estimated warps using the procrustes model are compared to the predicted warps from the proposed model .we see that the proposed model predicts much smaller warps than the procrustes model .p0.31p0.31p0.31 rigid registration with scaling & procrustesfree warp & & & + one of the advantages of the mixed - effects model is that we are able to predict the systematic part of the intensity variation of each image , which in turn also gives a prediction of the residual intensity variation the variation that can not be explained by systematic effects . in figure [ fig : brain_pred ] , we have predicted the individual observed slices using the procrustes model and the proposed model . 
as we also saw in figure[ fig : dis ] , the proposed model predicts less deformation of the template compared to the procrustes model , and we see that the brownian sheet model is able to account for the majority of the personal structure in the sulci of the brain .moreover , the predicted intensity variation seems to model intensity differences introduced by the different mri scanners well .p0.19|p0.19|p0.19|p0.19|p0.19 procrustes warped template prediction & & predicted spatially correlated intensity variation & full prediction & observation & & & & & & & & & & & &in this section , we present a simulation study for investigating the precision of the proposed model .the results are compared to the previously introduced models : procrustes free warp and a regularized procrustes .data are generated from model ( [ eq : model ] ) in which is taken as one of the mri slices considered in section [ mri ] .the warp , intensity and the random noise effects are all drawn from the previously described multivariate normal distributions with variance parameters respectively and applied to the chosen template image . to consider more realistic brain simulations , the systematic part of the intensity effectwas only added to the brain area of and not the background .as this choice makes the proposed model slightly misspecified , it will be hard to obtain precise estimates of the variance parameters . in practice, one would expect any model with a limited number of parameters to be somewhat misspecified in the presented setting .the simulations thus present a realistic setup and our main interest will be in estimating the template and predicting warp and intensity effects .figure [ fig : simeksbrains ] displays 5 examples of the simulated observations as well as the chosen .+ the study is based on 100 data sets of 100 simulated brains .for each simulated dataset we applied the proposed , procrustes free warp and procrustes regularized model . the regularization parameter , , in the regularized procrustes model , was set to the true parameter used for generating the data .the variance estimates based on the simulations are shown in figure [ fig : vardens ] .the true variance parameters are plotted for comparison .we see some bias in the variance parameters .while bias is to be expected , the observed bias for the noise variance and the warp variance scale are bigger than what one would expect .the reason for the underestimation of the noise variance seems to be the misspecification of the model . since the model assumes spatially correlated noise outside of the brain area , where there is none , the likelihood assigns the majority of the variation in this area to the systematic intensity effect .the positive bias of the warp variance scale seems to be a compensating effect for the underestimated noise variance .the left panel of figure [ fig : wtdens ] shows the mean squared difference for the estimated templates with the three types of models .we see that the proposed model produces conisderably more accurate estimates than the alternative frameworks . to give an example of the difference between template estimates for the three different models , one set of template estimates for each of the modelsis shown in figure [ fig : tempex ] . 
from this examplewe see that the template for the proposed model is slightly more sharp than the procrustes models and are more similar to the true which was also the conclusion obtained from the density of the mean squared difference for the template estimates ( figure [ fig : wtdens ] ) .p0.24p0.24p0.24p0.24 true template & & procrustes & procrustes & & & the right panel of figure [ fig : wtdens ] shows the mean squared prediction / estimation error of the warp effects .the error is calculated using only the warp effects in the brain area since the background is completely untextured , and any warp effect in this area will be completely determined by the prediction / estimation in the brain area .we find that the proposed model estimates warp effects that are closest to the true warps .it is worth noticing that the proposed model is considerably better at predicting the warp effects than the regularized procrustes model .this happens despite the fact that the value for the warp regularization parameter in the model was chosen to be equal to the true parameter ( ) .examples of the true warping functions in the simulated data and the predicted / estimated effects in the different models are shown in figure [ fig : simpred ] .none of the considered models are able to make sensible predictions on the background of the brain , which is to be expected . in the brain region ,the predicted warps for the proposed model seem to be very similar to the true warp effect , which we also saw in figure [ fig : wtdens ] was a general tendency .p0.03p0.3p0.3p0.3[0pt][0pt ] & & & [ 0pt][0pt ] & & & [ 0pt][0pt ] & & & [ 0pt][0pt ] & & &we generalized the likelihood based mixed - effects model for template estimation and separation of phase and intensity variation to 2d images .this type of model was originally proposed for curve data .as the model is computationally demanding for high dimensional data , we presented an approach for efficient likelihood calculations .we proposed an algorithm for doing maximum - likelihood based inference in the model and applied it to two real - life datasets .based on the data examples , we showed how the estimated template had desirable properties and how the model was able to simultaneously separate sources of variation in a meaningful way .this feature eliminates the bias from conventional sequential methods that process data in several independent steps , and we demonstrated how this separation resulted in well - balanced trade - offs between the regularization of warping functions and intensity variation .we made a simulation study to investigate the precision of the template and warp effects of the proposed model and for comparison with two other models .the proposed model was compared with a procrustes free warp model , as well as a procrustes regularized model .since the noise model was misspecified , the proposed methodology could not recover precise maximum likelihood estimates of the variance parameters .however , the maximum likelihood estimate for the template was seen to be a lot sharper and closer to the true template compared to alternative procrustes models .furthermore , we demonstrated that the proposed model was better at predicting the warping effect than the alternative models .the main restriction of the proposed model is the computability of the likelihood function .we resolved this by modeling intensity variation as a gaussian markov random field .an alternative approach would be to use the computationally efficient operator approximations of the 
likelihood function for image data suggested in .this approach would , however , still require a specific choice of parametric family of covariance functions , or equivalently , a family of positive definite differential operators .an interesting and useful extension would be to allow a free low - rank spatial covariance structure and estimate it from the data .this could , for example , be done by extending the proposed model to a factor analysis model where both the mean function and intensity variation is modeled in a common functional basis , and requiring a specific rank of the covariance of the intensity effect .such a model could be fitted by means of an em algorithm similar to the one for the reduced - rank model for computing functional principal component analysis proposed in , and it would allow simulation of realistic observations by sampling from the model .for the computation of the likelihood function of the nonlinear model , we relied on local linearization which is a simple well - proven and effective approach . in recent years ,alternative frameworks for doing maximum likelihood estimation in nonlinear mixed - effects models have emerged , see and references therein .an interesting path for future work would be to formulate the proposed model in such a framework that promises better accuracy than the local linear approximation .this would allow one to investigate how much the linear approximation of the likelihood affects the estimated parameters . in this respect, it would also be interesting to compare the computing time across different methods to identify a suitable tradeoff between accuracy and computing time .the proposed model introduced in this paper is a tool for analyzing 2d images . the model , as it is , could be used for higher dimensional images as well , but the analysis would be computationally infeasible with the current implementation . to extend the proposed model to 3d images there is a need to devise new computational methods for improving the calculation of the likelihood function .
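As noted above, the intensity variation is modeled as a Gaussian Markov random field to keep the likelihood computable. The snippet below is a rough sketch, not the paper's parametrization, of how a sparse first-order GMRF precision matrix on the pixel grid could be assembled and how the corresponding Gaussian log-density could be evaluated through a sparse factorization; the parameters kappa2 and tau are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def gmrf_precision(ny, nx, kappa2=1.0, tau=1.0):
    """Sparse precision matrix tau*(kappa2*I + L) of a first-order GMRF on an ny-by-nx grid."""
    def lap1d(n):  # 1D second-difference (graph Laplacian) matrix
        return sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1])
    L = sp.kron(sp.eye(nx), lap1d(ny)) + sp.kron(lap1d(nx), sp.eye(ny))
    return (tau * (kappa2 * sp.eye(ny * nx) + L)).tocsc()

def gmrf_logpdf(x, Q):
    """Log-density of a zero-mean Gaussian with sparse precision Q at the vectorized image x."""
    lu = splu(Q)
    # Q is symmetric positive definite, so log|Q| equals the sum of log|diag(U)| of the LU factor.
    logdet = np.sum(np.log(np.abs(lu.U.diagonal())))
    return 0.5 * logdet - 0.5 * x @ (Q @ x) - 0.5 * x.size * np.log(2.0 * np.pi)
```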
|
this paper introduces a class of mixed - effects models for joint modeling of spatially correlated intensity variation and warping variation in 2d images . spatially correlated intensity variation and warp variation are modeled as random effects , resulting in a nonlinear mixed - effects model that enables simultaneous estimation of template and model parameters by optimization of the likelihood function . we propose an algorithm for fitting the model which alternates estimation of variance parameters and image registration . this approach avoids the potential estimation bias in the template estimate that arises when treating registration as a preprocessing step . we apply the model to datasets of facial images and 2d brain magnetic resonance images to illustrate the simultaneous estimation and prediction of intensity and warp effects . * keywords * template estimation , image registration , separation of phase and intensity variation , nonlinear mixed - effects model
|
multiscale is one of the most important features commonly existing in complex systems , where a large range of spatial / temporal scales coexist and interact with each other .typically such interaction generates scaling relations in the respective scale ranges . to understandthe multiscale statistical behavior has been remaining as the research focus in various areas , such as fluid turbulence , financial market analysis , environmental science and population dynamics , to list a few . among a number of existing analysis approaches for such problems ,the standard and most cited one is structure - function ( sf ) , which is first introduced by kolmogorov in his famous homogeneous and isotropic turbulence theory in 1941 .however , the average operation in sf mixes regions with different correlations .mathematically , sf acts as a filter with a weight function of , in which is the wavenumber and is the separation scale .it thus makes the statistics at different scale strongly mixed , resulting in the so - called infrared and ultraviolet effects , respectively for large - scale and small - scale contamination .the situation will be more serious when an energetic structure presents , e.g. , annual cycle in collected geoscience data , large - scale circulations in rayleigh - bnard convection , vortex trapping events in lagrangian turbulence .fourier analysis in the frequency domain has the similar deficiency as sf , i.e. any local event will propagate the influence over the entire analyzed domain , especially for the nonlinear and nonstationary turbulent structures . claimed that a localized expansion should be preferred over unbounded trigonometric functions used in fourier analysis , because it is believed that trigonometric functions are at risk of misinterpreting the characters of field phenomena .an alternative approach , namely wavelet transform is then proposed to overcome the possible shortcoming of the fourier transform with local capability . to overcome the potential weaknesses of sf or fourier analysis ,several methodologies have been proposed in recent years to emphasize the local geometrical features , such as detrended fluctuation analysis , detrended structure - function , scaling of maximum probability density function of increments , and hilbert spectral analysis , to name a few .note that different approaches may have different performances , and their own advantages and disadvantages .for example , the detrended structure - function can constrain the influence of the large - scale structure , using the detrending procedure to remove the scales larger than the separation distance . in practice , the famous -lawcan then be more clearly retrieved than the classical sf .however , this method is still biased with the vortex trapping event in lagrangian turbulence , which typically possesses a time scale around in the dissipative range .the scaling of maximum probability density function of increments helps to quantify the background fluctuation of turbulent fields .compared with sf , it can efficiently extract the first - order scaling relations ; however , it is difficult to extend to higher - order cases .a new view on the field structure is based on the topological features of the extremal points . 
in principle, physical systems may assume different complexity and interpretability in different spaces , such as physical or fourier .the extremal point structure in physical space has the straightforward advantage in defining characteristic parameters .considering a fluctuating quantity , turbulence disturbs the flow field to generate the local extrema , while viscous diffusion will smooth the field to annihilate the extremal points . by naturethe statistics of local extremal points inherits the process physics .based on this idea , wang , peters and other collaborators have studied passive scalar turbulence via dissipation element analysis . analyzed the lagrangian velocity by defining the trajectory segment structure from the extrema of particles local acceleration .such diagnosis verifies successfully the kolmogorov scaling relation , which has been argued controversially for a long time .however , under some circumstances the extremal points may largely be contaminated by noise , thus partly be spurious . in other wordsextremal points are sensitive to noise perturbation .although data smoothing can relieve this problem , some artificial arbitrariness will inevitably be introduced ; moreover it may not be easy to design reliable smoothing algorithms from case to case . in this regardthe extrema - based analyses are not generally applicable , e.g. with noises from measurement inaccuracy , interpolation error or external perturbations . in this papera new method , multi - level segment analysis ( msa ) , has been developed .the key idea hereof is based on the observation that local extrema are conditionally valid , indicating a kind of multi - level structure .compared with the aforementioned extrema - based analyses , this new method is a reasonable extension with more applicability .details in algorithm definition , verification and applications will be introduced in the following .considering any function in some physical process , where is the independent variable , e.g. , the spatial or temporal coordinates , is a local extremal point with respect to scale is defined as ( minimum ) , or ( maximum ) .extrema are conditionally valid . for instance , if is extremal at scale , it may not be extremal at a larger scale . figure [ level ] illustrates that for an artificially generated signal , at different levels both the number density and the fluctuation amplitude of the extremal points will change accordingly . under some special conditionsextremal points have simple but interesting properties .for instance , for a monotonous function , there is no extrema for any ; for a single harmonic wave function with a period of , the number of extremal point remains constant . for real complex multiscale systems as turbulence , variation of extremal point is continuously dependant on .levels : a ) , and b ) .the local maxima and minima are demonstrated respectively by and .the vertical line indicates the window size . ] at a specific level , denote the corresponding extremal point set as , ( along the coordinate increasing direction ) .numerical tests show that typically these points are alternated for small , while when increases or events may also appear but with low probability ( e.g. , ) , depending on the structure and the process physics .the segment is defined as the part of between two adjacent extremal points .the characteristic parameters to describe the structure skeleton are the function difference , i.e. , and the length scale , i.e. , . 
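The definition of scale-dependent extrema given above translates directly into code. The following is a minimal sketch (illustrative, not the authors' implementation): a point is kept as an extremum at window size w if it is the maximum or the minimum of the signal over a symmetric window of half-width w.

```python
import numpy as np

def extrema_at_level(f, w):
    """Indices i such that f[i] is the maximum or minimum of f over the window [i-w, i+w]."""
    f = np.asarray(f)
    idx = []
    for i in range(w, len(f) - w):
        window = f[i - w:i + w + 1]
        if f[i] == window.max() or f[i] == window.min():
            idx.append(i)
    return np.array(idx, dtype=int)
```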
By varying the window size, different extremal point sets, and thus different segment sets, can be obtained. In this sense the procedure is named multi-level segment analysis (MSA); the existing approaches based on local extremal points calculate the extrema directly from the DNS data and can thus be understood as single-level. Compared with the conventional SF, in MSA the segment length scale is not an independent input, but is determined by the function structure itself. In terms of structure functions, the mathematical expression is (for the q-th order case)

S_q(\ell) = \langle [ f(x_{s,i}) - f(x_{s,i-1}) ]^{q} \,\vert\, x_{s,i}-x_{s,i-1}=\ell \rangle_{s}, \label{newsf}

where \langle\cdot\rangle_{s} denotes sampling over the different segments. If any scaling relation exists, one may expect a power-law behavior S_q(\ell) \propto \ell^{\zeta(q)}, in which \zeta(q) is the MSA scaling exponent. As shown in the rest of this paper, for simple cases MSA and the classical methods give practically identical results, while for complex analyses such as turbulence, because of the algorithmic principle of depicting the function structure, MSA is more effective and efficient. Some additional comments are in order. First, MSA can be considered a dynamics-based approach without any a priori basis assumption, since local extrema imply a change of sign of the derivative across the extremal point; assuming that f represents a velocity signal, its derivative is then the acceleration, i.e. a dynamical variable. Second, MSA shares the spirit of the wavelet-transform modulus-maxima (WTMM) method, in which only the maximum modulus of the wavelet coefficient is considered. However, as argued in several refs., if the shape of the chosen mother wavelet differs from the specific turbulent structure, e.g., the ramp-cliff structure in the passive scalar field, the results can be biased. In this respect MSA can be considered a data-driven type of WTMM without any transform.

[Figures: a) second-order MSA structure functions for fractional Brownian motion with different Hurst numbers, showing power-law behavior in all cases; b) compensated curves (vertically shifted for display clarity); c) measured scaling exponents, with error bars giving the standard deviation over 100 realizations; d) measured singularity spectra. A further figure compares the measured Hurst exponent with the given one, with the theoretical value indicated by a solid line.]

Fractional Brownian motion (fBm) is a generalization of the classical Brownian motion. It was introduced by and extensively studied by Mandelbrot and co-workers in the 1960s. Since then, it has been considered a classical scaling stochastic process in many fields. In the multifractal context, fBm is a simple self-similar process. More precisely, the measured SF scaling exponent is linear in the moment order, i.e., \zeta(q) = qH, in which H is the Hurst number. This linear scaling relation has been verified by various methodologies, such as the classical SF, wavelet-based methods, detrended fluctuation analysis and detrended structure functions, to list a few.
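Building on the extremum detection sketched above, a bare-bones version of the MSA statistic in eq. (newsf) could look as follows: segments are collected over a set of window sizes, and the q-th moment of the absolute segment height is averaged within logarithmic bins of the segment length. The window set, the binning and the use of absolute differences are illustrative choices, not necessarily the ones used by the authors.

```python
import numpy as np  # extrema_at_level is taken from the sketch above

def msa_moments(f, x, windows, q=2, nbins=30):
    """q-th order MSA structure function: <|f(x_i)-f(x_{i-1})|^q> conditioned on segment length."""
    lengths, heights = [], []
    for w in windows:
        idx = extrema_at_level(f, w)
        if len(idx) < 2:
            continue
        lengths.append(np.abs(np.diff(x[idx])))
        heights.append(np.abs(np.diff(np.asarray(f)[idx])))
    lengths, heights = np.concatenate(lengths), np.concatenate(heights)

    # Average |delta f|^q within logarithmically spaced bins of the segment length ell.
    edges = np.logspace(np.log10(lengths.min()), np.log10(lengths.max()), nbins + 1)
    which = np.digitize(lengths, edges) - 1
    ell, Sq = [], []
    for b in range(nbins):
        sel = which == b
        if sel.any():
            ell.append(lengths[sel].mean())
            Sq.append(np.mean(heights[sel] ** q))
    return np.array(ell), np.array(Sq)

# Usage sketch: for a self-similar path the log-log slope of the first-order moment should be
# close to H; an ordinary Brownian path (H = 1/2) is used here because it takes one line to build.
# rng = np.random.default_rng(0)
# path = np.cumsum(rng.normal(size=2**15))
# ell, S1 = msa_moments(path, np.arange(2**15, dtype=float), windows=[1, 2, 4, 8, 16, 32], q=1)
# slope = np.polyfit(np.log(ell), np.log(S1), 1)[0]
```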
in the present work , a fast fourier transform based wood - chan algorithm is used to synthesize the fbm data with 100 realizations , and points for each .figure [ fig : fbmmsa]a ) shows respectively the calculated second - order by msa for hurst number ( ) , ( ) , ( ) , and ( ) .the power - law behavior is observed for all considered here . for display clarity , these curves have been vertical shifted .visually , when , the measured deviates from the theoretical prediction . such outcome may be related to msa itself or the fbm date generation algorithm .further investigation will be conducted hereof .figure [ fig : fbmmsa]c ) shows the measured , .these curves demonstrate for all the cases the linear dependence of the measured scaling exponent with , whose slope is .in other words , multifractality can successfully be detected by msa for all values ( including ) .consider the so - called singularity spectrum , which is defined through a legendre transform as , for a monofractal process , is independent with , e.g. , , and . in practice ,for a prescribed , a wider range of has , a more intermittent the process is .figure [ fig : fbmmsa]d ) shows the measured in the range .it confirms the monofractal property of the fbm process .we provide a comment here on the deviation of the measured from the theoretical prediction when . in the context of extreme point based msa , the intrinsic structure of fbmis presented by these extrema .therefore how will behave is determined by the relation between the hurst number and the distribution of the extreme points , as well as the fbm data generation .this is beyond the topic of this paper .we will present more details on this topic in the follow - up studies . ) and the hilbert - based method ( ) for the second - order statistics , the trajectory - segment method ( ) and msa ( ) for the first - order statistics . for the hilbert - based statistics, frequency has been converted to time by .b ) the corresponding curves compensated by the dimensional scalings , i.e. and . there is no plateau from sf , in consistency with other reports in the literature .the following convincing scaling ranges can be observed : about from msa , from the trajectory segment analysis and from the hilbert - based method , which verify the prediction of the kolmogorov - landau s phenomenological theory . for display clarity ,these curves have been vertical shifted . ]the lagrangian velocity sf has been extensively studied . because the time scale separation in the lagrangian frame is more reynolds number dependent than the length scale case in the eulerian frame , the finite reynolds number influence becomes stronger , making the lagrangian velocity scaling relation quite controversial .more specifically , this is recognized as a consequence of mixing between large - scale structures and energetic small - scale structures , e.g. , vortex trapping events .the - order sf of the velocity component or is defined as : ^{q}\rangle , \label{sf}\ ] ] where is an arbitrary time separation scale . from dimensional analysis ,the 2nd - order sf is supposed to satisfy : where is the rate of energy dissipation per unit mass and is assumed as a universal constant at high reynolds numbers . 
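For comparison with MSA, the classical structure function of eq. (sf) can be estimated directly from a uniformly sampled velocity signal. A minimal sketch follows; the time lags are expressed in sample units and the variable names are illustrative.

```python
import numpy as np

def structure_function(v, lags, q=2):
    """Classical SF: S_q(tau) = <|v(t + tau) - v(t)|^q>, with tau given in sample units."""
    return np.array([np.mean(np.abs(v[lag:] - v[:-lag]) ** q) for lag in lags])

# Usage sketch: S2 over a range of lags, to be compared with the C0*epsilon*tau prediction
# in the inertial range (velocity_sample is a hypothetical 1D array of sampled velocities).
# lags = np.unique(np.logspace(0, 3, 30).astype(int))
# S2 = structure_function(velocity_sample, lags, q=2)
```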
to analyze this problem, we adopted the data from direct numerical simulation ( dns ) , implemented for the isotropic turbulence in a cubic domain .the boundary conditions are periodic along each spatial direction and kinetic energy is continuously provided at few lowest wave number components .a fine resolution of ( the kolmogorov scale ) ensures resolving the detailed small - scale velocity dynamics .the taylor scale based reynolds number is about .totally million lagrangian particle samples are collected , each of which has about one integral time life span . during the evolution process ,the velocity and velocity derivatives are recorded at each , which is the kolmogorov time .more numerical details can be found in ref . and references therein .recently , this database has been analyzed respectively by , and to identify the inertial scaling behavior .the former study employed the hilbert - based approach , in which different scale events are separated by the empirical mode decomposition without any _ a priori _ basis assumption and the corresponding frequency is extracted by the hilbert spectral analysis .they observed clearly an inertial range of , i.e. .the scaling exponents agree well with the multifractal model ( see details in ref .the latter one studies the extrema of the fluid particle acceleration , which physically can be considered as the boundary markers between different flow regions . with the help of the so - called trajectory segment structure, the clear scaling range does appear .because of interpolation inaccuracy ( noise ) , dns data need to be particularly smoothed , which may lead to some artificial input .figure [ fig : lagrangian]a ) shows the numerical results from various methods : classical sf ( ) and the hilbert - based method ( ) for the second - order statistics , the trajectory - segment method ( ) and msa ( ) for the first - order statistics . except for sf ,clear power - law behaviors can be observed for others . to emphasize this ,curves compensated by the dimensional scalings , i.e. and , are plotted in fig.[fig : lagrangian]b ) .the sf curve does not show any plateau , which is consistent with reports in the literature , even for high cases as experimentally or numerically .the absence of the clear inertial range makes the kolmogorov - landau s phenomenological theory quite controversial . in comparison, the msa curve shows a convincing plateau in the range .it need to mention that for this dns database the inertial range has been recognized as for both the single particle statistics using the hilbert - based methodology and the energy dissipation statistics to check the lagrangian version refined similarity hypothesis . herethe inertial range detected by msa is due to the different scale definition .for the two - dimensional turbulent velocity . due tothe scale - mixing in sf analysis , there is no power - law behavior . 
]th - order from msa .a dual - cascade power - law behavior is observed in the range for the forward enstrophy cascade , and for the inverse energy cascade .b ) the curves compensated by the least square fitted scaling exponent to emphasize the power - law behavior .the vertical solid line indicates the forcing scale for both the forward enstrophy ( ) and inverse energy ( ) cascades .the two solid lines represent respectively the krainchan s prediction for the forward cascade and for the inverse cascade .b ) measured singularity spectrum .the inset shows the enlargement part in .a broader range of and for the forward enstrophy cascade indicates a stronger intermittency . ] two - dimensional ( 2d ) turbulence is an ideal model for the large scale movement of the ocean or atmosphere .we recall here briefly the main theoretical results of 2d turbulence advocated by .the 2d ekman - navier - stokes equation can be written as in which ] .totally , five snapshots with data points are used for analysis .more details of this database can be found in ref . .the conventional sfs are shown in figure[fig:2dsf ] and no clear scaling range can be observed , neither any indication of the aforementioned two regimes . as discussed above and also in ref . , such outcome can be ascribed to the scaling mixing in sf analysis . for comparisonthe results from msa are shown in figure[fig:2dmsa]a ) , in which two regimes with different scaling relations appear , specifically in the range for the forward enstrophy cascade and for the inverse energy cascade . to emphasize the observed power - law behavior ,the compensated curves are then displayed in figure[fig:2dmsa]b ) by using the fitted scaling exponents .two clear plateaus appear , confirming the existence of the dual - cascade process in the 2d turbulence .the scaling exponents are then estimated in these scaling ranges by a least - square - fitting algorithm .figure [ fig:2dexponents]a ) shows the measured dual - cascade .the theoretical predictions by equations ( [ eq : energy ] ) ( i.e. ) and ( [ eq : enstrophy ] ) ( i.e. ) are indicated by solid lines .note that the measured forward cascade curve is larger than the theoretical one .a similar observation for the fourier power spectrum has been reported in ref . , which is considered as an influence of the fluid viscosity . to detect the multifractality , the singularity spectrum then estimated , which is shown in figure[fig:2dexponents]b ) .based on the fbm case test , one can conclude that the inverse cascade is nonintermittent as expected ; while the forward enstrophy cascade shows a clear sign of multifractality : a broad change of and .it also has to point out here that the measured and could be a function of or the ekman friction .systematic analysis of the 2d velocity field with different parameter is necessary in the future for deeper insights .finally , we provide the following general remarks : * in principle , msa is generally applicable without special requirements on the data itself , such as periodicity , the fourier spectrum slope ( steeper than ) , noise perturbation and unsmoothness structures . * the length scale in the context of msais determined by the functional structure rather than being an independent input . at different levelsthe segments are different and thus the scales as well , which conforms with the multi - scale physics . *a serious deficiency of the conventional sf is the strong mixing of different correlation and scaling regimes due to sample averaging , i.e. 
the filtering ( as infrared and ultraviolet ) effect . to define the segment structure helps to annihilate such filtering and extract the possible scaling relations in the respective scale regimes , which has been proved from analyzing the lagrangian and 2d turbulence data . *lw acknowledges the funding support by national science foundation china ( nsfc ) under the grant nos .11172175 and 91441116 .this research work by yh is partially sponsored by the nsfc under grant nos .11202122 and 11332006 .we thank professor g. boffetta and professor f. toschi for providing us the dns data , which are freely available from the icfddatabase : http://cfd.cineca.it for public dns database of the 2d turbulence and the lagrangian turbulence .
|
for many complex systems the interaction of different scales is among the most interesting and challenging features . it seems not very successful to extract the physical properties in different scale regimes by the existing approaches , such as structure - function and fourier spectrum method . fundamentally these methods have their respective limitations , for instance scale mixing , i.e. the so - called infrared and ultraviolet effects . to make improvement in this regard , a new method , multi - level segment analysis ( msa ) based on the local extrema statistics , has been developed . benchmark ( fractional brownian motion ) verifications and the important case tests ( lagrangian and two - dimensional turbulence ) show that msa can successfully reveal different scaling regimes , which has been remaining quite controversial in turbulence research . in general the msa method proposed here can be applied to different dynamic systems in which the concepts of multiscaling and multifractal are relevant . _ keywords _ : intermittency , multifractal , two - dimensional turbulence , lagrangian turbulence
|
over the past six decades the world went through a period of quick and remarkable urbanization . according to the united nations in the year of 2007 , for the first time , the world urban population has surpassed the rural one and , if this process persists , two - thirds of the world population are expected to be living in urban areas by 2050 . on one hand ,cities are usually associated with higher levels of literacy , health care and better opportunities ; on the other hand , unplanned urbanization and bad political decisions also lead to pollution , environmental degradation , growth of crime , unequal opportunities and the increase in the number of people living in substandard conditions . in this sense, there is a vast necessity for finding patterns , quantifying and predicting the evolution of urban indicators , since these investigations may provide guidance for better political decisions and resources allocation . fostered by this need and also due the availability of an unprecedented amount of data at city level ,several researchers have recently promoted an impressive progress into what has been called _ science of cities _ .these new data allowed researchers to probe patterns of the cities to a degree not before possible and one of the most striking and universal finding of these studies was the discovery of robust allometric scaling laws between several urban indicators and the population size . patents , gasoline stations , gross domestic product , crime , indicators of education , suicides , number of election candidates , transportation networks , employees from several sectors , measures of social interaction are just a few examples where robust scaling laws have been found . despite these intrinsic nonlinear relationships it is common to find works that try unveiling relationships between population size and urban indicators by employing linear regressions in the raw data ; also , per capita indicators ( that is , divided by population size ) are ubiquitous in reports of government agencies and are often used as a guide for public policies when analyzing the temporal evolution of a given city or for comparing / ranking the performance of cities with different population sizes .however , these linear regressions may result in misguided / controversial relationships and per capita indicators are oblivious to the allometric scaling laws that make city a complex agglomeration that can not be modeled as a linear combination of its individual components . in this sense, there is a paucity of alternative metrics for urban indicators that may overcome the foregoing problems and provide a fairer comparison between cities of different sizes as well as a better understanding of the city evolution .et al . _ have recently proposed a simple alternative to overcome these nonlinearities by evaluating the difference between the actual value of the urban indicators and the value expected by the allometries with the population size ( that is , the residuals in the allometric relationships ) .this scale - adjusted metric explicitly considers the allometric relationships and already have proved to be useful in the economic context and for unveiling relationships between crime and urban metrics that are not properly carried out by regression analysis . in this article, we follow the evolution of this relative metric for eight urban indicators from brazilian cities in three years ( 1991 , 2000 and 2010 ) in which the national census took place . 
by grouping cities in above and below the allometric laws, we argue that the average of this scale - adjusted metric provides a more appropriate / informative summary of the evolution of the urban indicators when compared with the per capita values ; it also reveals patterns that do not appear when analyzing only the evolution of the per capita values .for instance , while the per capita values of homicides have systematically increased over the last three decades , both the average of the scale - adjusted metric for cities above and below the allometric law are approaching zero , that is , cities where the number of homicides is above the expected by the allometry have managed to reduce this crime , whereas this crime has been increased in those cities where number of homicides is below the allometry .we argue that the nonlinearities may affect the per capita indicators by creating a bias towards large cities for superlinear allometries and a bias towards small cities for sublinear allometries .we further show that these scale - adjusted metrics are strongly correlated with their past values by a linear correspondence , making them particularly good for predicting future values of urban indicators through linear regressions .we have tested this hypothesis via a linear model where the scale - adjusted metric for one indicator in a given census was predicted by a linear combination of all eight metrics evaluated from the preceding census .these simple models account for 31%-97% of the observed variance in data and correctly reproduce the average value of the scaled - adjusted metric when grouping the cities in above and below the allometric laws .motivated by these good agreements , we present a prediction for the values of urban indicators in the year of 2020 by assuming the linear coefficients constants over time . 
By visualizing the predicted changes, we verify the emergence of spatial clusters characterized by regions of the Brazilian territory where the model predicts an increase or a decrease in the values of urban indicators. We further report a list containing all the scale-adjusted metrics as well as the predictions for each city, in the hope that government agencies find this information useful.

We start by considering the average of the per capita values for the eight urban indicators described in the methods section. This is a common practice of government agencies for tracking the evolution of a particular city or for comparing a group of cities with different populations. We observe in fig. [fig:1] that almost all per capita indicators show a clear temporal trend: elderly population, female population, homicides and family income have increased over the years, whereas child labor, illiteracy and male population have decreased (the unemployment rates have evolved in a more complex manner, exhibiting no clear tendency). We could also list the cities in which these indicators have considerably changed, or rank the cities that have made more progress in reducing, for instance, the homicide or illiteracy rates.

One of the main problems with this analysis is that it completely ignores the hypothesis that most urban indicators display allometric relationships (or scale invariance) with the population, that is, Y(t) = a N(t)^{\beta}, where a is a constant, \beta is the allometric (or scaling) exponent and t stands for time. This simple relationship summarizes the (average) effects of increasing the population size on the urban indicators; it states that cities are self-similar in terms of their population, in the sense that average properties of a given city can be inferred only from knowledge of its population. Urban indicators are thus expected to display a deterministic component emerging from very few and general properties of the urban networks related to social and infrastructure aspects of cities. When analyzing per capita values, it is implicitly assumed that the value of an urban indicator is proportional to the population size (\beta = 1) or, in other words, that cities are extensive systems. This idea is opposed to the complex systems approach to cities: complex systems are non-extensive, meaning that their isolated parts do not behave in the same manner as when they are interacting. Cities have similar properties and only make sense as an entire "organism"; in fact, there is robust evidence favoring the non-extensive and universal nature of cities (that is, the urban scaling hypothesis) across different cultures and historical periods. Thus, several properties of a city of a given size cannot be linearly scaled to another city with a larger or smaller population. The dynamical processes mediated by the urban networks make the scale operation a nonlinear transformation for several urban indicators, often resulting in per capita savings of material infrastructure and in gains of socio-economic productivity. From a more technical point of view, whenever the allometric exponent is different from one, there is a remaining component related to the population size when evaluating the per capita values of these urban indicators, that is, Y/N = a N^{\beta-1}, which creates a bias towards large cities for \beta > 1 and towards small cities for \beta < 1; per capita measures are only efficient in correctly removing the effect of the population size in an urban indicator for \beta = 1.
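To make the scale adjustment explicit, the following is a minimal sketch of how the allometry could be fitted on the log-log scale and how the residual (scale-adjusted) metric could be computed for each city. Orthogonal distance regression is used because it is the fitting procedure described later in the methods; the choice of base-10 logarithms and the function names are illustrative assumptions, and strictly positive indicator values are assumed.

```python
import numpy as np
from scipy import odr

def scale_adjusted_metric(population, indicator):
    """Fit log10 Y = log10 a + beta*log10 N and return the residuals D_i for each city."""
    logN, logY = np.log10(population), np.log10(indicator)   # assumes indicator > 0
    linear = odr.Model(lambda B, x: B[0] + B[1] * x)
    out = odr.ODR(odr.Data(logN, logY), linear, beta0=[0.0, 1.0]).run()
    log_a, beta = out.beta
    D = logY - (log_a + beta * logN)      # positive: city lies above the allometric law
    return D, beta, log_a

# Usage sketch: cities with D > 0 lie above the allometry and cities with D < 0 below it;
# averaging D within these two groups gives the summaries discussed in the text.
```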
for our data , a complete description of the allometric relationships between the eight urban indicators and the populationis presented in the , where we have confirmed the presence of allometries in our data as summarized in table [ tab : allometric ] ( see also and ) . .* allometric relationships between urban indicators and population size .* values of parameters and obtained via orthogonal distance regression on the relationship between and for each urban indicator in the year ( see ) .the values inside the brackets are the standard errors ( se ) in the last decimal of the estimated parameters .the last column shows the values of the pearson linear correlation coefficient for each allometry in log - log scale . [ cols="<,^,>,>,^",options="header " , ] [ tab:2 ] in addition of being autocorrelated with their past values , for the urban indicator also displays statistically significant crosscorrelations with for other indicators ( ) .these memory effects and also the fact that the residuals surrounding the relationships versus are very close to gaussian distributions ( ) with standard deviations across windows practically constant ( ) make these scale - adjusted metrics particularly good for being used in linear regressions aiming forecasts .we have thus adjusted the linear model ( via ordinary least - squares method ) by considering the relationships versus and versus . in eq .[ eq : glinmodel ] , is the linear coefficient quantifying the predictive power of on ( is the intercept coefficient ) and is the noise term accounting for the effect of unmeasurable factors . the results exhibiting the linear coefficients of each linear regression for the two combinations of years are shown in .we note that these simple models account for 31%97% of the observed variance in and that they correctly reproduce the average values of the scale - adjusted metric above and below the allometric laws for the years 2000 and 2010 only using data from the years 1991 and 2000 , respectively ( see ) .we have further compared the distributions of the empirical values of with the predictions of these linear models and observed that the agreement is remarkable good for the indicators elderly , female and male population as well as for illiteracy and income ( and ) .motivated by these good agreements , we proposed to forecast the values of in the year of 2020 ( next brazilian national census ) . in order to do so, we have considered that the linear coefficients are constant over time and employed the average value of over the two combinations of years used in eq .[ eq : glinmodel ] for predicting the values of .it is worth noting that by assuming constant , we are ignoring the evolution of socio - economic and policy factors . 
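A compact way to fit the linear model of eq. (glinmodel) and to propagate the metrics one census step ahead is sketched below. D_1991, D_2000 and D_2010 are assumed to be arrays of shape (number of cities, 8), built for example with the scale_adjusted_metric sketch above; the averaging of the two fitted coefficient sets follows the procedure described in the text, and all names are illustrative.

```python
import numpy as np

def fit_transition(D_prev, D_next):
    """Ordinary least squares fit of every metric at t+1 on all eight metrics at t (plus intercept)."""
    X = np.hstack([np.ones((D_prev.shape[0], 1)), D_prev])
    coef, *_ = np.linalg.lstsq(X, D_next, rcond=None)
    return coef                                   # shape: (1 + 8, 8)

# Usage sketch (hypothetical arrays):
# coef = 0.5 * (fit_transition(D_1991, D_2000) + fit_transition(D_2000, D_2010))
# D_2020 = np.hstack([np.ones((D_2010.shape[0], 1)), D_2010]) @ coef
```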
in an ideal scenario, one could track the evolution of the values of for achieving more reliable predictions .however , our data ( that is , the two values for ) do not enable us to probe possible evolutionary behaviors in the values of .even so , as pointed by bettencourt , the dynamics of the urban metrics seems to be dominated by long timescales ( years ) , and thus the approach of constant coefficients should be seen as a first approximation .the grays bars in fig .[ fig:3 ] show the averages after grouping the cities with and .we observe that predictions for the average values basically keep the trends presented in the previous years ; for unemployment , in which the trend was not very clear , the predictions put this indicator together with most indicators , where the average has been decreasing for cities initially above the allometric law and increasing for those initially below . in order to gain further information on the predictions ,we build a geographic visualization of the expected changes in between the years and .the circles over the maps in fig .[ fig:5 ] show the geographic location of brazilian cities ; the radii of these circles are proportional to and are colored with shades of azure for cities where <0 ] ( the indicator is expected to increase ) ; in both cases , the darker the shade , the larger is the absolute value of the difference $ ] . perhaps , the most striking feature of these visualizations is the fact that the predicted changes appear spatially clustered for almost all indicators , which somehow reflects the geographic inequalities existing in brazil ; however , some intriguing patterns are indicator - dependent .-2.25in0 in between the years and . *each circle represents a city and the radius of the circle is proportional to .we color the circles according to the difference between and : shades of azure indicate that we expect a decrease in the values of , whereas shades of red show the cities where we expect an increase in the values of .the labeled cities are the capitals of the twenty - seven federal units of brazil ( the brazilian states and the federal district ) .the forecast for the values of were obtained through the linear model of eq .[ eq : glinmodel ] , where the linear coefficient were averaged over the two combinations of years and .we note that the changes in appear spatially clustered for most indicators , forming regions where most cities are expected to increase or decrease the value of the scale - adjusted metric . ] for child labour , is expected to increase around three of the most densely populated regions that contain the metropolitan areas of so paulo , rio de janeiro and the metropolitan areas of almost all northeast capitals ; we further observe that a decrease in child labour cases is expected in mostly of the inner and southern cities .for the indicators elderly , female and male populations the clustering of the changes in is quite evident : elderly and female populations are foreseen to decrease in mostly of the northeast cities and display an increasing tendency in large part of the other regions ( male population behaves anti - symmetrically to the female population ) . 
in the case of homicides , our model predicts a decrease in for the vast majority of southern cities and we further observe a stripe near the east coast where is expected to decrease for mostly of cities ( both densely populated areas ) ; on the other hand , inner cities ( specially inner cities from the state of so paulo and northeastern region ) are expected to increase , suggesting that this violent crime may be `` moving '' towards less populated areas of the interior of brazil . for illiteracy , again, the clustering of the changes is easily perceptible : we expect an increase in for mostly of the northeast and northernmost cities , while a decrease is predicted for the majority of the cities from other regions ( excluding several inner cities of southernmost region ) . for family income, we also observe a clustering in the changes where most northern cities are expected to increase the value of ( specially the inner cities of these regions ) , while for most cites in the central part of brazil are expected to decrease the value of ; these expected changes may be ( at least in part ) related to the `` bolsa famlia '' ( family allowance ) program a large scale social welfare program of the brazilian government ( more than 14 million families were beneficiaries in 2013 ) for providing financial aid to poor families via direct cash transfer because large part of the families receiving this aid are from the north and northeast regions .it is worth noting that for participating in the program , families must insure that their children attend to school and thus one would expect a reduction in for illiteracy in same regions that concentrate the beneficiaries of the program , which was not predicted by the model .this result suggest that simply enforce school attendance may not be efficient for reducing illiteracy , which can also be explained by common observed poor conditions of the public school system of those regions .finally , for unemployment , there is no clear clustering of the changes in , but instead we expect a decrease in widespread throughout the brazilian territory ( except in the southernmost region , where we note the prevalence of light shades of red ) .our article discusses that despite being widely used in government reports and also in academic works , per capita indicators completely ignore the universal allometric laws that appear ruling the growth dynamics of cities .we discuss that per capita indicators can be biased towards small or large cities depending on whether we have sub or superlinear allometries with the population size .we thus employed a scale - adjusted metric by evaluating the difference between the actual values of urban indicators and the ones that are expected by the allometric relationships . when investigating the evolution of the scale - adjusted metrics , we have reported patterns that do not appear in the per capita indicators .the scale - adjusted metrics also display a linear correspondence with their past values , a feature that facilitates the use of linear regressions for modeling the urban indicators . by employing simple linear models for describing the scale - adjusted metrics based on their past values, we verified that these models account for 31%97% of the observed variance and correctly reproduce the average values of the scale - adjusted metric . 
assuming the linear coefficients constant over time, we present predictions for the values of the scale - adjusted metrics in year of 2020 , when the next brazilian census will happen .we observe that the predicted changes for the urban indicators appear ( for most cases ) spatially clustered , that is , forming regions where most cities are expected to increase or decrease the value of the scale - adjusted metric . apart from this visualization of the predicted changes , we also provide a table ( ) with the values of the scale - adjusted metrics for the three past census data as well as the predictions for the next one in the supplementary materials .we believe that our analysis may find potential applications on development of new policies and resources allocation in the context of urban planing .finally , we note that the methods worked out here can also be directly applied in other contexts where allometries are present such as for economic indexes and biological quantities .the data we analyzed consist of the population size and eight urban indicators for each brazilian city in the years , and in which the national census took place .we filter these data by selecting the 1605 cities for which all the eight urban indicators were available , this corresponds to 28.8% of the total number of brazilian cities but account for 76.5% of the total population of brazil .these data are maintained and made freely available by the department of informatics of the brazilian public health system datasus .the eight urban metrics are defined as follows : _ child labour : _ the proportion of the population aged 10 to 15 years who is working or looking for work during the reference week , in a given geographic area , in the current year ; _ elderly population : _ the number of inhabitant of a given city aged 60 years or older ; _ female population : _ the number of inhabitant of a given city that is female ; _ homicides : _ injuries inflicted by another person with intent to injure or kill , by any means . _illiteracy : _ it gives the number of inhabitants in a given geographic area , in the current year , aged 15 years or older , who can not read and write at least a single ticket in the language they know ; _ family income : _ this indicator gives the average household incomes of residents in a given geographic area , in the current year .it was considered as family income the sum of the monthly income of the household , in reals ( brazilian currency ) divided by the number of its residents ; _ male population : _ the number of inhabitant of a given city that is male ; _ unemployment : _ it gives the number of inhabitant aged 16 years or older who is without working or looking for work during the reference week , in a given geographic area , in the current year . despite there being other definitions ,the results presented here have been obtained by considering that cities are the smallest administrative units with a local government ( municipalities or _municpios _ ) .the other commonly employed definition is the metropolitan area , which is composed of more than one municipality and its is usually associated with the coalescence of several municipalities . as discussed by bettencourt _et al . _ , the choice of the `` unit of analysis '' is crucial when studying properties of cities . 
regarding the scaling analysis :on one hand , the disaggregation of the correct urban definition can introduce a bias in the value of the scaling exponent ( either by reducing or increasing its expectation value ) ; on the other hand , the aggregation of the correct urban definition usually make the allometry more linear .in fact , changes in the scaling exponents have been reported when choosing different definitions of city .however , there is no fail - safe procedure for defining the correct boundaries of a city , also some urban indicators are actually more spatially restricted than others ( for instance , homicides versus family income ) .we have also analyzed our data after considering simultaneously the municipalities that do not belong to any metropolitan area and by aggregating the municipalities of the 39 metropolitan areas existing in brazil . despite the observation of relatively small changes in the scaling exponents ,our conclusions remain unaltered under this scenario .as we previously mentioned , several works have reported the existence of robust allometric relationships between several urban indicators and the population size .regarding brazilian cities , scaling laws already have been identified for several urban indicators , mainly because of the existence of reliable data made freely available by brazilian agencies ( datasus and ibge ) . here , we want to confirm that the urban scaling hypothesis holds for all our urban indicators and if these allometries have been changed over time .specifically , we test the hypothesis that an urban indicator can be described by a power - law function of the population size , that is , , where is the allometric ( or scaling ) exponent and is constant . in order to do so, we have plotted the logarithm of each urban indicator against the logarithm of the population and adjusted a linear model via orthogonal distance regression ( as implement in the package ` scipy.odr ` of the python library scipy ) to all these relationships .although the empirical relationships present different scattering degrees , they all display good quality linear relationships ( pearson correlation ranging from to see table [ tab : allometric ] ) which are well described by linear models in log - log scale ( see fig [ fig:2 ] and and ) . 
from table [tab : allometric ] , we further note that the values of for illiteracy , family income and unemployment display a weak decreasing tendency over the years , while the other indicators show only small fluctuations ( no clear evolutive tendency ) .a weak decreasing tendency for unemployment also appears in the work of ignazzi on the same data from the years of 2000 and 2010 .the values of thus classify our indicators in two groups : female population , homicides and unemployment have superlinear relationships with the population ( ) ; while child labor , elderly population , illiteracy , family income and male population have sublinear ones ( ) .it is worth noting that despite the allometric exponents be close to one for elderly , female and male populations , the allometries between these indicators and total population are almost perfectly correlated , producing values of very close but statistically different from one .we further observe that the values of the allometric exponents reported here may slightly differ from previous - reported one due the different fitting procedures as well as different urban definitions ; however , these discrepancies are often very small ( for instance , by considering generalized least squares via the cochrane - orcutt procedure and another definition of city , ignazzi have found and for unemployment , respectively in years of 2000 and 2010 ) .h.v.r . and l.g.a.a .designed the research , analyzed the data and prepared the figures .all authors wrote and reviewed the manuscript . 10 united nations , department of economic and social affairs , population division ( 2014 ) .world urbanization prospects : the 2014 revision , highlights ( st / esa / ser.a/352 ) .available : http://esa.un.org/unpd/wup/highlights/wup2014-highlights.pdf .date of access : 01 2015 jun .louf r , barthelemy m ( 2014 ) how congestion shapes cities : from mobility patterns to scaling .rep . 4 : 5561 . bettencourt lma , lobo j , helbing d , kuhnert c , west gb ( 2007 ) growth , innovation , scaling , and the pace of life in cities . proc .u. s. a. 104 : 7301 .arbesman s , kleinberg jm , strogatz w h ( 2009 ) superlinear scaling for innovation in cities .phys . rev .e 79 : 016115 .bettencourt lma , west gb ( 2010 ) a unified theory of urban living .nature 467 : 912 .bettencourt lma , lobo j , strumsky d , west gb ( 2010 ) urban scaling and its deviations : revealing the structure of wealth , innovation and crime across cities .plos one 5 : e13541 .gomez - lievano a , youn h , bettencourt lma ( 2012 ) the statistics of urban scaling and their connection to zipf s law .plos one 7 : e40393 .alves lga , ribeiro hv , mendes rs ( 2013 ) scaling laws in the dynamics of crime growth rate .physica a 392 : 2672 .alves lga , ribeiro hv , lenzi ek , mendes rs ( 2013 ) distance to the scaling law : a useful approach for unveiling relationships between crime and urban metrics .plos one 8 : e69580 .alves lga , ribeiro hv , lenzi ek , mendes rs ( 2014 ) empirical analysis on the connection between power - law distributions and allometries for urban indicators .physica a 409 : 175 .ignazzi ac ( 2014 ) scaling laws , economic growth , education and crime : evidence from brazil .lespace gographique 4 : 324 .melo hpm , moreira aa , batista e , makse ha , andrade js ( 2014 ) statistical signs of social influence on suicides .sci . rep . 4 : 6239. 
mantovani mc , ribeiro hv , moro mv , picoli s , mendes rs ( 2011 ) scaling laws and universality in the choice of election candidates .epl 96 : 48001 .mantovani mc , ribeiro hv , lenzi ek , picoli s , mendes rs ( 2013 ) engagement in the electoral processes : scaling laws and the role of political positions .e 88 : 024802 .samaniego h , moses me ( 2008 ) cities as organisms : allometric scaling of urban road networks .j. transp . land .use 1 : 21 .louf r , roth c , barthelemy m ( 2014 ) scaling in transportation networks .plos one 9 : e102007 .pumain d , paulus f , vacchiani - marcuzzo c , lobo j ( 2006 ) an evolutionary theory for interpreting urban scaling laws .cybergeo 343 : 20 .pan w , ghoshal g , krumme c , cebrian m , pentland a ( 2013 ) urban characteristics attributable to density - driven tie formation .commun . 4 : 1961 .gordon mb ( 2010 ) a random walk in the literature on criminality : a partial and critical view on some statistical analysis and modeling approaches .math . 1 : 283 .pumain d ( 2004 ) scaling laws and urban systems .sfi working paper : 2004 - 02 - 002 .available : http://www.santafe.edu/media/workingpapers/04-02-002.pdf .date of access : 01 2015 jun .bettencourt lma ( 2013 ) the origins of scaling in cities .science 340 : 1438 .podobnik b , horvatic d , kenett dy , stanley he ( 2012 ) the competitiveness versus the wealth of a country .sci . rep . 2 : 678 .lobo j , bettencourt lma , strumsky d , west gb ( 2013 ) urban scaling and the production function for cities .plos one 8 : e58407 .bettencourt lma , lobo j , youn h ( 2013 ) the hypothesis of urban scaling : formalization implications and challenges .sfi working paper : 2013 - 01 - 004 .available : http://arxiv.org/abs/1301.5919 .date of access : 01 2015 jun .ortman sg , cabaniss ahf , sturm jo , bettencourt lma ( 2014 ) the pre - history of urban scaling .plos one 9 : e87902 .hesketh t , xing zw ( 2006 ) abnormal sex ratios in human populations : causes and consequences .u. s. a. 103 : 13271 .brazil s public healthcare system ( sus ) , department of data processing ( datasus ) , 2011 .available : http://www.datasus.gov.br/. date of access : 01 2015 jun .angel s , sheppard sc , civco dl , buckley r , chabaeva a , et al .( 2005 ) the dynamics of global urban expansion .washington dc : world bank .schlpfer m , bettencourt lma , grauwin s , raschke m , claxton r , smoreda z , west gb ,ratti c ( 2014 ) the scaling of human interactions with city size . j. r. soc .interface 11 : 20130789 .louf r , barthelemy m ( 2004 ) scaling : lost in the smog .b 41 : 767 .arcaute e , hatna e , ferguson p , youn h , johansson a , batty m ( 2015 ) constructing cities , deconstructing scaling laws .j. r. soc .interface 12 : 20140745 .jones e , oliphant e , peterson p , _ et al ._ scipy : open source scientific tools for python , 2001 , available : http://www.scipy.org/. date of access : 01 2015 jun .* scale - adjusted metrics for brazilian cities .* values of the scale - adjusted metrics ( ) for the eight urban indicators of brazilian cities in years of 1991 , 2000 and 2010 as well as the predictions obtained via the linear model ( eq . [ eq : glinmodel ] ) for year of 2020 . * allometric laws with the population size . 
*the scatter plots show the allometric relationships between the urban indicators ( from top to bottom : child labor , elderly population , female population and homicides ) and population size for the years ( red dots ) , ( blue dots ) and ( green dots ) in log - log scale .the allometric exponents ( see for details on the calculation of ) are shown in the figures .see for the other indicators . * memory effects in the evolution of the scale - adjusted metrics . *the purple dots show the values of versus for each city .the dashed lines are fits of the linear model ( eq . [ eq : linmodeldistance ] ) obtained via ordinary least - square regression .the values of and their standard errors are shown in the plots and also summarized in table [ tab:2 ] . * cross - correlations between the urban indicators . *the matrix plot on left shows the values of the pearson correlation coefficient between the scale - adjusted metric for a given indicator ( one indicator per row ) in the year and all the other indicators in the year ( one indicator per column ) .the right panel does the same for the years and .the value inside each cell is the pearson correlation and each one is also colored according to this value .we note that all indicators are strongly correlated with their own past values ; furthermore , all indicators also display relevant correlations with at least one other indicator . *cumulative distributions of the normalized fluctuations surrounding the relationships between the scale - adjusted metrics and *. the plots show the cumulative distributions of the normalized residuals of the linear regressions between and ( fig .[ fig:4 ] and ) for the years 2000 - 1991 ( blue lines ) and 2010 - 2000 ( green lines ) in comparison with the standard gaussian ( dashed lines ) .we also show the -values of the cramr von mises method for testing the null hypotheses that the residuals are normally distributed .we observe that the normality of the data is rejected in most cases ( probably due the small heteroskedasticity present in these relationships see ) .however , no huge differences are observed between the gaussian cumulative curve and the empirical cumulative distributions , suggesting that can be approximately described as a standard gaussian noise . * window - evaluated standard deviation over the relationship between the scale - adjusted metrics and . * these plots show the standard deviation of the scale - adjusted metrics versus the average value of evaluated in five equally spaced windows taken from the relationship between and ( fig . [ fig:4 ] and ) for the years 2000 - 1991 ( left panel ) and 2010 - 2000 ( right panel ) .we note that the standard deviation can be approximated by a constant for most indicators in both combinations of years .we further observe that the small fluctuations in are probably the reason of why the cramr von mises test has rejected the normality of the fluctuations shown in . 
when fitting the linear models of eq .[ eq : glinmodel ] , we have also taken into account this small heteroskedasticity ( as implemented in the stata 13 http://www.stata.com via the ` robust ` option in the ` regress ` function ) but the linear coefficients remain practically the same .* comparisons between the average values of the scale - adjusted metrics obtained from the linear models and the empirical ones .* we have applied the linear model of eq .[ eq : glinmodel ] for predicting the values of in the year of only using data from the year of as well as for predicting in the year of only using data from the year of . in both cases ,we have calculated the average for the predictions ( gray bars ) after grouping the cities in above ( a ) and below ( b ) the allometric laws ( in that year ) and compared these results with the same averages evaluated using the empirical data ( blue bars for year of 2000 and green bars for the year of 2010 ) .the errors bars are 95% bootstrapping confidence intervals for the average values .we observe that the predicted average values are in very good agreement with the empirical values for all urban indicators in both years .* comparisons between the cumulative distributions of the scale - adjusted metrics obtained from the linear models and the empirical ones .* we have obtained the values of in the year of using the linear model of eq .[ eq : glinmodel ] and by employing data from the year of .we thus calculated the cumulative distributions functions ( cdf ) of for the predicted values ( black lines ) and the empirical ones ( blue lines ) .we observe that the agreement is very good for the population indicators , illiteracy and family income ; for the other indicators we observe that the model fails in reproducing the tails of the distributions .* coefficients of the linear regression model of the eq .[ eq : glinmodel]*. values of the linear coefficients in the model of the eq .[ eq : glinmodel ] for the relationships versus and versus for the eight urban indicators .
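the two fits referred to throughout the captions above , the allometric law and the linear model of eq . [ eq : glinmodel ] , can be summarized in a short script . the sketch below is illustrative only ( it is not the authors' code , and the synthetic arrays merely stand in for the census data ) : the scale-adjusted metric is computed as the residual of the log-log allometric regression , and the memory model is an ordinary least-squares fit of the metric at one census against its value at the previous one .

```python
import numpy as np
from scipy import stats

def scale_adjusted_metric(population, indicator):
    """Residual of the log-log allometric fit Y ~ A * N^beta, one value per city."""
    logN, logY = np.log10(population), np.log10(indicator)
    beta, logA, *_ = stats.linregress(logN, logY)
    return logY - (logA + beta * logN), beta

def memory_model(xi_t1, xi_t2):
    """Ordinary least-squares fit of the metric at the later census against the earlier one."""
    b, a, r, p, stderr = stats.linregress(xi_t1, xi_t2)
    return a, b, r**2

# usage sketch with synthetic numbers standing in for two census years
rng = np.random.default_rng(0)
N  = 10**rng.uniform(4, 7, 300)                       # city populations
Y1 = 1e-3 * N**1.15 * 10**rng.normal(0, 0.10, 300)    # superlinear indicator, first census
xi1_true = np.log10(Y1 / (1e-3 * N**1.15))
Y2 = 1e-3 * N**1.15 * 10**(0.8 * xi1_true + rng.normal(0, 0.05, 300))

xi1, beta1 = scale_adjusted_metric(N, Y1)
xi2, beta2 = scale_adjusted_metric(N, Y2)
a, b, r2 = memory_model(xi1, xi2)
print(f"beta(t1)={beta1:.2f}  beta(t2)={beta2:.2f}  "
      f"xi(t2) ~ {a:.3f} + {b:.3f}*xi(t1)  R^2={r2:.2f}")
```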
|
more than half of the world population now lives in cities , and this number is expected to reach two-thirds by 2050 . fostered by the relevance of a scientific characterization of cities and by the availability of an unprecedented amount of data , academics have recently immersed themselves in this topic , and one of the most striking and universal findings was the discovery of robust allometric scaling laws between several urban indicators and the population size . despite this , most governmental reports and several academic works still ignore these nonlinearities , often analyzing the raw or the per capita value of urban indicators , a practice that actually makes the urban metrics biased towards small or large cities depending on whether we have super- or sublinear allometries . by following the ideas of bettencourt _ et al . _ [ plos one 5 ( 2010 ) e13541 ] , we account for this bias by evaluating the difference between the actual value of an urban indicator and the value expected from the allometry with the population size . we show that this scale-adjusted metric provides a more appropriate and informative summary of the evolution of urban indicators and reveals patterns that do not appear in the evolution of per capita values of indicators obtained from brazilian cities . we also show that these scale-adjusted metrics are strongly correlated with their past values by a linear correspondence and that they also display cross-correlations among themselves . simple linear models account for 31%-97% of the observed variance in the data and correctly reproduce the average of the scale-adjusted metrics when grouping the cities into those above and below the allometric laws . we further employ these models to forecast future values of urban indicators and , by visualizing the predicted changes , we verify the emergence of spatial clusters characterized by regions of the brazilian territory where we expect an increase or a decrease in the values of urban indicators .
|
the density matrix renormalization group ( dmrg ) algorithm proposed by white to solve one-dimensional effective models in condensed matter theory has been established as a powerful method to tackle strongly correlated systems in quantum chemistry . in contrast to other active space methods such as casscf and casci , much larger active spaces can be treated . the original dmrg algorithm formulated by white is a variant of other renormalization group methods , relying on hilbert space decimation and reduced basis transformations . instead of truncating the eigenstates of the hamiltonian according to their energy , the selection of eigenstates based on their weight in reduced density matrices dramatically improved the performance for one-dimensional systems studied in condensed matter physics . an important contribution towards the present understanding of the algorithm was made by östlund and rommer , who showed that the block states in dmrg can be represented as matrix product states ( mps ) . this representation permitted predictions for the decay of the spectra of reduced density matrices via area laws for the entanglement entropy , and it allowed verstraete et al . to derive the dmrg algorithm from a variational principle . at first , mps were mainly a formal tool to study the properties of dmrg ; an application for numerical purposes was only considered for special cases , such as calculations under periodic boundary conditions . the usefulness of the matrix-product formalism for general applications was demonstrated by mcculloch and by crosswhite et al . , who employed the concept of matrix product operators ( mpo ) to represent hamiltonian operators . mpos were previously introduced in ref . to calculate finite-temperature density matrices . the main advantage of mps is that they encode wave functions as stand-alone objects that can directly be manipulated arithmetically as a whole . in traditional dmrg , by contrast , a series of reduced basis transformations entails a sequential dependency dictating when a certain part of the stored information becomes accessible . an mps-based algorithm may therefore process the information from independently computed wave functions . mcculloch exploited this fact to calculate excited states by orthogonalizing the solution of each local dmrg update against previously calculated lower-lying states . this state-specific procedure converges excited states quickly and avoids the performance penalty incurred by a state-average approach , where all states must be represented in one common and consequently larger basis . we are not aware of any excited-state algorithm based on projecting out lower states in the context of traditional dmrg , but note that wouters et al . employed such an algorithm in the chemps2 dmrg program , which represents the wave function as an mps but employs a traditional operator format . we may refer to these traditional dmrg programs in quantum chemistry as first-generation programs and denote a truly mpo-based implementation of the dmrg algorithm as a second-generation dmrg algorithm in quantum chemistry . such a second-generation formulation allows for more flexibility in finding the optimal state and for a wider range of applications . the possibility of storing the wave function in the mps form allows us to perform ( complex ) measurements at a later time without the need to perform a full sweep through the system for the measurement .
in the mpo formalism, we can perform operator arithmetic , such as additions and multiplications .an example where this is useful is the calculation of the variance , requiring the expectation value of the squared hamiltonian . since the variance vanishes for an exact eigenstate, it is a valuable measure to assess the quality of dmrg wave functions , albeit of limited relevance for quantum chemistry , because quantum chemical dmrg calculations are limited by the size of the hamiltonian mpo , whose square is too expensive to evaluate for large systems . but apart from the possibility of operator arithmetic , the adoption of mpos for quantum chemical dmrg has additional advantages .the mpo structure inherently decouples the operator from core program routines performing operations on mps and mpos .the increased flexibility for hamiltonian operators , on the one hand , permitted us to implement both abelian and non - abelian symmetries with only one set of common contraction routines .we note , that the relativistic coulomb - dirac hamiltonian can also be solved by the same set of program routines , which we will describe in a later work .measuring observables , on the other hand , requires no additional effort ; only the locations of the elementary creation and annihilation operators need to be specified for a machinery capable of calculating expectation values of arbitrary mpss and mpos .the 26 different correlation functions required to calculate single- and two - site entropies , for example , can thus be specified as a program input .the algorithm that we develop in this work is implemented based on the alps mps program , which is a state - of - the - art mpo - based computer program applied in the condensed matter physics community . to its quantum chemical version presented in this workwe refer to as qcmaquis .this work is organized as follows . in secs .[ sec : mps ] and [ sec : mpo ] we introduce the basic concepts of mps and mpo respectively .the inclusion of fermionic anticommutation is discussed in sec .[ sec : fermions ] . in sec .[ sec : eval ] , we describe how mpos act on mpss .finally , we apply our implementation to the problem of two - dimensionally correlated electrons in graphene fragments in sec .[ sec : calculations ] .any state in a hilbert space spanned by spatial orbitals can be expressed in terms of its configuration interaction ( ci ) coefficients with respect to occupation number vectors . with . for mps ,the ci coefficients are encoded as the product of matrices , explicitly written as so that the quantum state reads collapsing the explicit summation over the indices in the previous equation as matrix - matrix multiplications , we obtain in compact form where , and ( ) are required to have matrix dimensions , , and , respectively , as the above product of matrices must yield the scalar coefficient .the reason why eq . is not simply a more complicated way of writing the ci expansion in eq .is that the dimension of the matrices may be limited to some maximum dimension , referred to as the number of renormalized block states .this restriction is the key idea that reduces the exponential growth of the original full - ci ( fci ) expansion to a polynomially scaling ansatz wave function . while the evaluation of all ci coefficients is still exponentially expensive , the ansatz in eq . allows one to compute scalar products and expectation values of an operator in polynomial time as shown in section [ sec : expectation_values ] . 
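to make the last statement concrete , the following small numpy sketch ( an illustration with random tensors , not the program described in this work ) evaluates a single ci coefficient as a product of site matrices and obtains the norm by contracting site by site , which is exactly the grouping of summations discussed next .

```python
import numpy as np

rng = np.random.default_rng(1)
L, d, m = 6, 4, 8                         # sites, local dimension (|0>,|up>,|down>,|updown>), bond cap
dims = [1] + [m] * (L - 1) + [1]
mps = [rng.normal(size=(d, dims[i], dims[i + 1])) for i in range(L)]

def coefficient(mps, occupation):
    """CI coefficient c_sigma as the product of the selected site matrices."""
    mat = np.eye(1)
    for site, sigma in zip(mps, occupation):
        mat = mat @ site[sigma]
    return mat[0, 0]

def norm_squared(mps):
    """<psi|psi> by sequential contraction: cost polynomial in L, d and m."""
    env = np.eye(1)                       # left environment, shape (bond, bond)
    for site in mps:
        # contract the environment with the bra and ket tensors of the current site
        env = np.einsum('ab,sac,sbd->cd', env, site, site)
    return env[0, 0]

occ = [0, 3, 1, 2, 0, 3]                  # one occupation-number vector
print("c_sigma =", coefficient(mps, occ))
print("norm^2  =", norm_squared(mps))
```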
at the example of the norm calculation we observe that the operational complexity only reduces from exponential to polynomial , if we group the summations as in addition to limiting the matrix sizes . the price to pay, however , is that such an omission of degrees of freedom leads to the introduction of interdependencies among the coefficients .fortunately , in molecular systems , even mps with a drastically limited number of accurately approximate ground and low - lying excited state energies ( see refs . for examples ) .moreover , mps are systematically improvable towards the fci limit by increasing and extrapolation techniques are available to estimate the distance to this limit for a simulation with finite . despite the interesting properties of mps , eq. might still not seem useful , as we have not discussed how the matrices can be constructed for some state in the form of eq . .although the transformation from ci coefficients to the mps format is possible , it is of no practical relevance , because the resulting state would only be numerically manageable for small systems , where fci would be feasible as well .dmrg calculations may , for instance , be started from a random or from a state guess based on quantum informational criteria .the lack of an efficient conversion to and from a ci expansion is therefore not a disadvantage of the mps ansatz .we also note that given an mps state , the most important ci coefficients can be reconstructed even for huge spaces .mps that describe eigenstates of some given hamiltonian operator inherit the symmetries of the latter and the constitutent mps tensors therefore feature a block diagonal structure .examples of how block diagonal tensors can be exploited in numerical computations may be found in refs . and . for additional information about mps such asleft and right normalization with singular value or qr decomposition we refer to ref . and to the detailed review by schollwck .formally , the mps concept can be generalized to mpos .the coefficients of a general operator + may be encoded in matrix - product form as since we are mainly interested in operators corresponding to scalar observables , the two indices and are restricted to , such that we may later express the contraction over the indices again as matrix - matrix multiplications yielding a scalar sum of operator terms . combining eqs . and , the operator reads in contrast to the mps tensors in eq . , the tensors in eq .each have an additional site index as superscript originating from the bra - ket notation in eq . .to simplify eq . , we perform a contraction over the local site indices in by defining so that eq. reads the motivation for this change in notation is that the entries of the resulting matrices are the elementary operators acting on a single site , such as the creation and annihilation operators and . to see this , note that we may write , for instance , which in practice can be represented as a matrix with two non - zero entries equal to . hence , essentially collects all operators acting on site in matrix form . if we again recognize the summation over pairwise matching indices as matrix - matrix multiplications, we may drop them and rewrite eq . as in practical applications one needs to find compact representations of operators corresponding to physical observables . for our purposes , the full electronic hamiltonian, is particularly important .the mpo formulation now allows us to arrange the creation and annihilation operators in eq . 
into the operator valued matrices from eq . .in the following , we present two ways to find such an arrangement .first , a very simple scheme is discussed to explain the basic concepts with a simple example .then , to obtain the same operational count as the dmrg in its traditional formulation , we describe in a second step , how the matrices can be constructed in an efficient implementation .we start by encoding the simplest term appearing in the electronic hamiltonian of eq .( [ eq : hamil ] ) , , as an mpo . with the identity operator on sites , we may write for simplicity , we neglected the inclusion of fermionic anticommutation , which will be discussed in section [ sec : fermions ] . to express this operator as an mpo, we need to find operator valued matrices , such that . in this case, there is only one elementary operator per site available .the only possible choice is therefore which recovers after multiplication .note that the coefficient could have been multiplied into any site and its indices are therefore not inherited to the notation . by substituting the identity operators in eq . with elementary creation or annihilation operators at two additional sites, we can express two - electron terms as an mpo .while this changes the content of the matrices at those sites sites , their shape is left invariant . turning to the mpo representation of the hamiltonian in eq ., we enumerate its terms in arbitrary order , such that we may write where is the -th term of the hamiltonian in the format of eq . .by introducing the site index , we can refer to the elementary operator on site of term by and generalize eq . to the formula above states how a sum of an arbitrary number of terms can be encoded in matrix - product form .since all matrices are diagonal , except and which are row and column vectors respectively , the validity of is simple to verify . as the number of terms in the hamiltonian operator is equal to the number of non - zero elements of each , the cost of applying the mpo on a single site scales as with respect to the number of sites , and hence the total operation of possesses a complexity of .this simple scheme of constructing the hamiltonian operator in matrix - product form thus leads to an increase in operational complexity by a factor of compared to the traditional formulation of the dmrg algorithm for the quantum chemical hamiltonian of eq . .it is possible to optimize the hamiltonian mpo construction and reduce the operational count by a factor of .we elaborate on the ideas presented in refs . and , where identical subsequences among operator terms in the form of eq. are exploited .if one considers tensor products as graphical connections between two sites , we can abstract the term in eq . to a single string running through all sites .because of the nature of matrix - matrix multiplications , each element in column of will be multiplied by each element in row of in the product . in the naive representation in eq . of the hamiltonian in eq ., operators were only placed on the diagonal , such that each matrix entry on site is multiplied by exactly one entry on site . 
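the naive construction just described can be reproduced in a few lines . the toy sketch below ( not the production code ) uses a two-dimensional spinless local space instead of the four-dimensional orbital space and ignores fermionic signs , exactly as in the simplified discussion above ; it encodes a small sum of hopping-like terms with one diagonal slot per term and verifies that multiplying out the operator-valued matrices reproduces the direct sum of tensor products .

```python
import numpy as np
from functools import reduce

L = 4
I2   = np.eye(2)
adag = np.array([[0., 0.], [1., 0.]])     # |1><0| on one site
a    = adag.T                              # |0><1|
# toy "Hamiltonian": each term is a coefficient and one local operator per site
terms = [(0.5, [adag, a, I2, I2]),
         (0.3, [I2, adag, I2, a]),
         (0.2, [adag, I2, I2, a])]
K = len(terms)

# operator-valued matrices: W[0] is 1xK, W[L-1] is Kx1, all others are KxK diagonal
W = []
for site in range(L):
    rows = 1 if site == 0 else K
    cols = 1 if site == L - 1 else K
    Wk = [[np.zeros((2, 2)) for _ in range(cols)] for _ in range(rows)]
    for k, (t, ops) in enumerate(terms):
        r = 0 if site == 0 else k
        c = 0 if site == L - 1 else k
        Wk[r][c] = (t if site == 0 else 1.0) * ops[site]   # coefficient absorbed once
    W.append(Wk)

def contract(W):
    """Multiply the operator-valued matrices, taking tensor products of the entries."""
    acc = W[0]
    for Wk in W[1:]:
        rows, mid, cols = len(acc), len(Wk), len(Wk[0])
        acc = [[sum(np.kron(acc[r][m], Wk[m][c]) for m in range(mid))
                for c in range(cols)] for r in range(rows)]
    return acc[0][0]

H_mpo = contract(W)
H_ref = sum(t * reduce(np.kron, ops) for t, ops in terms)
print("maximum deviation from the direct construction:", np.abs(H_mpo - H_ref).max())
```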
in a connectivity diagram ,the construction discussed in the previous section therefore results in parallel strings , one per term , with no interconnections between them .the naive scheme may be improved in two different ways .first , if two or more strings are identical on sites through , they can be collapsed into one single substring up to site , which is then _ forked _ into new strings on site , splitting the common left part into the unique right remainders .this graphical analogy translates into placing local operators on site on the same row of the mpo matrix . because each operator on the site of the fork will be multiplied by the shared left part upon matrix - matrix multiplication , the same operator terms are obtained .the second option is to _ merge _ strings that match on sites through into a common right substring , which is realized if local operators on site of the strings to merge are placed on the same column in . if the hamiltonian operator in eq .is constructed in this fashion , there will also be strings with identical subsections in the middle of the lattice of sites .if we wanted to collapse , for instance , two of these shared substrings , we would first have to merge strings at the start of the matching subsection and subsequently fork on the right end .in general , however , it is not possible to collapse shared substrings in this manner , because each of the two strings to the left of the merge site will be multiplied by both forked strings on the right .four terms are thus obtained where there were initially only two . as an example , the mpo manipulation techniques discussed in this paragraph are illustrated in fig .[ fig : mpoexample ] .we encoded the two terms and in an mpo according to the naive construction and subsequently applied _ fork _ and _ merge _ optimizations . and are encoded in one mpo , whose matrices are depicted in the figure for the seven sites .the naive construction leads to the matrices shown in panel .optimization was carried out in panel to exploit the common operator on site .the combination of both _fork _ and _ merge _ optimizations yields panel .finally , the attempt to exploit the common identity operator on site in panel by introducing an additional _ fork _ and _ merge _ optimization fails , as unwanted cross terms are generated .we note that is only possible if both terms have the same matrix elements ( in this example ) . ] to compact the matrices , we collapse strings from both sides of the orbital chain lattice simultaneously . starting from the naive construction of the hamiltonian operator ,each string is divided into substrings between the sites , which we call mpo bonds .next , we assign to each mpo bond a symbolic label which will later indicate , where we may perform merges and forks . the decisive information , that such a symbolic label carries , is a list of ( operator , position)-pairs which either have already been applied to the left , implying forking behavior , or which will be applied to the right , leading to merging behavior .more specifically , if we assign an mpo bond between sites and with one of the labels we denote , in the first case that , on the current string , operators and were applied on sites and to the left and .we refer to this type of label as a _fork_-label , because we defined forking as the process of collapsing substrings to the left .for this reason the label must serve as an identifier for the left part of the current string . 
in the second case , operators and will be applied on sites and to the right , , which serves as an identifier for the right substring from site onwards . for both types of labelswe imply the presence of identity operators on sites not mentioned in the label . between each pair of sites , all mpo bonds with identical labelsmay now be collapsed into a single bond to obtain a compact representation of the hamiltonian operator .duplicate common substrings between different terms are avoided in this way , because the row and column labels will be identical within shared sections . in a final stage , at each mpo bond , we enumerate the symbolic labels , in arbitrary order , which yields the and matrix indices of . , , , with combined common subsequences .open squares correspond to operators , filled squares to operators multiplied by the matrix element . ]this leads to the question of when to apply which type of label .the general idea is to start with fork labels from the left and with merge labels from the right .considering , for instance , a term , , the four non - trivial operators divide the string running over all sites into the five substrings 1 , , , and with different symbolic labels . since we start with fork from the left and merge from the right , we need to concern ourselves with the three substrings in the center .regarding the connectivity types of the labels for these substrings , the combinations _ fork - fork - merge _ or _ fork - merge - merge _ lead to particularly compact mpo representations .the former combination is shown in fig .[ fig:4term ] : all terms in the series , , share the operators at sites and , such that the matrix element has to be multiplied by the operator at site , coinciding with the position at which the label type changes from _ fork _ to _ merge_. we now focus on one site only and describe the shape of for an mpo containing the terms this simplification will retain the dominant structural features of the mpo for the hamiltonian in eq . , because there are still terms in this set . assigning all terms the described labels and subsequent numbering on each mpo bond yields .[ fig : mposcale ] shows the result with labels sorted such that distinct subblocks can be recognized . from their labels we can infer the following properties : + [ cols= " < , < " , ] both rows and columns contain labels with at most two positions , implying that the matrix size scales as .the cost of contracting the operator with the mps on one site is proportional to the number of non - zero elements in the mpo matrix .as block has rows and columns , the algorithm will scale as , because it performs iterations with a complexity of in one sweep .note that the first and last operator ( i.e. and ) of each term is missing in fig .[ fig : mposcale ] ; their inclusion only adds a constant number of rows and columns to ( one for each of , , , ) .terms omitted from the previous discussion may easily be accomodated in fig .[ fig : mposcale ] .the middle operator of two - electron terms with one matching pair of indices , for instance , will enter the square above block .on some site . , , and are diagonal blocks which extend incoming operator terms to the next site , while is dense and contains creation or annihilation operators multiplied by .the first and last operator of each term , corresponding to and , were omitted . ]in the operator construction scheme that we described in the previous section we omitted the description of fermionic anticommutation . 
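before turning to the fermionic signs announced in the previous sentence , a toy count may help to illustrate the collapse of identical bond labels . the sketch below ( an illustration , not the actual construction ) restricts itself to one-electron terms with i < j and to fork labels only ; already in this restricted setting the number of distinct labels per bond grows only linearly with the chain length , whereas the naive construction keeps one row per term . with merge labels added and two-electron terms included , the same collapsing leads to the bond dimensions discussed above .

```python
from itertools import combinations

L = 12
terms = [((i, "a+"), (j, "a")) for i, j in combinations(range(L), 2)]
naive_rows = len(terms)                    # naive MPO: one diagonal slot per term

for bond in range(L - 1):                  # bond between sites `bond` and `bond + 1`
    labels = set()
    for (i, op_i), (j, op_j) in terms:
        if j <= bond:
            labels.add("complete")         # both operators already applied on the left
        elif i > bond:
            labels.add("not started")      # identity string so far
        else:
            labels.add((op_i, i))          # fork label: operator applied to the left
    print(f"bond {bond:2d}:  naive rows = {naive_rows:3d},  distinct labels = {len(labels):2d}")
```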
we can can include it by performing a jordan - wigner transformation on the elementary creation and annihilation operators . introducing the auxiliary operator we transform each elementary creation or annihilation operator ( ) as such that eqs .are fulfilled by the transformed operators . since and = 0 ] under the assumption that and were formed from left and right normalized mps tensors respectively , the derivative of with respect to is just and we arrive at the standard eigenvalue problem the computationally relevant operation is the matrix - vector multiplication on the left hand side of the equation above as it is the key operation in sparse matrix eigensolvers . looking at this operation in detail , we find that it is very similar to the boundary iteration in eq . : steps 13 from fig .[ fig : mpoexpect ] are identical , but instead of multiplying the temporaries by the mps conjugate tensor , a scalar product with is formed . with the implementation of eqs . , , and we have thus all operations at hand to compute ground - states of hamiltonian mpos . moreover , in case of the quantum chemical hamiltonian of eq ., the compression technique described in this work ensures the optimal execution time scaling of .in addition to the eigenvalue problem in eq . for a single site , the procedure described in ref . improves convergence by introducing a tiny ad - hoc noise term which helps to avoid local minima by re - shuffling the renormalized states .in the previous section , we described the optimization of a single site , which in practice has a high probability to become trapped in a local energy minimum .although the convergence behaviour may be improved by introducing a tiny ad - hoc noise term , for chemical applications , the so - called two - site dmrg algorithm , in which two sites are optimized at the same time , achieves faster convergence and is much less likely to get stuck in local minima . in the mps - mpo formalism, we can optimize two sites at once by introducing the two - site mps tensor and the two - site mpo tensor the latter case corresponds to a matrix - matrix multiplication of the operator valued - matrices , whose entries are multiplied by forming the tensor product of the local site operators . if is treated as a single 16-dimensional local space , we can extend eq .from the previous section to two - site dmrg by exchanging with and with and obtain the different steps in evaluating the above expression are essentially the same as for the single - site case , such that the same program routines can be used for both optimization schemes after the generation of the two - site tensors . the mps - based state specific algorithm to economically calculate excited states repeatedly orthogonalizes the excited state against a supplied list of orthogonal states at every micro - iteration during the jacobi - davidson diagonalization step .let denote the -th supplied orthogonal state and the target wave function .we can then define the partial overlaps and where and are defined according to eqs . and , yielding for the overlap .at every site , the orthogonal vectors , which the jacobi - davidson eigensolver takes as input parameters , can now be calculated as we note , that has the property .after diagonalization , and are updated with the optimized tensor during a left and right sweep respectively . finally , the converged will be orthogonal to all .-orbitals , respectively . 
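the state-specific orthogonalization just described amounts , at every micro-iteration , to projecting the local trial vector onto the complement of the previously converged states . a schematic sketch ( assuming the lower-lying states have already been mapped into the current local basis as described above ; this is not the qcmaquis routine itself ) :

```python
import numpy as np

def project_out(v, lower_states, tol=1e-14):
    """Remove from the trial vector v its components along the lower-lying states."""
    for o in lower_states:
        n2 = np.vdot(o, o).real
        if n2 > tol:
            v = v - o * (np.vdot(o, v) / n2)
    return v
```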
the efficiency of an mpo dmrg implementation critically depends on the representation of the mpo . as we have shown , both a compact construction and the exploitation of sparsity are necessary to achieve optimal scaling . to demonstrate its capabilities , we calculated ground- and excited-state energies of the graphene fragments g2 and g3 shown in fig . [ fig : graphene ] . the conjugation in both molecules provides for long-range correlation in two dimensions , and they are therefore difficult to treat with dmrg , which performs best for one-dimensional and quasi one-dimensional systems . in other systems with less delocalization and mixed active spaces , containing both σ- and π-type molecular orbitals and possibly exhibiting spatial point group symmetry , often a significant fraction of the two-electron integrals have negligible magnitude , leading to smaller mpos such that even larger active spaces become affordable . we chose the π-orbitals for the active space , represented by a minimal sto-3g basis set . while this choice of basis precludes obtaining realistic energies , it does not simplify the correlation problem within the active space , which we are interested in solving with dmrg . a set of different condensed polyaromatic systems has recently been calculated by olivares-amaya et al . in ref . , employing the same basis set . in accordance with their work , we found that performing pipek-mezey localization first within the active occupied orbitals , followed by a second pipek-mezey localization of the active virtual orbitals , provided the molecular orbital basis with the best convergence . the efficiency gain compared to hartree-fock orbitals more than compensated the loss of spatial point group symmetry .
[ caption of fig . [ fig : extrapolation ] : first panel : ground-state of g3 with m = 2000 , 1500 and 1000 . second panel : lowest singlet excited-state of g2 with m = 1000 , 950 , 900 , 850 . third panel : lowest triplet excited-state of g2 with m = 1000 , 950 , 900 , 850 . fourth panel : ground-state of g2 with m = 350 , 300 , 250 , 200 and a reference calculation with m = 3000 . ]
in order to obtain accurate energies , we exploit non-abelian spin symmetry , whose implementation with the quantum chemical dmrg hamiltonian will be described in a later work . furthermore , the type of orbitals and the orbital ordering have to be carefully chosen to improve performance . by computing the fiedler vector based on the mutual information , we determined an efficient orbital ordering . in fig . [ fig : mutplot ] , it is shown for the g2 fragment together with the mutual information drawn as lines with varying thickness according to entanglement strength . we observe that , based on this entanglement measure , the fiedler ordering reliably grouped strongly entangled bonding and anti-bonding orbital pairs next to each other and arranged orbitals according to their spatial position , such that the hexagonal fragment is traversed through the longest possible pathway across two adjacent corners . the application of the fiedler ordering to g3 led to a similar type of orbital arrangement . with this configuration , we performed ground- and excited-state calculations with the implementation reported in this paper . the obtained energies per carbon atom are shown in fig . [ fig : extrapolation ] .
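the fiedler ordering used for these calculations can be generated with a few lines of code . a minimal sketch ( not the authors' implementation ) treats the mutual information between orbitals as edge weights of a graph , builds the graph laplacian , and sorts the orbitals by the entries of the eigenvector belonging to the second-smallest eigenvalue ; the random matrix in the usage example merely stands in for the actual mutual information .

```python
import numpy as np

def fiedler_ordering(mutual_information):
    W = np.array(mutual_information, dtype=float)
    W = 0.5 * (W + W.T)                  # symmetrize, just in case
    np.fill_diagonal(W, 0.0)
    Lap = np.diag(W.sum(axis=1)) - W     # graph Laplacian
    eigval, eigvec = np.linalg.eigh(Lap)
    fiedler = eigvec[:, 1]               # eigenvector of the second-smallest eigenvalue
    return np.argsort(fiedler)

# usage with a random symmetric matrix standing in for the real mutual information
rng = np.random.default_rng(2)
I_pq = rng.random((8, 8))
print("orbital ordering:", fiedler_ordering(I_pq))
```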
for the g2 ground - state calculation , chosen such that we obtained truncation errors in the same range as the g3 ground - state calculation .this choice affords wavefunctions of similar quality in both cases and accordingly similar distances in the extrapolated energy , with the advantage , that for the smaller g2 fragment we can verify the extrapolation error with a precise reference calculation with . from the extrapolation , we estimate an error per carbon atom of for g2 and for g3 , respectively .in this work , we provided a prescription for the efficient construction of the quantum chemical hamiltonian in eq .as an mpo .our construction scheme ensures the same computational scaling as traditional dmrg , compared to which a similar performance can be achieved .the advantages of a full matrix product formulation of all wave functions and operators involved in the dmrg algorithm are the efficient calculation of low - lying excited states , straightforward implementation of new observables , and additional flexibility that allows us to accommodate different models and symmetries in a single source code .we investigated our mpo approach at the example of two graphene fragments of different sizes for which we calculated ground and excited state energies .this work was supported by eth research grant eth-34 12 - 2 .
|
we describe how to efficiently construct the quantum chemical hamiltonian operator in matrix product form . we present its implementation as a density matrix renormalization group ( dmrg ) algorithm for quantum chemical applications in a purely matrix product based framework . existing implementations of dmrg for quantum chemistry are based on the traditional formulation of the method , which was developed from a viewpoint of hilbert space decimation and attained higher performance than straightforward implementations of matrix product based dmrg . the latter variationally optimizes a class of ansatz states known as matrix product states ( mps ) , where operators are correspondingly represented as matrix product operators ( mpo ) . the mpo construction scheme presented here eliminates the previous performance disadvantages while retaining the additional flexibility provided by a matrix product approach ; for example , the specification of expectation values becomes an input parameter . in this way , mpos for different symmetries ( abelian and non-abelian ) and for different relativistic and non-relativistic models may be solved by an otherwise unmodified program .
|
babar is a major particle physics experiment running at the stanford linear accelerator center ( slac ) , which has already produced many billions of events , both real and simulated , in thousands of files .these events are processed , re - processed and selected by standard production jobs .in addition the collaboration numbers 500 + physicists , many of whom are active in running individual analysis jobs . while certainly smaller than the impending experiments at the lhc , babar s requirements approach them in several respects , and meeting the challenge of babar is an invaluable rehearsal for the challenge of the lhc .babar data processing creates a very high demand for computing resources , both in storage space and cpu time .largely as a consequence of this , the collaboration has moved from a system which was basically central , with all resources being provided at the slac site , to a distributed model with large and medium sites ( ` tier a ' and ` tier c ' ) spread across the usa and europe .grid technology provides the obvious means to make these resources available .however , unlike the lhc experiments , the babar system evolved in the pre - grid era , and tools have to be provided which can be used with the current system , rather than building in grid concepts from the start .there is also a difference in timing : we require solutions that can be used today rather than in 2007 .we therefore set out as an exercise to see what could be done with existing grid technology , such as globus and afs , rather than the more complete but longer term solutions offered by the edg and similar projects .we encountered the problems familiar to other developers - authentication , authorisation , data location , the input sandbox and data retrieval - and were able to solve them , in some cases in more than one way .the work went through two phases .the first was a project to provide a grid based babar system that would act as proof of principle - the ` demonstrator ' .the user runs a web browser through their desktop or laptop , and for portability no extra software is required on this platform .the user selects the data to be analysed according to pre - existing babar criteria , and they are processed by the standard analysis job ( ` the workbook example') .the http server is used for file transport .standard ntuples are written , and these are retrieved to the user platform where they can be analysed using root .the demonstrator was successfully used at escience events in the summer of 2002 .the second phase is a command line system - gsub , which gives easy flexibility for the typical physics analyst .the user is presumed to have the standard babar environment ( which enables compilation , data location , etc ) running on their platform .afs is used for the input and output sandboxes , and the tokens are maintained using the gsiklog client and gsiklogd server , so that gsiklogd must be running as a server on the user home system .this places demands on the user system but not on the remote sites - a reasonable scenario given that the user is presumed to want to make use of resources at the remote sites , whereas the remote sites have no great incentive to provide special facilities for the user .we describe the two systems , their good features and also those features which we hope to improve by incorporation of the more sophisticated middleware currently being written .developments involved a number of clusters of pcs running linux .several of these were at uk institutes ( manchester , 
birmingham , rutherford , imperial , bristol , liverpool , qmul and rhul ) and comprised identical 40-node pc farms . facilities at in2p3 and dresden were also included at some stage . there was good communication and co-operation between the sites and their system managers . standard grid certificates were used for authentication . sites involved had a mutual policy of recognising those authorities accepted by the edg , and this covered the uk and french institutes . the doe and german authorities came into being during this period ; before they were available , in2p3 offered the facility for babar users to acquire a certificate . we resisted early calls for babar itself to become a ca . by maintaining a clear distinction between authentication and authorisation we realised that the proper role of the experimental organisation was with the latter and not the former . for an authorisation system we set up a vo ( virtual organisation ) for babar . this is maintained at manchester and published using the ldap system in the usual way . the babargrid sites are available to all members of babar . in some cases ( rutherford and in2p3 , the tier a sites ) this means equal access for all members . the tier c university sites require a system whereby in principle ( or even in practice ) priority can be given to local members : institutes are responsible for the cost and maintenance of their sites , and while they are generally happy for them to be put to use by outside members , they do not want their local machines taken over by heavy outside use to an extent that impinges on the productivity of locals . a ` member of babar ' has a precise definition . all members have an account at slac , and this has specific ` babar ' authorisation on the afs acl list . obtaining this requires user signatures , consent by the slac computer center and authorisation by the group leader , and the list is kept up to date to weed out members who have moved on elsewhere . it makes sense to use this existing well-audited system rather than involving users in further paperwork . any babargrid user has also , by definition , got a grid certificate . we therefore set up a system whereby all a user has to do to be entered in the babar vo is to copy the dn ( distinguished name ) from their grid certificate into a specifically named file in their home area at slac . a cron job scans for such files , checks the ( slac ) userid against the afs acl list , and forwards the dn to the vo server at manchester . a second cron job runs at each participating site and picks up the list of authorised users from the vo . the local system manager retains control over this job and can readily modify it , if necessary , to remove any known rogues . this has not been necessary and we do not expect it to be , but it is a useful factor in making the system acceptable to local managements . this list is appended to the site's gridmap file with the generic userid prefix babar . appending the list has the desirable effect that if a user has a specific account at the site , and this is put into the gridmap file by other means , then a grid request for their dn will take the specific account rather than the generic one . the generic account system provides a pool of accounts ( babar01 , babar02 ...
babar99 or whatever ) and the user is mapped onto the first free pool account .this gives outsiders the facility to use the site , while providing accountability : if a particular account behaves ( inadvertantly or deliberately ) in an anti - social way , the dn that was given this account is known .furthermore these accounts can , if desired , be given lower priorities and privileges than local ones . if a dn has finished with the account and then submits a subsequent request , it will be given the same account as last time .this is useful for retrieving output from jobs .we thus have a system of authorisation and accounting which scales , i.e. for users at sites it requires transactions rather than , and yet which enables local users to have priority .the system has proved easy to operate and is reliable .data in babar is organised into runs , sets of approximately 600,000 events recorded by the experiment . when a run is finishedthe run number is incremented and the next run started .the mapping of datafiles to runs is many - to - one : each file contains the events from a particular run , there are many such files containing different processing and selection stages but for each ( complete ) processing specification there is a unique file for each run .all the necessary metadata information can thus be contained in the dataset name , which includes the run number , processing version , selection program version , selection type and other information .the data catalog is maintained centrally at slac , and is thus also the metadata catalog : each filename contains a full and systematic description of what it contains .each site contains a copy of the main metadata catalog , updated nightly .it also contains an extra flag for each file which is set if a local copy exists .this is maintained by the skimtools facilities which are used , for example , to request datasets fulfilling certain criteria to be copied from slac to the local site if they do not exist .the name of each file is the same in all sites . to accommodate the fact that different sites have different disk organisations , each file name starts with the unix symbolic variable bfroot .as the demonstrator used a unique binary , this was compiled ( at slac ) copied to a location ( at manchester ) where it was accessible via http .this was then copied ( sucked ) to the remote site at the start of each job .the skimdata.tcl file for the job was sent in the input stream as part of the input generated by the perl / cgi script .the myanalysis.tcl file was then expanded , i.e. each sourced .tcl file was replaced by the actual contents , iteratively until the file was self - complete .this was a major task to do by hand , though a facility has now been provided to do this automatically , if necessary .this large .tcl file was then shipped to the remote sites in a similar manner to the binary , as were the few but large non - tcl files the standard job requires . in the second version we use afs to circumvent the input sandbox problem .the user s working directory must be within an afs filesystem at a site running a gsiklogd server .an afs filesystem is not a problem as many babar working environments use , or have the option of using , afs based user directories ( e.g. slac and ral ) .the second is a requirement which does require the co - operation of the system manager , but there are no major problems associated with it . 
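returning to the authorisation step described above , the site-side cron job essentially merges the vo list into the grid-mapfile while leaving specific local mappings untouched , so that a specific account takes precedence over the generic one . the sketch below is illustrative only : the file paths and the plain-text dn list are assumptions , the real job obtains the list from the vo's ldap server , and sites may use a pool-account syntax of their own .

```python
GRIDMAP = "/etc/grid-security/grid-mapfile"   # illustrative path
VO_LIST = "/var/tmp/babar_vo_dns.txt"         # assumed: one distinguished name per line
GENERIC = "babar"                             # generic pool-account prefix

def merge_vo_into_gridmap(gridmap_path=GRIDMAP, vo_list_path=VO_LIST, account=GENERIC):
    with open(gridmap_path) as f:
        existing = f.read().splitlines()
    # DNs that already have any mapping (specific or generic) are left alone
    known_dns = {line.rsplit(None, 1)[0].strip('"') for line in existing if line.strip()}
    with open(vo_list_path) as f:
        vo_dns = [line.strip() for line in f if line.strip()]
    appended = [f'"{dn}" {account}' for dn in vo_dns if dn not in known_dns]
    with open(gridmap_path, "a") as f:
        for entry in appended:
            f.write(entry + "\n")
    return appended
```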
at the start of each job gsiklogis run ( as a client ) at the remote site .the binary is copied if it is not available on the system , so this does not restrict the choice of target sites .this essentially provides an afs token on the authority of the grid proxy , which is used to run the job .the job can then change directory to the user s working directory , and all the links are available .all the .tcl files , and the non - tcl files , and the binary are then available .some initialisation of environment variables is required but can readily be done through standard login scripts - the only point where care is needed is to avoid confusion between the bfroot for the remote site and that for the user s home site .individual jobs run at the remote sites , usually through submission to pbs .their progress can be monitored and their log files retrieved , though this is rather slow and incovenient other than for test purposes .the basic betaapp myanalysis.tcl commands are sent by globus - job - submit to the appropriate remote sites ( again , usually to the pbs batch system ) in a simple wrapper that does the gsiklog and cd to the working directory .each is preceded by a globus - job - run -stage to provide gsiklog if necessary , and other simple tasks . for each .tcl file the user ( or a simple script ) gives the command gsub < site > betaapp myanalysis.tcl .there is no need to categorise them into hyper and superjob collections .the log files are not of much interest .each analysis job produces as useful output a set of ntuple / histogram files in hbook ( or root ) format .the job jobnn typically processes a .tclfile data-nn.tcl and produces a file output-nn.root .as each job finishes it moves its output file ( stored on the farm node ) to a directory on the gatekeeper for that site , and tars together all the jobs there to form a single file .thus as jobs start to finish , there is always one file on the site that contains all the presently available output data . the user - again from a web browser running perl / cgi - can then invoke a job which runs on the server and copies all the tarred superjob files back to a directory using grid - ftp .they are untarred and tarred into a single directory , and renamed as necessary so that missing / unfinished / failed jobs do not give holes in the sequence .a link is provided to the total tarred file , and a specific mime type assigned .the browser has this mime type specified such that when the link is clicked on , the file is downloaded to the browser and untarred , and a small root program run to produce a set of histograms from the total results . by contrast , this is absolutely trivial .afs also provides the output sandbox .the ntuple files are written directly back to the users work directory and can be analysed there .we have found what appears to be a simple and stable solution to the three a s problem .the demonstrator has shown that simple grid tools can be used to locate data across remote sites , run appropriate analysis jobs , retrieve and combine the outputs .the gsub command will be made available within babar . whether it is picked up and exploited by userswill be an interesting test .the globus - job - submit commands will be replaced by edg - job - submit as soon as the system becomes stable and supported .99 the babar collaboration , ` the babar detector ' nucl .instr & meth .p1 2002 http://eu-datagrid.web.cern.ch/eu-datagrid http://www.slac.stanford.edu/bfroot/www/doc/ workbook / workbook.html t. adye , r. barlow , a. 
forti , a. mcnab d. smith , building the grid for babar , all hands 2 - 4 september 2002 http://marianne.in2p3.fr/datagrid/documentation/ edg - installation - guide / node9_ct.html i. foster , c. kesselman , steve tuecke , the anatomy of the grid : enabling scalable virtual organizations , international j. supercomputer applications , 15(3 ) , 2001 ./ babargrid / registration.html http://www.gridpp.ac.uk/authz/gridmapdir http://www.slac.stanford.edu/bfroot/www/physics/ skims / skimdata.html t. adye , a. dorigo , a. forti , e. leonardi , distributing file - based data to remote sites within babar collaboration , in proceedings of computing in high energy and nuclear physics ( chep 2001 ) 9/3/2001 - 9/7/2001 , beijing , china http://www.globus.org/datagrid/ replica-catalog.html http://www.slac.stanford.edu/bfroot/www/doc/ workbook / workdir / workdir.html
|
we present two versions of a grid job submission system produced for the babar experiment . both use globus job submission to process data spread across various sites , producing output which can be combined for analysis . the problems encountered with authorisation and authentication , data location , job submission , and the input and output sandboxes are described , as are the solutions . the total system is still some way short of the aims of enterprises such as the edg , but represent a significant step along the way .
|
in this section , we provide details how the langevin equations , can be transformed into the one - dimensional fpe presented in the main text . from general results on stochastic processes ( see ) , it follows that the previous langevin equation is associated to the following two - dimensional fpe : \nonumber \\ & + \frac{1}{2}\sum_i \partial^2_{i}\left\ { \left[(n_i\sigma_i)^2 + n_i\left(\nu_i+\gamma\frac{n}{k}\right)\right]p\right\}\ , , \label{fp2d } \end{aligned}\ ] ] where .the drift part is directly stemming from the non - fluctuating parts of the langevin equations .diffusion depends on the correlation level of the noises experienced by the two species .in particular , we have introduced the correlation coefficient .the case when the two noises are the same is given by , when they are independent is and when they are anti - correlated is . to study the evolutionary dynamics associated to eq ., the relative abundances are the natural choice of variables .therefore , we transform the absolute abundances and to and . to perform the change of variables ,not only and have to be replaced , also the differential operators and the probability distribution have to be transformed . ensuring that the latter is still normalized after change of variables , the jacobian has to be introduced , .the derivatives are given by , and .after the change of variables , the fpe for and can now be further simplified exploiting the fact that the time scale of selection , , is much slower than the one of the population growth .therefore , we marginalize the fpe with respect to the total population size . thereby ,the integrals of -derivative terms such as or vanish and the fpe simplifies to \right\ } \nonumber \\ & + \partial_x\left(\frac{s}{n } q\right)\nonumber\\ & + \!\partial^2_x\!\left\{\!\left[\frac{\sigma_1 ^ 2\!-\!2\epsilon\sigma_1\sigma_2\!+\!\sigma_2 ^ 2}{2}x(1\!-\!x)\!+\!\!\frac{\gamma}{2k}+\frac{\nu_1-sx}{2n}\right]\ !q\right\ } , \label{eq : fp1db}\end{aligned}\ ] ] where . the drift term in the second line stemming from demographic fluctuations can be neglected as holds .to finally arrive at the one - dimensional fpe employed in the main text , we compute the steady state population size . as the deterministic differential equation for given by ,\nonumber\end{aligned}\ ] ] the fixed point for the populations size is ] .it is verified that in the limit , one recovers the expression given in the main text .all in all , the behavior we discussed in the main text is validated by analyzing the fixation probability : both a higher growth rate and a smaller variability are beneficial for an individual .the expression for the time of fixation in the neutral case that we presented in the body of the paper is derived as follows .the average time for fixation obeys the following backward equation , \partial_x\right\ } \partial_x t(x)= \nonumber \\ & -\left(\tilde\sigma^2x\left(1-x\right)\right)^{-1 } , \label{eq : fp_time}\end{aligned}\ ] ] with the boundary conditions . integrating eq . andby variation of constants , we obtain : \ , , \label{t_der}\end{aligned}\ ] ] where is a constant to be fixed by the boundary conditions .the integrals of eq . 
needed for are performed by decomposing the rational function at the prefactor and using the formula : that follows from the very definition of the dilogarithm ( see ) .the formula is used four times either directly ( with a simple change of variables ) or first integrating by parts to satisfy the condition in .the resulting expression is then transformed to the form given in the main text ( which is the one given by mathematica ) by using the reflection property , , see . in the general case when selection is present ,the expression for the average fixation time can not be found explicitly but is reducible to quadratures as follows .the fixation time obeys the backward equation with the left - hand side replaced by . using the definitions , we obtain \partial_x\right\}\times\nonumber \\ & & \partial_xt(x)=-\frac{2k}{\gamma x(1-x)}. \label{eq : fpt}\end{aligned}\ ] ] boundary conditions are .the homogeneous solution was already found following and reads where and are constants and we defined to simplify notation .the non - homogeneous solution for the gradient of is obtained by varying the constant in , remarking that and integrating the resulting first - order differential equation to obtain \nonumber\end{aligned}\ ] ] where is the hypergeometric function of two variables .the solution for involves the integral of the expression above ( for which a closed form does not seem to be available ) , and the two constants in are fixed by it is verified from the expression above or directly from the original equation that in the two limits and the solution behaves like in the neutral case , _i.e. _ and . selection and the rest of the parameters affect of course the solution in the rest of the interval of definition ] .the dependency of the instantaneous reproduction rate on is given by the sigmoidal function : which reduces to in the limit of large variances } ] and .[ fig : compare ] ] we now turn to the scenario where memory extends over several environmental conditions that an individual previously experienced .whilst for an exact analytic mapping can be found in the limit of small , for finite memories the situation is more intricate .however , for the special case the variability in the growth rate can be well approximated by the following argument .the reproduction rate of an individual , , of type at time depends on the average of all instantaneous reproduction rates experienced by the individual : at the time of reproduction , we assume for simplicity that offspring looses its memory of past environments experienced by the progenitor .we consider again a time interval of length as in .fluctuations in the rates decorrelate on timescales of the order of the lifetimes of individuals , , which are much longer than and . 
therefore on the scale , noise is smooth , contrary to .conversely , timescales of several lifetimes are much smaller than those on which selection acts and much longer than the characteristic time of the noise .therefore , to describe the dynamics of the fractions , the environmental noises are well approximated by a shortly correlated noise .an estimation of the amplitude of the noise is obtained by calculating the sum the durations of the environmental intervals are independent for different and ( and the contribution is negligible with respect to the rest of the sum ) so that one can replace them by .in addition , the symmetry in the indices of the intervals allows us to further simplify the expression to compute the average in eq .three different orders of events have to be distinguished : a ) if the birth of the -th individual was prior to the one of the -th individual , then it follows from eq that the quantity to be averaged is , where is the number of environmental switches since the birth of the -th individual up to time . to derive eqwe have used that the terms in the sum take independent values with equal probability .conversely , case ( b ) is when the birth of the individual is posterior to the one of the -th individual .then the quantity to be averaged is , where is the time between the birth of the -th and the -th individuals . finally , in case ( c ) when , the correlation is zero as there is no overlap between the environmental fluctuations of the two individuals .due to the poissonian nature of the events , the number of switches since birth ( back in the past ) or before reproduction ( forward in the future ) have the same distribution where ( its exact value does not affect the sequel ) .it follows that \,.\nonumber\\ \label{eq : mess } \end{aligned}\ ] ] the integral over is the continuous approximation of the sum over appearing in eq . while the variables and refer to the variables and first and second term in the square parentheses of correspond to cases ( a ) and ( b ) , respectively . by a series of change of variables and integrations by parts, it is shown that reduces to the validity of this approximation is confirmed for the neutral case in fig . 2 in the main body of the paper where the fixation probability and time are compared to the analytic calculations employing eq . .additional data for non - neutral evolution is presented in fig .[ fig : fit ] where two species ( one with finite variability , , one with vanishing variability ) are analyzed .analytic solutions for the fixation time and probability are fitted to simulation data .the best fit deviates less than from eq . . .black dots correspond to simulation results of the ibm and red lines are analytic solutions . to obtain the latter we fitted eq . and the solution of eq . to the ibm .we used and as fitting parameters and obtained and .other parameters are , , , , , , and ] .[ fig : sigma ] ] let us now analyze the mapping of the average reproduction rate .importantly , this mapping is very sensitive to model details which we will exemplify in the following . as results for neutral species do not dependent on the average reproduction rate, we have to turn to the evolution of non - equal individuals to understand the mapping of the growth rates . in fig .[ fig : fit ] , we show the fixation probability for two species with the same but only the first species has a variable reproduction rate for .red lines correspond to a fit with and agree perfectly with simulation results . 
in other words , the first species does not only have a disadvantage due its sensitivity on environmental changes , , but also has a smaller average growth rate . to study this effect in more detail ,let us now analyze the fixation probability dependence on the memory parameter , see fig .[ fig : kernel_sel ] panel ( a ) .black dots correspond to the standard ibm ( if not mentioned otherwise our discussion applies to this data ) , red dots to a slightly changed model which is going to be introduced in the following . for species are equally likely to fixate as the growth advantage of the more variable species exactly compensates for its disadvantage due to the std of the noise . .the first species growth rate depends on the environment while the second one s is constant .black dots correspond to the ibm introduced in the main body of the paper , while red dots represent a model modification not memorizing the first environment ( the one in which an individual is born ; for details see text ) .while the fixation probability is a direct simulation result , the selection coefficient is inferred from it using the additional data presented in fig .[ fig : sigma ] . parameters are , , , , , , and ] . ] in this section , we present additional data for the non - neutral case . fig .[ fig : compare_models ] shows the fixation probability depending on the environmental switching rate .in particular , we investigate extinction for a species which is not sensitive to its environment ( , ) competing with a sensitive species ( , ) for different values of . in the case of no memory both species are equally likely to fixate as the advantage in the average reproduction rate exactly compensates for the disadvantage due to the std of the noise in the growth rate [ see eq . ] . for larger values of the memory parameter ,a bias favoring the species with is present ( the exact value of the fixation probability depends on mapping details as discussed above ) .importantly , the bias is not only present for very quickly fluctuating environments , but already emerges if reproduction events happen on a time scale comparable to .this supports the conclusion , that we were already drawing in the body of this paper when discussing fig . 4 : the white noise approximation is an adequate description for such an evolutionary process in that parameter regime .
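a minimal gillespie - type sketch of the individual - based dynamics discussed in this section is given below : two types compete under a randomly switching environment , one with an environment - dependent birth rate and one insensitive . memory of past environments is omitted ( the no - memory limit ) , the sigmoid is replaced by a linear sensitivity , and all parameter values are illustrative assumptions , so the sketch shows the structure of the simulation rather than the exact model of the paper .

```python
import numpy as np

rng = np.random.default_rng(0)

def type1_fixates(nu=1.0, s=0.3, gamma=1.0, K=100, lambda_e=20.0):
    """one realization: returns True if the environment-sensitive type fixates.
    all parameters are illustrative placeholders."""
    n1 = n2 = K // 2
    E = 1.0 if rng.random() < 0.5 else -1.0
    while n1 > 0 and n2 > 0:
        N = n1 + n2
        b1 = n1 * max(nu + s * E, 0.0)   # sensitive type, rate depends on E
        b2 = n2 * nu                     # insensitive type
        d1 = n1 * gamma * N / K          # density-dependent deaths
        d2 = n2 * gamma * N / K
        total = b1 + b2 + d1 + d2 + lambda_e
        r = rng.random() * total
        if r < b1:
            n1 += 1
        elif r < b1 + b2:
            n2 += 1
        elif r < b1 + b2 + d1:
            n1 -= 1
        elif r < b1 + b2 + d1 + d2:
            n2 -= 1
        else:
            E = -E                       # environmental switch
    return n1 > 0

runs = 200
p_fix = sum(type1_fixates() for _ in range(runs)) / runs
print("estimated fixation probability of the sensitive type:", p_fix)
```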
|
one essential ingredient of evolutionary theory is the concept of fitness as a measure for a species success in its living conditions . here , we quantify the effect of environmental fluctuations onto fitness by analytical calculations on a general evolutionary model and by studying corresponding individual - based microscopic models . we demonstrate that not only larger growth rates and viabilities , but also reduced sensitivity to environmental variability substantially increases the fitness . even for neutral evolution , variability in the growth rates plays the crucial role of strongly reducing the expected fixation times . thereby , environmental fluctuations constitute a mechanism to account for the effective population sizes inferred from genetic data that often are much smaller than the census population size . spencer s famous expression `` survival of the fittest '' provides an appealing short summary of darwin s concept of evolution . however , it leaves aside a very difficult yet important aspect namely identifying the factors determining the fitness of a species : fittest individuals are by definition prevailing but the reasons facilitating their survival are not obvious . besides the difficulties arising due to the genotype phenotype mapping causing complex fitness landscapes , also ecological factors like population structure and composition additionally complicate the issue . therefore traditional fitness concepts solely based on growth rates and viability were extended by frequency - dependent or inclusive fitness approaches . another important factor for the success of a certain trait , is a non - constant environment influencing birth / death rates . how variable environmental conditions affect evolutionary strategies like phenotypic heterogeneity or bet - hedging has been extensively studied , see _ e.g. _ , yet the consequences of fluctuating reproduction rates and their interplay with demographic fluctuations were not fully elucidated . here , we quantitatively investigate the impact of variable environments on the fitness . in contrast to other models dealing with variable environmental conditions , we do not study which strategy is optimally suited to cope with such changing environments but focus on the consequences of fluctuating reproduction rates . in particular , we show that an individual s sensitivity to environmental changes contributes substantially to its fitness : a reduced sensitivity increases the fitness and may compensate for large disadvantages in the average reproduction rate . we also find that fluctuating environments influence neutral evolution where they can cause much quicker fixation times than expected . these effects are relevant as constant environmental conditions are the exception rather than the norm ; for instance , the availability of different nutrients , the presence of detrimental substances and other external factors like temperature , all strongly influence reproduction / survival and occur on a broad range of time scales . to understand the impact of fluctuating environmental conditions , we first consider rapidly changing environments in an evolutionary process based on birth and death events similar to . the dynamics is described by the following stochastic differential equations : the influence of environmental variability is modeled as white noise acting on the growth rate , , of a trait of type : , where and is the standard deviation ( std ) of the noise . death rates are assumed to be constant and identical for all traits . 
population growth is bounded and therefore death rates increase with the total population size , where is the number of -type individuals . this may account for density - dependent ecological factors such as limited resources or metabolic waste products accumulating at high population sizes . for specificity , we choose as functional form where is the carrying capacity scaling the maximal number of individuals and sets the rate of death events . beside environmental noise , demographic fluctuations arising from the stochastic nature of the birth - death dynamics yield the term , where is -correlated noise , , with a variance given by the sum of reaction rates . both multiplicative noise terms in eq . are interpreted in the ito sense . note that environmental noise is linearly multiplicative in , which is crucial for our results . let us now consider the fokker - planck equation ( fpe ) associated to eq . , which we will use to derive fixation probabilities and times . we shall carry out further analysis for two different traits ; generalizations are straightforward . the transformation of eq . to a fpe , depends on the correlation level of the environmental noise acting on distinct traits . while demographic noise for different traits is always uncorrelated , the same environmental noise can affect multiple traits , e.g. if both traits feed from the same nutrients whose abundance fluctuates . we keep the following analysis quite general by introducing the correlation coefficient : for environmental noise is uncorrelated , , while for . the resulting fpe is \nonumber \\ & + \frac{1}{2}\sum_i \partial^2_{i}\left\ { \left[(n_i\sigma_i)^2 + n_i\left(\nu_i+\gamma\frac{n}{k}\right)\right]p\right\}\ , , \end{aligned}\ ] ] where . to uncover the influence of environmental noise on the evolutionary dynamics , the relative abundances seem the natural observables . therefore , we change variables to the fraction and the total number of individuals . the fpe for and can be simplified exploiting the fact that selection , , is much slower than population growth . therefore , we integrate over the total population size , considering the fpe for the marginal distribution , and employing , see sm . the resulting one - dimensional fpe reads : \right\ } \nonumber \\ & + \!\partial^2_x\!\left\{\!\left[\frac{\sigma_1 ^ 2\!-\!2\epsilon\sigma_1\sigma_2\!+\!\sigma_2 ^ 2}{2}x(1\!-\!x)\!+\!\!\frac{\gamma}{k}\right]\ ! q\right\}\!\!\equiv\ ! { \cal l}p(x , t ) , \label{eq : fp}\end{aligned}\ ] ] where and the last equality defines the fokker - planck operator needed in the sequel . in ref . a similar fpe was derived for the special case . for , the drift term reduces to the well - know expression favoring the trait with a higher growth rate . note that the variability in the growth rates affects both the diffusion term and the drift , which is due to the multiplicative nature of the environmental noise . for simplicity , we discuss the case of _ different environmental sensitivity _ , defined by and , _ i.e. _ only the reproduction rate of the first trait depends on the environment . the drift is then proportional to independent of . if , _ i.e. _ the second trait with a smaller variability in its birth rate is also faster in reproducing , the evolutionary dynamics does not change qualitatively compared to conversely , if , the situation changes dramatically : the growth rate favors trait * 1 * while the variability term favors trait * 2*. 
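the competition just described between the growth - rate term and the variability term can also be seen by integrating eq . ( 1 ) directly ; a minimal euler - maruyama sketch follows , and the discussion of the resulting stable coexistence point continues below . the noise normalisation , the correlation structure and the parameter values are assumptions chosen for illustration only .

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(nu=(1.0, 1.0), sigma=(0.3, 0.0), eps_c=0.0, gamma=1.0, K=1000,
             dt=1e-3, t_max=50.0):
    """euler-maruyama integration of the birth-death langevin dynamics:
    linear multiplicative environmental noise plus demographic noise.
    parameter values and noise normalisation are illustrative assumptions."""
    nu, sigma = np.asarray(nu, float), np.asarray(sigma, float)
    n = np.array([K / 2.0, K / 2.0])
    L = np.linalg.cholesky(np.array([[1.0, eps_c], [eps_c, 1.0]]))  # noise correlation
    frac = []
    for _ in range(int(t_max / dt)):
        death = gamma * n.sum() / K
        env = sigma * (L @ rng.normal(size=2)) * np.sqrt(dt)          # environmental noise
        demo = np.sqrt(np.maximum(n * (nu + death), 0.0)) * rng.normal(size=2) * np.sqrt(dt)
        n = np.maximum(n + n * (nu - death) * dt + n * env + demo, 0.0)
        frac.append(n[0] / max(n.sum(), 1e-12))
        if min(n) == 0.0:
            break
    return np.array(frac)

x = simulate()
print("final fraction of the variable trait: %.3f after %d steps" % (x[-1], len(x)))
```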
this leads to a stable fixed point for ( for variability is not sufficient to prevent extinction of trait * 2 * ) . such a dynamics can be interpreted as frequency - dependent fitness function . however , the frequency - dependence arises here from environmental noise and not from a pay - off matrix as in standard evolutionary game theory . , depending on selection strength , , and variability according to eq . . other parameters are , , and . the black line indicates the parabola , which is our prediction for . the inset shows cuts for exemplary values of in , violet , blue , green . [ fig : a ] ] even though environmental variability causes a drift term favoring the traits which is less sensitive to environmental changes , the interplay between drift and diffusion term has to be understood to predict the evolutionary outcome . this is even more important as for the particular situation discussed here , the environmental contribution to the drift caused by is intrinsically connected to the diffusion term . therefore we study the fixation probability , _ i.e. _ the probability that trait * 1 * fixates or trait * 2 * goes extinct . this quantity can be calculated by solving the backward fpe , for the boundary conditions and . the solutions is given by ( for details see sm ) , }\right\}}{1-\exp{\left\{2\zeta\ , \text{tanh}^{-1}\alpha\right\ } } } , \label{eq : fix_prob}\end{aligned}\ ] ] with , and in fig . [ fig : a ] we show the fixation probability for different values of and ( ) . results are obtained by the numerical solution of eq . , _ i.e_. before marginalization on . the parabola ( fig . [ fig : a ] black line ) defined by ( or ) , separates the regions where one of the two traits is predominant : in the grey ( green ) area , the smaller variability ( growth rate ) dominates , respectively . the general case of both species having variable birth rates yields analogous results : a selection advantage for the trait with less variability . in the inset , the fixation probability depending on is compared to the analytic solution ( eq . ) for the four values . both plots demonstrate the advantage of the less variable trait . for strong environmental variations it is then beneficial for a species to minimize its sensitivity to those variations rather than optimizing its growth rate . interestingly one can interpret this result in the context of game theory : decreasing the sensitivity to environmental changes also means to optimize the worst case scenario outcome : the average birth rate is the least reduced when the variability is small . in game theory , this corresponds to the maximin strategy which was shown to be very successful in many fields as finance , economy or behavioral psychology . in the context of evolutionary dynamics another example of a maximin strategy was proposed for bacterial chemotaxis where bacteria move move so as to optimize their minimal uptake of chemoattractants . besides contributing to the fitness , environmental variability also influences fixation probability and time in the case of _ neutral evolution _ , _ i.e. _ and . such analysis is of great interest , as evolution is often studied by investigating how neutral mutations evolve over time . in recent years fast - sequencing techniques made huge amounts of data available , see , _ e.g. _ , , which is now analyzed and interpreted by comparison to evolutionary models as the moran or fisher - wright models . 
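complementing the fixation - time quadrature sketched in the supplementary part above , the fixation probability discussed here can be evaluated by a single quadrature of the backward equation , u(x) = \int_0^x \psi / \int_0^1 \psi with \psi(y) = \exp ( -\int_0^y 2a/b ) . the drift and diffusion below encode the different - environmental - sensitivity case ( only trait 1 variable ) and are plausible readings of eq . ( 3 ) ; the coefficients are assumptions to be checked against the source .

```python
import numpy as np

def fixation_probability(a, b, npts=20001, eps=1e-8):
    """u(x) = int_0^x psi / int_0^1 psi with psi(y) = exp(-int_0^y 2a/b)."""
    x = np.linspace(eps, 1.0 - eps, npts)
    dx = np.diff(x)
    cumtrap = lambda f: np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dx)))
    psi = np.exp(-cumtrap(2.0 * a(x) / b(x)))
    S = cumtrap(psi)
    return x, S / S[-1]

# assumed drift/diffusion for sigma1 = sigma, sigma2 = 0: growth advantage s
# of trait 1 against its variability penalty (coefficients are assumptions)
s, sigma, K = 0.02, 0.3, 1000.0
a = lambda x: s * x * (1.0 - x) - sigma**2 * x**2 * (1.0 - x)
b = lambda x: sigma**2 * x**2 * (1.0 - x)**2 + (2.0 / K) * x * (1.0 - x)
x, u = fixation_probability(a, b)
print("fixation probability of the variable trait from x0 = 0.5: %.3f"
      % np.interp(0.5, x, u))
```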
while the correlation parameter does not qualitatively influence results discussed so far , it plays an important role for neutral evolution . for fully correlated noise , , eq . is the same as for and thereby correspond to the ones obtained for no environmental noise , extinctions are solely driven by demographic fluctuations and well - known results apply . in contrast , for all other values of , including uncoupled noise , the dynamics differs in two major respects . first , the drift term does not vanish and corresponds to a stable fixed point at . second , the diffusion term consists of demographic and environmental fluctuations . as the drift suppresses extinction events while a larger diffusion term favors them , a more detailed analysis is required to grasp the evolutionary outcome . due to the stable fixed point , the fixation probability qualitatively differs from the linear dependence which holds for constant or uncorrelated environments . in fig [ fig:2]a ) , typical solutions for and are shown which clearly demonstrated the ensuing s - shape for the uncorrelated case ( ) . and . parameters are : , , and . dots are simulations of the ibm . additional parameters are , , , , , and \!=\!100 ] ( see fig . [ fig:2]b ) . fluctuating environments decrease the fixation times for all initial conditions . this has a crucial consequence : when measuring extinction times and comparing them to standard models without environmental fluctuations , one can only explain large diffusion constants by small population sizes . indeed , it is often found that effective population sizes are much smaller than the census population sizes . fig . [ fig:3 ] shows that conspicuous orders - of - magnitude reductions in the population size set in already at moderate levels of environmental noise . amongst other explanations this could account for a difference between effective and census population size . in other words as long as the level of environmental noise and the correlation level of its influence on different growth rates is not known , the effective population size can only be interpreted as a lower bound for the census population size . . using , we plot the values of the population size and noise that lead to the fixation time for . the black line corresponds to ( see ) , i.e. either perfectly correlated noise or no environmental noise . in the presence of environmental noise , the values of are systematically higher and increase several orders of magnitude even for moderate noise levels . [ fig:3 ] ] to further investigate the impact of variable environmental conditions , we introduce an exemplary individual based model ( ibm ) . in particular , the ibm serves as a proof of principle that linear multiplicative noise can be realistically expected and enables us to study the effect of such noise beyond the white noise regime . importantly , our results presented above hold for any microscopic model whose macroscopic representation is given by eq . , _ i.e. _ where noise in the birth or death rates is linearly multiplicative . in specific scenario discussed here , the reproduction rate of an individual , , at time , depends a priori on the history of environmental conditions experienced during its lifetime ] . we first consider a constant environment . the average _ instantaneous growth rate _ is assumed to be a positive , monotonically increasing function of . 
in particular , we consider the sigmoidal function : with the ordinate of the inflection point , the maximal deviation from it , and scales the sensitivity to environmental changes . let us now consider changing environments and individuals whose current growth rate memorizes previously experienced environments . the reproduction rate of an individual , , of type now depends on the whole vector , . for concreteness , we assume that the rate is where the memory parameter ] , the mean of the growth rate and std of the noise in eq . are given by : note that the variability in the growth rate not only results in , but also influences the average reproduction rate . while for such a variability increases , the second term of is reduced while increases till it changes sign ( see sm for details ) . for instance , for the growth rate is approximately . hence , the more variable trait has a disadvantage in the average reproduction rate in addition to the effects discussed above . for , the approximation holds . dependencies in this expression are intuited as follows . the number of environmental changes an individual experiences until the memory resets is of the order , where is the typical time for an individual to reproduce or die . as environmental changes are independent random events , the variance of the reproduction rates is . the expression for is finally obtained noting that correlations in the noise extend over times therefore it follows that the average reproduction rate drops out . comparison of the ibm and the langevin model . we show the fixation time for neutral evolution for _ vs _ the environmental switching rate . dots correspond to the ibm [ ] for different values of in red , blue and green . black lines are analytic solutions [ eq . with eq . ] . for quickly fluctuating environments both results are in good agreement whilst for large the white noise approximation fails . other parameters are as in fig . [ fig:2 ] . ] for a detailed comparison of the ibm with the analytics derived in the first part of this paper , we simulate the ibm with a modified gillespie algorithm updating reproduction rates after every environmental change . as shown in figs . [ fig:2]a ) and b ) , results for fixation probability and time , are in very good agreement with analytic solutions [ eqs . and ] . in particular , the sigmoidal shape of the fixation probability is well reproduced by the ibm , supporting the existence and importance of linear multiplicative noise . finally , the ibm enables us to study the environmental switching rate . this is of main interest as results obtained previously strictly only hold for very rapidly fluctuating environments . in fig . [ fig : change_env ] , the dependency on of the extinction time in the neutral case for is shown for different ; see sm for results with . the black lines correspond to eq . mapped according to eqs . and dots are obtained by stochastic simulations of the ibm for . for both models are in very good agreement . this demonstrates that the white noise approximation is valid in a broad parameter range , where fluctuating environments substantially influence the evolutionary dynamics . in summary , we demonstrated that environmental variability has crucial impact on evolutionary fitness . first , we quantified the role of reduced sensitivity to environmental changes and determined how it substantially increases the fitness . second , we showed that the timescale of extinction in the neutral case is strongly affected by environmental noise . 
that provides a mechanism to explain experimental observations of population sizes that are often much smaller than expected . finally , we investigated individual based models that generate the linear multiplicative noise considered here . it will be of interest to investigate how different forms of memory or time - dependent reproduction rates influence evolution and to integrate them with evolutionary game models . in this supporting material , we provide more details on the calculations leading to the one - dimensional fokker - planck equation ( fpe ) , eq . ( 3 ) main text . moreover , we present calculations for the average fixation time and the extinction probability . we discuss the mapping of the individual based model ( ibm ) to the langevin model . for the no - memory limit we present an analytic derivation of the mapping . for other parameter values we give heuristic arguments that are supported by additional data . finally , we present results for non - neutral evolution investigating the regime in which the white noise approximation is an adequate description for the evolutionary process .
|
xxx = aircraft + state matrix of the damaged aircraft + input matrix of the damaged aircraft + aircraft wing span , + output matrix of the damaged aircraft + dimensionless derivative of rolling moment , + dimensionless derivative of yawing moment , + dimensionless derivative of side force , + state transition matrix of the damaged aircraft + change in side wash angle with respect to change in side slip angle + gravitational acceleration , + normalized mass moment of inertia about the x axis , + normalized product of inertia about the xz axis , + normalized mass moment of inertia about the z axis , + dimensional derivative of rolling moment , + vertical stabilizer lift force , + distance from the vertical stabilizer aerodynamic center to the aircraft center of gravity , + aircraft mass , + dimensional derivative of yawing moment , + roll rate , + yaw rate , + aircraft wing area , + vertical stabilizer area , + engine thrust , + engine thrust command , + time , + time delay , + vertical stabilizer volume ratio , + airspeed , + aircraft weight , + dimensional derivative of side force , + distance from the outermost engine to the aircraft center of gravity , + distance from the vertical stabilizer center of pressure to the fuselage center line , + angle of attack , + side slip angle , + flight path angle , + aileron deflection , + rudder deflection , + collective thrust , + differential thrust , + damping ratio + efficiency factor + pitch angle , + air density , + time constant , + roll angle , + bandwidth frequency , + first order timederivative + second order time derivative + trimmed value + damaged aircraft dynamics +the vertical stabilizer of an aircraft is an essential element in providing the aircraft with its directional stability characteristic while ailerons and rudder serve as the primary control surfaces of the yawing and banking maneuvers . in the event of an aircraft losing its vertical stabilizer , the sustained damage will cause lateral / directional stability to be compromised and lack of control is likely to result in a fatal crash .notable examples of such a scenario are the crash of the american airline 587 in 2001 when an airbus a300 - 600 lost its vertical stabilizer in wake turbulence , killing all passengers and crew members and the crash of japan airlines flight 123 in 1985 when a boeing 747-sr100 lost its vertical stabilizer leading to an uncontrollable aircraft , resulting in 520 casualties .however , not all situations of losing the vertical stabilizer resulted in a total disaster . in one of those cases , the united airlines flight 232 in 1989 , differential thrust was proved to be able to make the aircraft controllable .another remarkable endeavor is the landing of the boeing 52-h even though the aircraft lost most of its vertical stabilizer in 1964 . in literature ,research on this topic has been conducted with two main goals : to understand the response characteristics of the damaged aircraft such as the work of bacon and gregory , nguyen and stepanyan , and shah , as well as to come up with an automatic control algorithm to save the aircraft from disasters where the work of burcham et al . , guo et al . , liu et al . , tao and ioanou , and urnes and nielsen hold the detailed analysis .there are many valuable efforts in literature , such as the work of shah , where a wind tunnel study was performed to evaluate the aerodynamic effects of damages to lifting and stability / control surfaces of a commercial transport aircraft . 
in his work, shah studied this phenomenon in the form of partial or total loss of the wing , horizontal , or vertical stabilizers for the development of flight control systems to recover the damaged aircraft from adverse events .the work of nguyen and stepanyan investigates the effect of the engine response time requirements of a generic transport aircraft in severe damage situations associated with the vertical stabilizer .they carried out a study which concentrates on assessing the requirements for engine design for fast responses in an emergency situation .in addition , the use of differential thrust as a propulsion command for the control of directional stability of a damaged transport aircraft was studied by urnes and nielsen to identify the effect of the change in aircraft performance due to the loss of the vertical stabilizer and to make an improvement in stability utilizing engine thrust as an emergency yaw control mode with feedback from the aircraft motion sensors .existing valuable research in literature provides very unique insight into the dynamics of such an extreme scenario whereas in this paper a unique extension to the existing works is provided . here, we provide a loop - shaping based robust differential thrust control methodology to control such a damaged aircraft and land safely .this research is motivated to improve air travel safety in commercial aviation by incorporating the utilization of differential thrust to regain lateral / directional stability for a commercial aircraft ( in this case , a boeing 747 - 100 ) with a damaged vertical stabilizer . for this purpose ,the damaged aircraft model is constructed in section [ ac model ] , where its plant dynamics is investigated . in section [ differential thrust ] , the engine dynamics of the aircraft is modeled as a system of differential equations with corresponding time constant and time delay terms to study the engine response characteristic with respect to a differential thrust input , and the novel differential thrust control module is developed to map the rudder input to differential thrust input . in section [ open loop ] , the aircraft s open loop system response is investigated . in section[ robust ] , the robust control system design based on loop - shaping approach is implemented as a means to stabilize the damaged aircraft .investigation provides remarkable results of such robust differential thrust methodology as the damaged aircraft can reach steady state stability relatively fast , under feasible control efforts . in section [ robustness ] ,robustness and uncertainty analysis is conducted to test the guaranteed stability bounds and to validate the overall performance of the system in the presence of given ball of uncertainty . 
with section [ conclusion ] ,the paper is finalized , and future work is discussed .in this paper , the flight scenario is chosen to be a steady , level cruise flight for the boeing 747 - 100 at mach 0.65 and 20,000 feet .it is assumed that at one point during the flight , the vertical stabilizer is completely damaged , and in the followings the means to control such aircraft is investigated in such an extreme case scenario .for this purpose , first , the damaged aircraft model is developed for analysis , where the flight conditions for the damaged model are summarized in table [ table : flight conditions ] ..flight conditions [ cols="^,^",options="header " , ] table [ table : damaged_damping ] clearly indicates the unstable nature of the damaged aircraft in the dutch roll mode by the obvious right half plane ( rhp ) pole locations .furthermore , the pole of the spiral mode lies at the origin .the only stable mode of the damaged aircraft remains to be the roll mode .following the aerodynamic and stability analysis of the damaged aircraft , we next concentrate on the differential thrust methodology .with emerging advancements in manufacturing processes , structures , and materials , it is a well known fact that aircraft engines have become highly complex systems and include numerous nonlinear processes , which affect the overall performance ( and stability ) of the aircraft . from the force - balance point of view , this is usually due to the existing coupled and complex dynamics between engine components and their relationships in generating thrust . however , in order to utilize the differential thrust generated by the jet engines as a control input for lateral / directional stability , the dynamics of the engine need to be modeled in order to gain an insight into the response characteristics of the engines .engine response , generally speaking , depends on its time constant and time delay characteristics .time constant dictates how fast the thrust is generated by the engine while time delay ( which is inversely proportional to the initial thrust level ) is due to the lag in engine fluid transport and the inertia of the mechanical systems such as rotors and turbo - machinery blades .it is also suggested that the non - linear engine dynamics model can be simplified as a second - order time - delayed linear model as where and are the damping ratio and bandwidth frequency of the closed - loop engine dynamics , respectively ; is the time delay factor , and is the thrust command prescribed by the engine throttle resolver angle . with the time constant defined as the inverse of the bandwidth frequency , and chosento be 1 representing a critically damped engine response ( to be comparable to existing studies ) , the engine dynamics can be represented as = \left [ \begin{array}{cc } 0 & 1\\ \frac{-1}{\tau^2 } & \frac{-2}{\tau } \end{array}\right ] \left [ \begin{array}{c } t\\ \dot{t } \end{array}\right ] + \left [ \begin{array}{c } 0 \\ \frac{1}{\tau^2 } \end{array}\right ] t_c(t - t_d)\ ] ] for this study , the pratt and whitney jt9d-7a engine is chosen for the application in the boeing 747 - 100 , where the engine itself produces a maximum thrust of 46,500 lbf . 
at mach 0.65 and 20,000 feet flight conditions ,the engine time constant is 1.25 seconds , and the time delay is 0.4 second .the engine thrust response curve at mach 0.65 and 20,000 feet is , therefore , obtained as shown in fig .[ fig : engine_response ] , which provides a useful insight into how the time constant and time delay factors affect the generation of thrust for the jt9d-7a jet engine . at mach 0.65 and 20,000 feet , with the engine time constant of 1.25 seconds , and the time delay of 0.4 second, it takes approximately ten seconds for the engine to reach steady state and generate its maximum thrust capacity at 46,500 lbf from the trim thrust of 3221 lbf .the increase in thrust generation follows a relatively linear fashion with the engine response characteristic of approximately 12,726 lbf / s during the first two seconds , and then the thrust curve becomes nonlinear until it reaches its steady state at maximum thrust capacity after about ten seconds .this represents one major difference between the rudder and differential thrust as a control input .due to the lag in engine fluid transport and turbo - machinery inertia , differential thrust ( as a control input ) can not respond as instantaneously as the rudder , which has to be taken into account very seriously in control system design . in order to utilize differential thrust as a control input for the four - engined boeing 747 - 100 aircraft , a differential thrust control module must be developed . here, the differential thrust input is defined as the difference between the thrust generated by engine number 1 and engine number 4 while the amounts of thrust generated by engine number 2 and 3 are kept equal to each other as shown in eqs .( [ eq : d_th]-[eq : th23 ] ) .this concept is illustrated in further details in fig .[ fig : model ] . engine number 1 and 4 are employed to generate the differential thrust due to the longer moment arm , which makes the differential thrust more effective as a control for yawing moment .this brings into the picture the need of developing a logic that maps the rudder input to differential thrust input , which is further explained in the following section . when the vertical stabilizer of the aircraft is intact ( i.e. with nominal plant dynamics ) , the pilot has the ailerons and rudder as major control surfaces .however , when the vertical stabilizer is damaged , most probably , the pilot will keep on demanding the control effort from the rudder until it is clear that there is no response from the rudder . to eliminate this mishap , but to still be able to use the rudder demand , a differential thrust control module is introduced in the control logic , as shown in fig .[ fig : control_logics ] and fig .[ fig : dt_control_module ] , respectively .this differential thrust control module maps corresponding input / output dynamics from the rudder pedals to the aircraft response , so that when the rudder is lost , the rudder input ( from the pilot ) will still be utilized but switched to the differential thrust input , which will act as a _ rudder - like input _ for lateral / directional controls .this logic constitutes one of the novel approaches introduced in this paper . as it can be also seen from fig .[ fig : control_logics ] and fig .[ fig : dt_control_module ] , the differential thrust control module s function is to convert the rudder pedal input from the pilot to the differential thrust input . 
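before turning to the rudder - to - thrust conversion , the engine response quoted above can be reproduced with a short numerical sketch of the second - order , time - delayed model : critically damped , time constant 1.25 s , transport delay 0.4 s , stepping from the trim thrust of 3221 lbf towards the 46,500 lbf maximum of the jt9d-7a . the numerical values are taken from the text ; the explicit - euler integration and the way the delay is handled ( valid for a single step command ) are illustrative choices .

```python
import numpy as np

tau, t_d = 1.25, 0.4              # s, from the flight condition quoted above
T_trim, T_max = 3221.0, 46500.0   # lbf
dt, t_end = 0.001, 12.0

t = np.arange(0.0, t_end, dt)
T = np.full_like(t, T_trim)       # thrust state
Tdot = np.zeros_like(t)

def command(time):
    """step command applied at t = 0, seen by the engine after the transport delay."""
    return T_max if time >= t_d else T_trim

for k in range(1, len(t)):
    Tc = command(t[k - 1])
    # state-space form: Tddot = -(1/tau^2) T - (2/tau) Tdot + (1/tau^2) Tc(t - t_d)
    Tddot = (-T[k - 1] - 2.0 * tau * Tdot[k - 1] + Tc) / tau**2
    Tdot[k] = Tdot[k - 1] + Tddot * dt
    T[k] = T[k - 1] + Tdot[k - 1] * dt

print("thrust after  2 s: %8.0f lbf" % np.interp(2.0, t, T))
print("thrust after 10 s: %8.0f lbf" % np.interp(10.0, t, T))
```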
in order to achievethat , the rudder pedal input ( in radians ) is converted to the differential thrust input ( in pounds - force ) which is then provided into the engine dynamics , as discussed previously in section [ propulsion ] . with this modification , the engine dynamics will dictate how differential thrust is generated , which is then provided as a `` virtual rudder '' input into the aircraft dynamics .the radian to pound - force mapping is provided in the next section , for the completeness of analysis .using fig .[ fig : model ] and with the steady , level flight assumption , the following relationship can be obtained : which means the yawing moment by deflecting the rudder and by using differential thrust have to be the same .therefore , the relationship between the differential thrust control input and the rudder control input can be obtained as based on the flight conditions at mach 0.65 and 20,000 feet , and the data for the boeing 747 - 100 summarized in table [ table : flight conditions ] and table [ table : the undamaged aircraft data ] , the conversion factor for the rudder control input to the differential thrust input is calculated to be due to the sign convention of rudder deflection and the free body diagram in fig .[ fig : model ] , here is negative .therefore , for the boeing 747 - 100 , in this study , the conversion factor for the mapping of a rudder input to a differential thrust input is found to be at this point , the worst case scenario is considered , and it is assumed that the aircraft has lost its vertical stabilizer so that the rudder input is converted to the differential thrust input according to the logic discussed previously in this section . unlike the rudder , due to delayed engine dynamics with time constant , there is a major difference in the commanded differential thrust and the available differential thrust as shown in fig .[ fig : dt ] .it can be seen from fig . [fig : dt ] that compared to the commanded differential thrust , the available differential thrust is equal in amount but longer in the time delivery . for a one degree step input on the rudder ,the corresponding equivalent commanded and available differential thrust are 7737 lbf , which is deliverable in ten second duration .unlike the instantaneous control of the rudder input , there is a lag associated with the use of differential thrust as a control input .this is due to the lag in engine fluid transport and the inertia of the mechanical systems such as rotors and turbo - machinery blades .this is a major constraint of plant dynamics and will be taken into account during the robust control system design phase in the following sections .following to this , the open loop response characteristics of the aircraft with a damaged vertical stabilizer to one degree step inputs from the ailerons and differential thrust are presented in fig .[ fig : open_loop_response ] . here , it can be clearly seen that when the aircraft is damaged and the vertical stabilizer is lost , the aircraft response to the pilot s inputs is absolutely unstable in all four states ( as it was also obvious ( and expected ) from the pole locations ) .this shows the pilot will not have much chance to stabilize the aircraft in time , which calls for a novel approach to save the damaged aircraft .this is another point where the second novel contribution of the paper is introduced : automatic control strategy to stabilize the aircraft , which allows safe ( i.e. 
intact ) landing of the aircraft .as having been witnessed in section [ open loop ] , the open loop responses of the damaged aircraft are unstable in all four lateral / directional states .this means the pilot will not have much chance to save the aircraft , which calls for a novel approach to save the damaged aircraft and to provide a safe landing .therefore , in this section , the robust control system design based on loop - shaping approach is chosen as a means to stabilize the damaged aircraft due to its ability to suppress external disturbances and noise , and overall system robustness .the damaged aircraft plant s open loop singular values are shaped by the pre- and post - compensation as illustrated in fig .[ fig : shaped_plant ] . as it can be seen from fig .[ fig : shaped_plant ] , is the open loop plant of the damaged aircraft while and are the pre- and post - compensation , respectively .the shaped plant is , therefore , in addition , the controller is synthesized by solving the robust stabilization problem for the shaped plant with a normalized left coprime factorization of , and the feedback controller for plant is , therefore , . next , the implemented loop - shaping diagram is shown in fig . [fig : hinf_loop ] .loop - shaping diagram ] figure [ fig : hinf_loop ] provides an insight into the robust controller design based on the loop - shaping approach .the main goal of this controller is to stabilize all four lateral / directional states of the damaged aircraft , which are roll rate , roll angle , side - slip angle , and yaw rate .the control inputs of the damaged aircraft are ailerons and differential thrust . as it is obvious from fig .[ fig : hinf_loop ] , the pilot s inputs , which are one degree step inputs for both ailerons and rudder , will go through the pre - filter gain and then the _ input control module _, where the aileron control signal is routed through the saturation limits of 26 degrees and the rudder control input is routed through the _ differential thrust control module _ , where the rudder input is converted to differential thrust input as discussed previously in section [ differential thrust ] .it is also worth noting that there are also a differential thrust saturation limit of 43,729 lbf and a thrust generation rate limiter of 12,726 lbf / s imposed on the differential thrust control as discussed in section [ differential thrust ] .this is to make sure the control inputs are within the limits of both the ailerons and differential thrust .furthermore , in order for the control efforts to be feasible in a real - life scenario , the control effort signals are also routed through the _ actuation dynamics _, where the same saturation and rate limits are imposed on the ailerons and differential thrust as discussed previously .the next step is to select the weighting functions , which should be taken into careful consideration due to the dimensions of the system . after an iterative process ,the selected weighting functions for are as we can then construct the _ system matrix _ as \ ] ] the selected weighting functions for are as where the _ system matrix _ of becomes \ ] ] as it is customary in general loop - shaping formulation , the maximum stability margin is defined as a performance criterion where is the optimal cost . 
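the mechanics of the shaping step , i.e. evaluating the singular values of the open - loop and shaped plants over frequency , can be sketched as follows ; the resulting stability - margin value for the actual design is reported next . the state - space matrices and the first - order weight used here are placeholders ( the paper 's numerical plant and weighting functions are not reproduced ) , so only the frequency - sweep procedure is illustrated .

```python
import numpy as np

def freq_response(A, B, C, D, w):
    """transfer matrix C (jw I - A)^-1 B + D evaluated at each frequency in w."""
    n = A.shape[0]
    return np.array([C @ np.linalg.solve(1j * wk * np.eye(n) - A, B) + D for wk in w])

# placeholder 4-state, 2-input lateral/directional plant (not the paper's matrices)
A = np.array([[-0.10, 0.00, -1.00,  0.20],
              [ 1.00, 0.00,  0.00,  0.00],
              [ 0.30, 0.00, -0.05,  0.00],
              [ 0.00, 0.05,  0.00, -0.50]])
B = np.array([[0.20, 0.010],
              [0.00, 0.000],
              [0.05, 0.002],
              [0.00, 0.000]])
C, D = np.eye(4), np.zeros((4, 2))

# placeholder first-order input weight W1 = k (s + z)/(s + p), with W2 = I
k_w, z_w, p_w = 5.0, 0.5, 0.005
w = np.logspace(-3, 2, 300)
G = freq_response(A, B, C, D, w)
W1 = k_w * (1j * w + z_w) / (1j * w + p_w)
Gs = G * W1[:, None, None]                   # shaped plant W2 * G * W1

sv = lambda M: np.array([np.linalg.svd(Mk, compute_uv=False) for Mk in M])
sv_G, sv_Gs = sv(G), sv(Gs)
i = np.argmin(np.abs(w - 0.01))
print("max singular value near w = 0.01 rad/s: open loop %.1f dB, shaped %.1f dB"
      % (20 * np.log10(sv_G[i].max()), 20 * np.log10(sv_Gs[i].max())))
```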
for our analysis , the maximum stability margin is then obtained as which is within the suggested optimal value bounds : .next , the sensitivity ( s ) and co - sensitivity ( t ) plots are constructed .figure [ fig : si_so ] illustrates the input and output sensitivity functions .+ ( a ) input sensitivity functions + ( b ) output sensitivity functions next , we can investigate the shaped plant behaviors for inputs and outputs in fig .[ fig : li_lo ] . + ( a ) input responses of shaped plant + ( b ) output responses of shaped plant from fig .[ fig : li_lo ] , we can see that for the aileron control input , the shaped plant has very good response characteristics of having high gain at low frequencies for tracking and low gain at high frequencies for disturbance rejection .for the differential thrust control input , although the gain at low frequencies is not as high as that of the aileron control input , but the gain is relatively linear , which makes the differential thrust control quite predictable for the pilots . at high frequencies , the gain roll - off is also linear and quite similar to that of the aileron control input , which is helpful at rejecting disturbances .in addition , fig .[ fig : li_lo ] also shows the output responses of the shaped plant in all four lateral / directional states , from which we can see that the implemented controller based on loop - shaping approach can augment the system in two groups : roll angle and roll rate ( and ) and side - slip angle and yaw rate ( and ) .it is also obvious that roll angle and roll rate ( and ) have higher gains than side - slip angle and yaw rate ( and ) , which is expected when the aircraft loses its vertical stabilizer . in addition , it is possible to look at the augmented control action by the loop - shaping controller in fig . [fig : k_aug ] .furthermore , the co - sensitivity plots related to the shaped plant behaviors for the inputs and outputs are presented in fig .[ fig : ti_to ] . fig [ fig : k_aug ] and fig .[ fig : ti_to ] prove that the controller based on loop - shaping approach is able to achieve desirable closed loop response characteristics in the frequency domain in terms of robustness and stability .loop - shaping controller ] + ( a ) co - sensitivity plot for input responses of shaped plant + ( b ) co - sensitivity plot for output responses of shaped plant next , the comparison of the open - loop , shaped , and robustified plant response is carried out to investigate the effect of the loop - shaping on the plant dynamics as illustrated in fig .[ fig : loop_da ] and fig .[ fig : loop_dt ] . as seen in fig .[ fig : loop_da ] and fig .[ fig : loop_dt ] , compared to the open - loop plant response , the performance of the damaged aircraft is further improved by controller .vs. shaped vs. robustified plant for aileron input , title="fig : " ] + ( a ) transfer function vs. shaped vs. robustified plant for aileron input , title="fig : " ] + ( b ) transfer function vs. shaped vs. robustified plant for aileron input , title="fig : " ] + ( c ) transfer function vs. shaped vs. robustified plant for aileron input , title="fig : " ] + ( d ) transfer function vs. shaped vs. robustified plant for differential thrust input , title="fig : " ] + ( a ) transfer function vs. shaped vs. robustified plant for differential thrust input , title="fig : " ] + ( b ) transfer function vs. shaped vs. robustified plant for differential thrust input , title="fig : " ] + ( c ) transfer function vs. shaped vs. 
robustified plant for differential thrust input , title="fig : " ] + ( d ) transfer function furthermore , it is also worth checking the allowable gain and phase margin variations ( i.e. associated uncertainty balls in gain and phase margins for a robust response and guaranteed stability ) . obtained results are described in fig .[ fig : diskmargin_input ] and fig .[ fig : diskmargin_output ] . from fig .[ fig : diskmargin_input ] and fig .[ fig : diskmargin_output ] , it can be clearly seen that with the loop - shaping control system design , both the inputs and outputs can achieve desirable and safe stability margins .now it is time to look at the time domain behaviors of our system , which are shown in fig .[ fig : hinf_state ] .it is worth mentioning that the control inputs for both plants are one degree step inputs for both the ailerons and differential thrust to simulate an extreme scenario test to see whether the damaged aircraft utilizing differential thrust can hold itself in a continuous yawing and banking maneuver without becoming unstable and losing control . from fig .[ fig : hinf_state ] , it is obvious that after only 15 seconds , all four states of the aircraft s lateral / directional dynamics reach steady state values , which means that the controller can stabilize the damaged aircraft in only 15 seconds .however , this comes at the cost of the control efforts as shown in fig .[ fig : hinf_control_efforts ] , which are still under the control limits and without any saturation of the actuators . as discussed previously in this section , in order to have a feasible control strategy in real - life situation , limiting factors are imposed on the aileron and differential thrust control efforts .the aileron deflection is limited at degrees . for differential thrust , a differential thrust saturationis set at 43,729 lbf , which is the difference of the maximum thrust and trimmed thrust values of the jt9d-7a engine .in addition , a rate limiter is also imposed on the thrust response characteristic at 12,726 lbf / s as discussed in section [ differential thrust ] . 
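the actuation limits just listed can be illustrated with a short sketch in which a commanded differential - thrust signal is passed through the 43,729 lbf saturation and the 12,726 lbf / s rate limit ; the commanded signal itself is an illustrative placeholder .

```python
import numpy as np

sat_limit = 43729.0      # lbf, max minus trim thrust of the jt9d-7a
rate_limit = 12726.0     # lbf/s, thrust generation rate limit
dt = 0.01
t = np.arange(0.0, 20.0, dt)

# placeholder command: aggressive initial demand decaying to a small trim value
command = 50000.0 * np.exp(-t / 3.0) + 15.0

delivered = np.zeros_like(command)
for k in range(1, len(t)):
    target = np.clip(command[k], -sat_limit, sat_limit)       # saturation
    step = np.clip(target - delivered[k - 1],
                   -rate_limit * dt, rate_limit * dt)         # rate limiting
    delivered[k] = delivered[k - 1] + step

print("peak delivered differential thrust: %.0f lbf" % delivered.max())
print("reaching 90%% of the saturation limit takes at least %.1f s"
      % (0.9 * sat_limit / rate_limit))
```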
the aileron control effort , as indicated by fig .[ fig : hinf_control_efforts ] , calls for the maximum deflection of approximately 2.4 degrees and reaches steady state at approximately -0.1 degree of deflection after 15 seconds .this aileron control effort is very reasonable and achievable if the ailerons are assumed to have instantaneous response characteristics by neglecting the lag from actuators or hydraulic systems .the differential thrust control effort demands a maximum differential thrust of approximately 3350 lbf , which is well within the thrust capability of the jt9d-7a engine , and the differential thrust control effort reaches steady state at around 15 lbf after 15 seconds .therefore , it can be concluded that the robust control system design based on the loop - shaping approach is proven to be able to stabilize and save the damaged aircraft .the robustness of the loop - shaping robust differential thrust control system design presented in this paper is investigated by the introduction of 30% of full block , additive uncertainty into the plant dynamics of the damaged aircraft to test the performance of the aircraft in the presence of uncertainty .[ fig : robust_hinf_loop ] shows the logic behind the robust control system design in the presence of uncertainty .loop - shaping diagram in the presence of uncertainty ] one thousand monte - carlo simulations were conducted to test the robustness of the damaged plant in the presence of uncertainty .the state responses in the presence of 30% of uncertainty are shown in fig .[ fig : robust_output_u_30 ] , where it is obvious that the robust control system design is able to perform well under given uncertain conditions and the damaged aircraft has stable steady state responses within only 15 seconds . in that sense ,the uncertain plant dynamics are well within the expected bounds . however , these favorable characteristics come at the expense of the control effort from the ailerons and differential thrust as shown in fig .[ fig : robust_control_effort_u_30 ] . according to fig .[ fig : robust_control_effort_u_30 ] , when there is 30% full block , additive uncertainty , the aileron control demands the maximum deflection of approximately 2.4 degrees and reaches steady state between -0.12 and -0.09 degree after 15 seconds , which is quite similar to the required aileron control effort when there is no uncertainty .therefore , the aileron control effort demands are reasonable and feasible due to the limiting factor of 26 degrees of the aileron deflection and the assumption that ailerons have instantaneous response characteristics by neglecting the lag from actuators or hydraulic systems . as for differential thrust , when there is 30% uncertainty , the differential thrust control demands at maximum approximately 6500 lbf , which is within the thrust capability of the jt9d-7a engine , and the differential thrust control effort reaches steady state between -100 lbf ( negative differential thrust means ) and 100 lbf ( positive differential thrust means ) after 15 seconds . 
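the monte - carlo robustness test described above can be sketched as follows ( the comparison with the nominal case continues below ) . the second - order plant and the static gain used here are simple placeholders rather than the aircraft model and the synthesized controller , so the sketch only illustrates the testing procedure : perturb the plant matrices by up to 30% , close the loop , and count the stable closed loops over 1000 samples .

```python
import numpy as np

rng = np.random.default_rng(7)

A = np.array([[0.0, 1.0],
              [2.0, -1.0]])        # open-loop unstable placeholder plant
B = np.array([[0.0],
              [1.0]])
K = np.array([[12.0, 4.0]])        # placeholder stabilizing static feedback u = -K x

level, runs, stable = 0.30, 1000, 0
for _ in range(runs):
    dA = level * np.abs(A) * rng.uniform(-1.0, 1.0, A.shape)   # 30% entrywise perturbation
    Acl = (A + dA) - B @ K                                     # perturbed closed loop
    stable += bool(np.all(np.linalg.eigvals(Acl).real < 0.0))

print("fraction of stable closed loops over %d samples: %.3f" % (runs, stable / runs))
```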
compared to the case whenthere is no uncertainty , the demanded differential thrust associated with uncertain plant dynamics is higher in both magnitude and rate .it is also obvious from fig .[ fig : robust_control_effort_u_30 ] that in a few cases , the differential thrust control effort demand hit the thrust generation rate limiter , which is set at 12,726 lbf / s for the jt9d-7a engine , but fortunately , the control system is so robust that throughout 1000 monte - carlo simulations , it can stabilize the aircraft s uncertain plant dynamics .again , due to the differential thrust saturation set at 43,729 lbf and the thrust response limiter set at 12,726 lbf / s , this control effort of differential thrust in the presence of uncertainty is achievable in real life situation .this paper studied the utilization of differential thrust as a control input to help a boeing 747 - 100 aircraft with a damaged vertical stabilizer regain its lateral / directional stability .the motivation of this research study is to improve the safety of air travel in the event of losing the vertical stabilizer by providing control means to safely control and/or land the aircraft . throughout this paper ,the necessary damaged aircraft model was constructed , where lateral / directional equations of motion were revisited to incorporate differential thrust as a control input for the damaged aircraft .then the open loop plant dynamics of the damaged aircraft was investigated .the engine dynamics of the aircraft was modeled as a system of differential equations with engine time constant and time delay terms to study the engine response time with respect to a commanded thrust input .next , the novel differential thrust control module was presented to map the rudder input to differential thrust input .the loop - shaping approach based robust control system design s ability to stabilize the damaged aircraft was proven as investigation results demonstrated that the damaged aircraft was able to reach steady state stability within only 15 seconds under feasible control efforts .further analysis on robustness showed that the loop - shaping approach based robust differential thrust methodology was also able to stabilize the uncertain plant dynamics in the presence of 30% full block , additive uncertainty associated with the damaged aircraft dynamics . through listed analyses above , the ability to save the damaged aircraft by the robust differential thrust control strategy has been demonstrated in this paper .this framework provides an automatic control methodology to save the damaged aircraft and avoid the dangerous coupling of the aircraft and pilots , which led to crashes in a great number of commercial airline incidents .furthermore , it has also been concluded that due to the heavy dependence of the differential thrust generation on the engine response , in order to better incorporate the differential thrust as an effective control input in a life - saving scenario , major developments in engine response characteristics are also desired to better assist such algorithms. national transportation safety board , `` in - flight separation of vertical stabilizer american airlines flight 587 airbus industrie a300 - 605r , n14053 belle harbor , new york ,november 12 , 2001 '' , ntsb / aar-04/04 , 2004 .guo , j. , tao , g. , and liu , y. , `` multivariable adaptive control of nasa generic transport aircraft model with damage , '' _ journal of guidance , control , and dynamics _ , vol.34 , no.05 , sep . 
oct .2011 , pp .1495 - 1506 .liu , y. , tao , g. , and joshi , s. m. , `` modeling and model reference adaptive control of aircraft with asymmetric damages , '' _ 2009 aiaa guidance , navigation , and control conference _, aiaa paper 2009 - 5617 , 2009 .tao , g. , and ioannou , p. a. , `` robust model reference adaptive control for multivariable plants , '' _ international journal of adaptive control and signal processing _2 , no . 3 , 1988 ,doi:10.1002/acs.4480020305 mcfarlane , d. , and glover , k. , `` robust controller design using normalized coprime factor plant descriptions , '' vol .138 of _ lecture notes in control and information sciences _ , springer - verlag , berlin , 1990 .
|
the vertical stabilizer is the key aerodynamic surface that provides an aircraft with its directional stability characteristic , while the ailerons and rudder are the primary control surfaces that give pilots control authority over yawing and banking maneuvers . losing the vertical stabilizer therefore results in the loss of lateral / directional stability and control , which is likely to cause a fatal crash . in this paper , we construct a damaged aircraft model which has no physical rudder control surface , and propose a strategy in which differential thrust is utilized as a control input acting as a `` virtual '' rudder to help maintain stability and control of the damaged aircraft . a loop - shaping based robust control system design is implemented to achieve a stable and robust flight envelope , aimed at providing a safe landing . investigation results demonstrate successful application of the robust differential thrust methodology , as the damaged aircraft can achieve stability within feasible control limits . finally , the robustness analysis concludes that the stability and performance of the damaged aircraft in the presence of uncertainty remain within desirable limits , demonstrating not only a robust but also a safe flight mission through the proposed loop - shaping robust differential thrust control methodology .
|
the relatively recent discipline of data mining involves a wide spectrum of techniques , inherited from different origins such as statistics , databases , or machine learning . among them, association rule mining is a prominent conceptual tool and , possibly , a cornerstone notion of the field , if there is one .currently , the amount of available knowledge regarding association rules has grown to the extent that the tasks of creating complete surveys and websites that maintain pointers to related literature become daunting .a survey , with plenty of references , is , and additional materials are available in ; see also , , , , , , and the references and discussions in their introductory sections .given an agreed general set of `` items '' , association rules are defined with respect to a dataset that consists of `` transactions '' , each of which is , essentially , a set of items .association rules are customarily written as , for sets of items and , and they hold in the given dataset with a specific `` confidence '' quantifying how often appears among the transactions in which appears .a close relative of the notion of association rule , namely , that of exact implication in the standard propositional logic framework , or , equivalently , association rule that holds in 100% of the cases , has been studied in several guises .exact implications are equivalent to conjunctions of definite horn clauses : the fact , well - known in logic and knowledge representation , that horn theories are exactly those closed under bitwise intersection of propositional models leads to a strong connection with closure spaces , which are characterized by closure under intersection ( see the discussions in or ) .implications are also very closely related to functional dependencies in databases .indeed , implications , as well as functional dependencies , enjoy analogous , clear , robust , hardly disputable notions of redundancy that can be defined equivalently both in semantic terms and through the same syntactic calculus .specifically , for the semantic notion of entailment , an implication is entailed from a set of implications if every dataset in which all the implications of hold must also satisfy ; and , syntactically , it is known that this happens if and only if is derivable from via the armstrong axiom schemes , namely , reflexivity ( for ) , augmentation ( if and then , where juxtaposition denotes union ) and transitivity ( if and then ) . also , such studies have provided a number of ways to find implications ( or functional dependencies ) that hold in a given dataset , and to construct small subsets of a large set of implications , or of functional dependencies , from which the whole set can be derived ; in closure spaces and in data mining these small sets are usually called `` bases '' , whereas in dependency theory they are called `` covers '' , and they are closely related to deep topics such as hypergraph theory .associated natural notions of minimality ( when no implication can be removed ) , minimum size , and canonicity of a cover or basis do exist ; again it is inappropriate to try to give a complete set of references here , but see , for instance , , , , , , , , , , and the references therein .however , the fact has been long acknowledged ( e.g. 
already in ) that , often , it is inappropriate to search only for absolute implications in the analysis of real world datasets .partial rules are defined in relation to their `` confidence '' : for a given rule , the ratio of how often and are seen together to how often is seen .many other alternative measures of intensity of implication exist , ; we keep our focus on confidence because , besides being among the most common ones , it has a natural interpretation for educated users through its correspondence with the observed conditional probability .the idea of restricting the exploration for association rules to frequent itemsets , with respect to a support threshold , gave rise to the most widely discussed and applied algorithm , called apriori , and to an intense research activity .already with full - confidence implications , the output of an association mining process often consists of large sets of rules , and a well - known difficulty in applied association rule mining lies in that , on large datasets , and for sensible settings of the confidence and support thresholds and other parameters , huge amounts of association rules are often obtained . therefore , besides the interesting progress in the topic of how to organize and query the rules discovered ( see , , ) , one research topic that has been worthy of attention is the identification of patterns that indicate redundancy of rules , and ways to avoid that redundancy ; and each proposed notion of redundancy opens up a major research problem , namely , to provide a general method for constructing bases of minimum size with respect to that notion of redundancy . for partial rules ,the armstrong schemes are not valid anymore .reflexivity does hold , but transitivity takes a different form that affects the confidence of the rules : if the rule ( or , which is equivalent ) and the rule both hold with confidence at least , we still know nothing about the confidence of ; even the fact that both and hold with confidence at least only gives us a confidence lower bound of for ( assuming ) .augmentation does not hold at all ; indeed , enlarging the antecedent of a rule of confidence at least may give a rule with much smaller confidence , even zero : think of a case where most of the times appears it comes with , but it only comes with when is not present ; then the confidence of may be high whereas the confidence of may be null . similarly, if the confidence of is high , it means that and appear together in most of the transactions having , whence the confidences of and are also high ; but , with respect to the converse , the fact that both and appear in fractions at least of the transactions having does not inform us that they show up _ together _ at a similar ratio of these transactions : only a ratio of is guaranteed as a lower bound .in fact , if we look only for association rules with singletons as consequents ( as in some of the analyses in , or in the `` basic association rules '' of , or even in the traditional approach to association rules and the useful apriori implementation of borgelt available on the web ) we are almost certain to lose information . as a consequence of these failures of the armstrong schemes , the canonical and minimum - size cover construction methods available for implications or functional dependencies are not appropriate for partial association rules . 
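To make the failure of augmentation concrete, the following small Python sketch computes confidences by brute force on an invented dataset of the kind described above; the item names, the helper functions and the dataset itself are illustrative assumptions, not anything taken from the original sources:

def support(dataset, itemset):
    # number of transactions containing the itemset
    itemset = set(itemset)
    return sum(1 for t in dataset if itemset <= t)

def confidence(dataset, antecedent, consequent):
    # observed conditional frequency of the consequent given the antecedent;
    # by the usual convention, an antecedent that never occurs gives confidence 1
    s_ant = support(dataset, antecedent)
    if s_ant == 0:
        return 1.0
    return support(dataset, set(antecedent) | set(consequent)) / s_ant

# hypothetical dataset: item a almost always comes with b,
# but never when z is also present
data = [set("ab")] * 9 + [set("az")]
print(confidence(data, {"a"}, {"b"}))         # 0.9: a -> b has high confidence
print(confidence(data, {"a", "z"}, {"b"}))    # 0.0: enlarging the antecedent ruins it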
on the semantic side , a number of formalizations of the intuition of redundancy among association rules exist in the literature , often with proposals for defining irredundant bases ( see , , , , , , , the survey , and section 6 of the survey ) . all of these are weaker than the notion that we would consider natural by comparison with implications ( of which we start the study in the last section of this paper ) . we observe here that one may wish to fulfill two different roles with a basis , and that both appear ( somewhat mixed ) in the literature : as a computer - supported data structure from which confidences and supports of rules are computed ( a role for which we use the closures lattice instead ) or , in our choice , as a means of providing the user with a smallish set of association rules for examination and , if convenient , subsequent enumeration of the rules that follow from each rule in the basis . that is , we will not assume to have available , nor to wish to compute , exact values for the confidence , but only to discern whether it stays above a certain user - defined threshold . we compute actual confidences out of the closure lattice only at the time of writing out rules for the user . this paper focuses mainly on several such notions of redundancy , defined in a rather general way , by resorting to confidence and support inequalities : essentially , a rule is redundant with respect to another if it has at least the same confidence and support as the latter _ for every dataset_. we also discuss variants of this proposal and other existing definitions given in set - theoretic terms . for the most basic notion of redundancy , we provide formal proofs of the so far unstated equivalence among several published proposals , including a syntactic calculus and a formal proof of the fact , also previously unknown , that the existing basis known as the essential rules or the representative rules ( , , ) is of absolutely minimum size . it is natural to wish further progress in reducing the size of the basis . our theorems indicate that , in order to reduce the size further without losing information , more powerful notions of redundancy must be deployed . we consider for this role the proposal of handling separately , to a given extent , full - confidence implications from lower - than-1-confidence rules , in order to profit from their very different combinatorics . this separation is present in many constructions of bases for association rules , , . we discuss corresponding notions of redundancy and completeness , and prove new properties of these notions ; we give a sound and complete deductive calculus for this redundancy ; and we refine the existing basis constructions up to a point where we can prove again that we attain the limit of the redundancy notion . next , we discuss yet another potential for strengthening the notion of redundancy . so far , all the notions have just related one partial rule to another , possibly in the presence of full implications . is it possible to combine two partial rules , of confidence at least the threshold , and still obtain a partial rule obeying that confidence level ?
whereas the intuition is that these confidences will combine together to yield a confidence lower than the threshold , we prove that there is a specific case where a rule of confidence at least the threshold is nontrivially entailed by two of them . we fully characterize this case and obtain from the characterization yet another deduction scheme . we hope that further progress along the notion of a _ set _ of partial rules entailing a partial rule will be made along the coming years . preliminary versions of the results in sections [ dedplain ] , [ redundcalculus ] , [ clocalcsoundcompl ] , and [ closbasedent ] have been presented at discovery science 2008 ; preliminary versions of the remaining results ( except those in section [ suppbound ] , which are newer and unpublished ) have been presented at ecmlpkdd 2008 . our notation and terminology are quite standard in the data mining literature . all our developments take place in the presence of a `` universe '' set of atomic elements called _ items _ ; their absence or presence in sets of items plays the same role as binary - valued attributes of a relational table . subsets of the universe are called _ itemsets _ . a dataset D is assumed to be given ; it consists of transactions , each of which is an itemset labeled by a unique transaction identifier . the identifiers allow us to distinguish among transactions even if they share the same itemset . upper - case , often subscripted letters from the end of the alphabet , like X or Y , denote itemsets . juxtaposition denotes union of itemsets , as in XY ; ⊂ denotes _ proper _ subsets , whereas ⊆ is used for the usual subset relationship with potential equality . for a transaction t , we write X ⊆ t to denote the fact that X is a subset of the itemset corresponding to t , that is , the transaction satisfies the minterm corresponding to X in the propositional logic sense . from the given dataset D we obtain a notion of support of an itemset : s_D(X) is the cardinality of the set of transactions that include it ; sometimes , abusing language slightly , we also refer to that set of transactions itself as support . whenever D is clear , we drop the subindex : s(X) . observe that s(X) ≥ s(Y) whenever X ⊆ Y ; this is immediate from the definition . note that many references resort to a normalized notion of support by dividing by the dataset size . we chose not to , but there is no essential issue here . often , research work in data mining assumes that a threshold on the support has been provided and that only sets whose support is above the threshold ( then called `` frequent '' ) are to be considered . we will require this additional constraint occasionally for the sake of discussing the applicability of our developments . we immediately obtain by standard means ( see , for instance , or ) a notion of closed itemsets , namely , those that can not be enlarged while maintaining the same support . the function that maps each itemset to the smallest closed set that contains it is known to be monotonic , extensive , and idempotent , that is , it is a closure operator . this notion will be reviewed in more detail later on . closed sets whose support is above the support threshold , if given , are usually termed closed frequent sets . association rules are pairs of itemsets , denoted as X → Y for itemsets X and Y . intuitively , they suggest the fact that Y occurs particularly often among the transactions in which X occurs . more precisely , each such rule has a confidence associated : the confidence c_D(X → Y) of an association rule X → Y in a dataset D is s_D(XY)/s_D(X) . as with support , often we drop the subindex .
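The support-related notions just introduced can be mirrored by a brute-force sketch; the toy dataset and the function names below are our own illustrative choices, and the exhaustive enumeration is only meant to track the definitions, not to be efficient:

from itertools import combinations

def support(dataset, itemset):
    # cardinality of the set of transactions that include the itemset (unnormalized)
    itemset = set(itemset)
    return sum(1 for t in dataset if itemset <= t)

def all_itemsets(universe):
    # every subset of the universe of items, the empty set included
    items = sorted(set(universe))
    for k in range(len(items) + 1):
        for combo in combinations(items, k):
            yield frozenset(combo)

def frequent_itemsets(dataset, universe, min_support):
    # itemsets whose support reaches the given threshold
    return [x for x in all_itemsets(universe) if support(dataset, x) >= min_support]

def is_closed(dataset, universe, itemset):
    # closed: no strictly larger itemset keeps exactly the same support
    s = support(dataset, itemset)
    return all(support(dataset, set(itemset) | {i}) < s
               for i in universe if i not in itemset)

# hypothetical toy dataset over the items a, b, c
data = [set("abc"), set("ab"), set("ab"), set("c")]
print(frequent_itemsets(data, "abc", 2))    # the empty set, {a}, {b}, {c} and {a, b}
print(is_closed(data, "abc", {"a"}))        # False: {a, b} has the same support
print(is_closed(data, "abc", {"a", "b"}))   # True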
the support in of the association rule is .we can switch rather freely between right - hand sides that include the left - hand side and right - hand sides that do nt : rules and are _ equivalent by reflexivity _ if and . clearly , and , likewise , for any ; that is , the support and confidence of rules that are equivalent by reflexivity always coincide . a minor notational issue that we must point out is that , in some references , the left - hand side of a rule is required to be a subset of the right - hand side , as in or , whereas many others require the left- and right - hand sides of an association rule to be disjoint , such as or the original .both the rules whose left - hand side is a subset of the right - hand side , and the rules that have disjoint sides , may act as canonical representatives for the rules equivalent to them by reflexivity .we state explicitly one version of this immediate fact for later reference : [ disjointness ] if rules and are equivalent by reflexivity , , and , then they are the same rule : and . in general , we do allow , along our development , rules where the left - hand side , or a part of it , appears also at the right - hand side , because by doing so we will be able to simplify the mathematical arguments. we will assume here that , at the time of printing out the rules found , that is , for user - oriented output , the items in the left - hand side are removed from the right - hand side ; accordingly , we write our rules sometimes as to recall this convention . also , many references require the right - hand side of an association rule to be nonempty , or even both sides .however , empty sets can be handled with no difficulty and do give meaningful , albeit uninteresting , rules .a partial rule with an empty right - hand side is equivalent by reflexivity to , or to for any , and all of these rules have always confidence 1 .a partial rule with empty left - hand side , as employed , for instance , in , actually gives the normalized support of the right - hand side as confidence value : in a dataset of transactions , .[ emptysides ] again , these sorts of rules could be omitted from user - oriented output , but considering them conceptually valid simplifies the mathematical development .we also resort to the convention that , if ( which implies that as well ) we redefine the undefined confidence as 1 , since the intuitive expression `` all transactions having do have also '' becomes vacuously true .this convention is irrespective of whether . throughout the paper , `` _ implications _ ''are association rules of confidence 1 , whereas `` _ partial rules _ '' are those having a confidence below 1 . when the confidence could be 1 or could be less , we say simply `` _ rule _ '' .we start our analysis from one of the notions of redundancy defined formally in . the notion is employed also , generally with no formal definition , in several papers on association rules , which subsequently formalize and study just some particular cases of redundancy ( e.g. , ) ; thus , we have chosen to qualify this redundancy as `` standard '' .we propose also a small variation , seemingly less restrictive ; we have not found that variant explicitly defined in the literature , but it is quite natural . 1 . has _ standard redundancy _ with respect to if the confidence and support of are larger than or equal to those of , in _ all _ datasets .2 . 
has _ plain redundancy _ with respect to if the confidence of is larger than or equal to the confidence of , in _ all _ datasets .[ plainreddefsa ] generally , we will be interested in applying these definitions only to rules where since , otherwise , for all datasets and the rule is trivially redundant .we state and prove separately , for later use , the following new technical claim : assume that rule is plainly redundant with respect to rule , and that . then .[ suppfromconf ] assume , to argue the contrapositive . then , we can consider a dataset consisting of one transaction and , say , transactions .no transaction includes , therefore ; however , is either 1 or , which can be pushed up as much as desired by simply increasing .then , plain redundancy does not hold , because it requires to hold for all datasets whereas , for this particular dataset , the inequality fails . the first use of this lemma is to show that plain redundancy is not , actually , weaker than standard redundancy .consider any two rules and where .then has standard redundancy with respect to if and only if has plain redundancy with respect to .[ plainredboundssupport ] standard redundancy clearly implies plain redundancy by definition .conversely , plain redundancy implies , first , by definition and , further , by lemma [ suppfromconf ] ; this implies in turn , for all datasets , and standard redundancy holds .the reference also provides two more direct definitions of redundancy : 1 . if and , rule is _ simply redundant _ with respect to .if and , rule is _ strictly redundant _ with respect to .[ plainreddefsb ] simple redundancy in is explained as a potential connection between rules that come from the same frequent set , in our case .the formal definition is not identical to our rendering : in its original statement in , rule is _ simply redundant _ with respect to , provided that .the reason is that , in that reference , rules are always assumed to have disjoint sides , and then both formalizations are clearly equivalent .we do not impose disjointness , so that the natural formalization of their intuitive explanation is as we have just stated in definition [ plainreddefsb ] .the following is very easy to see ( and is formally proved in ) . both simple and strict redundancies imply standard redundancy .[ simpleandstrict ] note that , in principle , there could possibly be many other ways of being redundant beyond simple and strict redundancies : we show below , however , that , in essence , this is not the case .we can relate these notions also to the _ cover operator _ of : [ kryszcover ] rule covers rule when and . 
here , again , the original definition , according to which rule covers rule if and ( plus some disjointness and nonemptiness conditions that we omit ) is appropriate for the case of disjoint sides .the formalization we give is stated also in as a property that characterizes covering .both simple and strict redundancies become thus merged into a single definition .we observe as well that the same notion is also employed , without an explicit name , in .again , it should be clear that , in definition [ kryszcover ] , the covered rule is indeed plainly redundant : whatever the dataset , changing from to the confidence stays equal or increases since , in the quotient that defines the confidence of a rule , the numerator can not decrease from to , whereas the denominator can not increase from to .also , the proposals in definition [ plainreddefsb ] and [ kryszcover ] are clearly equivalent : rule covers rule if and only if rule is either simply redundant or strictly redundant with respect to , or they are equivalent by reflexivity .it turns out that all these notions are , in fact , fully equivalent to plain redundancy ; indeed , the following converse statement is a main new contribution of this section : assume rule is plainly redundant with respect to , where .then rule covers rule .[ plainredcover ] by lemma [ suppfromconf ] , . to see the other inclusion , , assume to the contrary that .then we can consider a dataset in which one transaction consists of and , say , transactions consist of .since , these transactions do not count towards the supports of or , so that the confidence of is 1 ; also , is not adding to the support of since .as , exactly one transaction includes , so that , which can be made as low as desired . this would contradict plain redundancy .hence , plain redundancy implies the two inclusions in the definition of cover . combining the statements so far ,we obtain the following characterization : consider any two rules and where .the following are equivalent : 1 . and ( that is , rule covers rule ) ; 2 .rule is either simply redundant or strictly redundant with respect to rule , or they are equivalent by reflexivity ; 3 .rule is plainly redundant with respect to rule ; 4 .rule is standard redundant with respect to rule .[ plainredchar ] marginally , we note here an additional strength of the proofs given .one could consider attempts at weakening the notion of plain redundancy by allowing for a `` margin '' or `` slack '' , appropriately bounded , but whose value is independent of the dataset , upon comparing confidences .the slack could be additive or multiplicative : conditions such as or , for all and for independent of , could be considered .however , such approaches do not define different redundancy notions : they result in formulations actually equivalent to plain redundancy .this is due to the fact that the proofs in lemma [ suppfromconf ] and theorem [ plainredcover ] show that the gap between the confidences of rules that do not exhibit redundancy can be made as large as desired within . likewise ,if we fix a confidence threshold beforehand and use it to define redundancy as for all , again an equivalent notion is obtained , independently of the concrete value of ; whereas , for , this is , instead , a characterization of armstrong derivability . 
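The covering relation admits a one-line test. In the sketch below, a rule is a hypothetical (antecedent, consequent) pair of sets, and the two inclusions are the ones recoverable from the confidence argument above: the covering rule must have an antecedent included in that of the covered rule, and the union of its two sides must include the union of the two sides of the covered rule. Names and example rules are ours:

def covers(covering_rule, covered_rule):
    # covering_rule makes covered_rule plainly redundant:
    # smaller (or equal) antecedent, larger (or equal) union of both sides
    (x1, y1), (x0, y0) = covering_rule, covered_rule
    return set(x1) <= set(x0) and set(x0) | set(y0) <= set(x1) | set(y1)

# hypothetical rules over items a, b, c:
# a -> bc covers ab -> c (simple redundancy) and a -> b (strict redundancy)
print(covers(({"a"}, {"b", "c"}), ({"a", "b"}, {"c"})))   # True
print(covers(({"a"}, {"b", "c"}), ({"a"}, {"b"})))        # True
print(covers(({"a"}, {"b"}), ({"a"}, {"b", "c"})))        # False: the would-be covered rule says strictly more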
from the characterization just given, we extract now a sound and complete deductive calculus .it consists of three inference schemes : right - hand reduction ( ) , where the consequent is diminished ; right - hand augmentation ( ) , where the consequent is enlarged ; and left - hand augmentation ( ) , where the antecedent is enlarged . as customary in logic calculi, our rendering of each rule means that , if the facts above the line are already derived , we can immediately derive the fact below the line . we also allow always to state trivial rules : clearly , scheme could be stated equivalently with below the line , by : in fact , is exactly the simple redundancy from definition [ plainreddefsb ] and , in the cases where , it provides a way of dealing with one direction of equivalence by reflexivity ; the other direction is a simple combination of the other two schemes .the reduction scheme allows us to `` lose '' information from the right - hand side ; it corresponds to strict redundancy . as further alternative options ,it is easy to see that we could also join and into a single scheme : but we consider that this option does not really simplify , rather obscures a bit , the proof of our corollary [ calculus0complete ] below .also , we could allow as trivial rules whenever , which includes the case of ; such rules also follow from the calculus given by combining with and .the following can be derived now from corollary [ plainredchar ] : the calculus given is sound and complete for plain redundancy ; that is , rule is plainly redundant with respect to rule if and only if can be derived from using the inference schemes , , and .[ calculus0complete ] soundness , that is , all rules derived are plainly redundant , is simple to argue by checking that , in each of the inference schemes , the confidence of the rule below the line is greater than or equal to the confidence of the rule above the line : these facts are actually the known statements that each of equivalence by reflexivity , simple redundancy , and strict redundancy imply plain redundancy .also , trivial rules with empty right - hand side always hold . to show completeness , assume that rule is plainly redundant with respect to rule .if , apply and use to copy and , if necessary , to leave just in the right - hand side .if , by corollary [ plainredchar ] , we know that this implies that and . now , to infer from , we chain up applications of our schemes as follows : where the second step makes use of the inclusion , and the last step makes use of the inclusion . 
here ,the standard derivation symbol denotes derivability by application of the scheme indicated as a subscript .we note here that proposes a simpler calculus that consists , essentially , of ( called there `` weak left augmentation '' ) and ( called there `` decomposition '' ) .the point is that these two schemes are sufficient to prove completeness of the `` representative basis '' as given in that reference , due to the fact that , in that version , the rules of the representative basis include the left - hand side as part of the right - hand side ; but such a calculus is incomplete with respect to plain redundancy because it offers no rule to move items from left to right .a basis is a way of providing a shorter list of rules for a given dataset , with no loss of information , in the following sense : given a set of rules , is a _ complete basis _ if every rule of is plainly redundant with respect to some rule of .bases are analogous to covers in functional dependencies , and we aim at constructing bases with properties that correspond to minimum size and canonical covers . the solutions for functional dependencies , however , are not valid for partial rules due to the failure of the armstrong schemes . in all practical applications , is the set of all the rules `` mined from '' a given dataset at a confidence threshold $ ] .that is , the basis is a set of rules that hold with confidence at least in , and such that each rule holds with confidence at least in if and only if it is plainly redundant with respect to some rule of ; equivalently , the rules in can be inferred from through the corresponding deductive calculus . all along this paper , such a confidence threshold is denoted , and always .we will employ two simple but useful definitions . fix a dataset .given itemsets and , is a -_antecedent _ for if , that is , .note that we allow , that is , the set itself as its own -antecedent ; this is just to simplify the statement of the following rather immediate lemma : if is a -_antecedent _ for and , then is a -_antecedent _ for and is a -_antecedent _ for .[ chain ] from we have , so that .the lemma follows .we make up for proper antecedents as part of the next notion : fix a dataset .given itemsets and ( _ proper subset _ ) , is a _ valid _-_antecedent _ for if the following holds : 1 . is a -antecedent of , 2 .no proper subset of is a -antecedent of , and 3 .no proper superset of has as a -antecedent .[ validants ] the basis we will focus on now is constructed from each and each valid antecedent of ; we consider that this is the most clear way to define and study it , and we explain below why it is essentially identical to two existing , independent proposals .fix a dataset and a confidence threshold .the _ representative rules _ for at confidence are all the rules for all itemsets and for all valid -antecedents of .[ bcero ] in the following , we will say `` let be a representative rule '' to mean `` let be a set having valid -antecedents , and let be one of them '' ; the parameter will always be clear from the context . note that some sets may not have valid antecedents , and then they do not generate any representative rules . 
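A brute-force rendering of this definition may help fix ideas; the dataset, the threshold and every function name below are illustrative assumptions, and the exhaustive search over all itemsets is only feasible for toy data:

from itertools import combinations

def support(dataset, itemset):
    itemset = set(itemset)
    return sum(1 for t in dataset if itemset <= t)

def all_itemsets(universe):
    items = sorted(set(universe))
    return [frozenset(c) for k in range(len(items) + 1)
            for c in combinations(items, k)]

def is_antecedent(dataset, x, y, gamma):
    # x (assumed to be a subset of y) reaches confidence gamma for y
    sx = support(dataset, x)
    return sx > 0 and support(dataset, y) / sx >= gamma

def representative_rules(dataset, universe, gamma):
    # rules built from valid antecedents: minimal antecedents of maximal consequents
    sets = all_itemsets(universe)
    rules = []
    for y in sets:
        for x in sets:
            if not (x < y and is_antecedent(dataset, x, y, gamma)):
                continue
            if any(x2 < x and is_antecedent(dataset, x2, y, gamma) for x2 in sets):
                continue   # some proper subset already works as antecedent
            if any(y < y2 and is_antecedent(dataset, x, y2, gamma) for y2 in sets):
                continue   # x also reaches a strictly larger consequent
            rules.append((set(x), set(y - x)))   # printed with disjoint sides
    return rules

# hypothetical toy dataset
data = [set("abc"), set("abc"), set("ab"), set("ac"), set("b"), set("c")]
for ant, cons in representative_rules(data, "abc", 0.7):
    print(sorted(ant), "->", sorted(cons))     # a->b, b->a, a->c, c->a, bc->a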
by the conditions on valid antecedents in representative rules ,the following relatively simple but crucial property holds ; beyond the use of our corollary [ plainredchar ] , the argument follows closely that of related facts in : let rule be among the representative rules for at confidence .assume that it is plainly redundant with respect to rule , also of confidence at least ; then , they are equivalent by reflexivity and , in case , they are the same rule .[ equivrepr ] let be a representative rule , so that and is a valid -antecedent of .by corollary [ plainredchar ] , must cover : .as , is a -antecedent of .we first show that ; assume , and apply lemma [ chain ] to : is also a -antecedent of , and the minimality of valid -antecedent gives us . is , thus , a -antecedent of which properly includes , contradicting the third property of valid antecedents . hence , , so that is a -antecedent of ; but again is a minimal -antecedent of , so that necessarily , which , together with , proves equivalence by reflexivity . under the additional condition ,both rules coincide as per proposition [ disjointness ] .it easily follows that our definition is equivalent to the definition given in , except for a support bound that we will explain later ; indeed , we will show in section [ suppbound ] that all our results carry over when a support bound is additionally enforced . fix a dataset and a confidence threshold .let .the following are equivalent : 1 .rule is among the representative rules for at confidence ; 2 . and there does not exist any other rule with , of confidence at least in , that covers .[ charreprbasisk ]let rule be among the representative rules for at confidence , and let rule cover it , while being also of confidence at least and with .then , by corollary [ plainredchar ] makes plainly redundant , and by proposition [ equivrepr ] they must coincide . to show the converse , we must see that is a representative rule under the conditions given .the fact that gives that is a -antecedent of , and we must see its validity .assume that a proper subset is also a -antecedent of : then the rule would be a different rule of confidence at least covering , which can not be .similarly , assume that is a -antecedent of where : then the rule would be a different rule of confidence at least covering , which can not be either . similarly , and with the same proviso regarding support , our definition is equivalent to the `` essential rules '' of .there , the set of minimal -antecedents of a given itemset is termed its `` boundary '' .the following statement is also easy to prove : fix a dataset and a confidence threshold .let .the following are equivalent : 1 .rule is among the representative rules for at confidence ; 2 . 
is in the boundary of but is not in the boundary of any proper superset of ; that is , is a minimal -antecedent of but is not a minimal -antecedent of any itemset strictly containing .[ charreprbasisay ] if is among the representative rules , must be a minimal -antecedent of by the conditions of valid antecedents ; also , is not a -antecedent at all ( and , thus , not a minimal -antecedent ) of any properly including .conversely , assume that is in the boundary of but is not in the boundary of any proper superset of ; first , must be a minimal -antecedent of so that the first two conditions of valid -antecedents hold .assume that is not among the representative rules ; the third property must fail , and must be a -antecedent of some with .our hypotheses tell us that is not a _ minimal _ -antecedent of .that is , there is a proper subset that is also a -antecedent of .it suffices to apply lemma [ chain ] to to reach a contradiction , since it implies that is a -antecedent of and therefore would not be a minimal -antecedent of .the representative rules are indeed a basis : ( , ) fix a dataset and a confidence threshold , and consider the set of representative rules constructed from ; it is a complete basis : 1 .all the representative rules hold with confidence at least ; 2 .all the rules of confidence at least in are plainly redundant with respect to the representative rules .[ b0iscomplete ] the first part follows directly from the use of -antecedents as left - hand sides of representative rules . for the second part , alsoalmost immediate , suppose , and let ; since is now a -antecedent of , it must contain a minimal -antecedent of , say .let be the largest superset of such that is still a -antecedent of .thus , is among the representative rules and covers .small examples of the construction of representative rules can be found in the same references ; we also provide one below .an analogous fact is proved in through an incomplete deductive calculus consisting of the schemes that we have called and , and states that every rule of confidence at least can be inferred from the representative rules by application of these two inference schemes . since representative rules in the formulation of a right - hand side that includes the left - hand side , this inference process does not need to employ .now we can state and prove the most interesting novel property of this basis , which again follows from our main result in this section , corollary [ plainredchar ] . as indicated , representative rules were known to be irredundant with respect to simple and strict redundancy or , equivalently , with respect to covering .but , for standard redundancy , in principle there was actually the possibility that some other basis , constructed in an altogether different form , could have less rules .we can state and prove now that this is not so : there is absolutely no other way of constructing a basis smaller than this one , while preserving completeness with respect to plain redundancy , because it has absolutely minimum size among all complete bases .therefore , in order to find smaller bases , a notion of redundancy more powerful than plain ( or standard ) redundancy is unavoidably necessary . fix a dataset , andlet be the set of rules that hold with confidence in .let be an arbitrary basis , complete so that all the rules in are plainly redundant with respect to .then , must have at least as many rules as the representative rules . 
moreover ,if the rules in are such that antecedents and consequents are disjoint , then all the representative rules belong to .[ zerominimality ] by the assumed completeness of , each representative rule must be redundant with respect to some rule . by corollary [ plainredchar ] , covers . then proposition [ equivrepr ] applies : they are equivalent by reflexivity .this means and , hence uniquely identifies which representative rule it covers , if any ; hence , needs , at least , as many rules as the number of representative rules .moreover , as stated also in proposition [ equivrepr ] , if the disjointness condition holds , then both rules coincide .we consider a small example consisting of 12 transactions , where there are actually only 7 itemsets , but some of them are repeated across several transactions .we can simplify our study as follows : if is not a closed set for the dataset , that is , if it has some superset with the same support , then clearly it has no valid -antecedents ( see also fact [ bcerofromclosures ] below ) ; thus we concentrate on closed sets .figure [ smallex ] shows the example dataset and the corresponding of closures , depicted as a hasse diagram ( that is , transitive edges have been removed to clarify the drawing ) ; edges stand for the inclusion relationship . for this example, the implications can be summarized by six rules , namely , , , , , , and , which are also the representative rules at confidence 1 . at confidence , we find that , first , the left - hand sides of the six implications are still valid -antecedents even at this lower confidence , so that the implications still belong to the representative basis .then , we see that two of the closures , and , have additionally one valid -antecedent each , whereas has two .the following four rules hold : , , , and .these four rules , jointly with the six implications indicated , constitute exactly the ten representative rules at confidence 0.75 .theorem [ zerominimality ] in the previous section tells us that , for plain redundancy , the absolute limit of a basis at any given confidence threshold is reached by the set of representative rules .several studies , prominently , have put forward a different notion of redundancy ; namely , they give a separate role to the full - confidence implications , often through their associated closure operator . along this way , one gets a stronger notion of redundancy and , therefore , a possibility that smaller bases can be constructed . indeed , implications can be summarized better , because they allow for transitivity and augmentation to apply in order to find redundancies ; moreover , they can be combined in certain forms of transitivity with partial rules : as a simple example , if and , that is , if a fraction or more of the support of has and _ all _ the transactions containing do have as well , clearly this implies that .observe , however , that the directionality is relevant : from and we infer nothing about , since the high confidence of might be due to a large number of transactions that do not include .we will need some notation about closures . 
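The completeness half of these statements can be checked mechanically on toy data: the sketch below re-derives the representative rules by brute force and verifies that every rule reaching the confidence threshold is covered by one of them (minimality is exactly what the theorem above adds and is not re-checked here). Dataset, threshold and helper names are illustrative assumptions:

from itertools import combinations

def support(dataset, itemset):
    itemset = set(itemset)
    return sum(1 for t in dataset if itemset <= t)

def confidence(dataset, x, y):
    sx = support(dataset, x)
    return 1.0 if sx == 0 else support(dataset, set(x) | set(y)) / sx

def all_itemsets(universe):
    items = sorted(set(universe))
    return [frozenset(c) for k in range(len(items) + 1)
            for c in combinations(items, k)]

def covers(covering_rule, covered_rule):
    (x1, y1), (x0, y0) = covering_rule, covered_rule
    return set(x1) <= set(x0) and set(x0) | set(y0) <= set(x1) | set(y1)

def representative_rules(dataset, universe, gamma):
    sets = all_itemsets(universe)
    def ant(x, y):
        return x < y and confidence(dataset, x, y) >= gamma
    return [(x, y) for y in sets for x in sets
            if ant(x, y)
            and not any(ant(x2, y) for x2 in sets if x2 < x)
            and not any(ant(x, y2) for y2 in sets if y < y2)]

# completeness check on a hypothetical dataset: every rule of confidence >= gamma
# whose consequent adds something to its antecedent is covered by a representative rule
data = [set("abc"), set("abc"), set("ab"), set("ac"), set("b"), set("c")]
gamma = 0.7
basis = representative_rules(data, "abc", gamma)
for x in all_itemsets("abc"):
    for y in all_itemsets("abc"):
        if not y <= x and confidence(data, x, y) >= gamma:
            assert any(covers(rule, (x, y)) for rule in basis)
print(len(basis), "representative rules cover all rules at confidence", gamma)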
given a dataset , the closure operator associated to maps each itemset to the largest itemset that contains and has the same support as in : , and is as large as possible under this condition .it is known and easy to prove that exists and is unique .implications that hold in the dataset correspond to the closure operator ( , , , , ) : , and is as large as possible under this condition .equivalently , the closure of itemset is the intersection of all the transactions that contain ; this is because implies that all transactions counted for the support of are counted as well for the support of , hence , if the support counts coincide they must count exactly the same transactions . along this section , as in , we denote full - confidence implications using the standard logic notation ; thus , if and only if . a basic fact from the theory of closure spacesis that closure operators are characterized by three properties : extensivity ( ) , idempotency ( ) , and monotonicity ( if then ) . as an example of the use of these properties , we note the following simple consequence for later use : , and .[ trivial ] we omit the immediate proof .a set is closed if it coincides with its closure .usually we speak of the _ lattice _ of closed sets ( technically it is just a semilattice but it allows for a standard transformation into a lattice ) .when we also say that is a generator of ; if the closures of all proper subsets of are different from , we say that is a minimal generator .note that some references use the term `` generator '' to mean our `` minimal generator '' ; we prefer to make explicit the minimality condition in the name . in some works ,often database - inspired , minimal generators are termed sometimes `` keys '' . in other works ,often matroid - inspired , they are termed also `` free sets '' . our definition says explicitly that .we will make liberal use of this fact , which is easy to check also with other existing alternative definitions of the closure operator , as stated in , , and others .several quite good algorithms exist to find the closed sets and their supports ( see section 4 of ) .redundancy based on closures is a natural generalization of equivalence by reflexivity ; it works as follows ( , see also and section 4 in ) : given a dataset and the corresponding closure operator , two partial rules and such that and have the same support and the same confidence .[ closredcci ] the rather immediate reason is that , and .therefore , groups of rules sharing the same closure of the antecedent , and the same closure of the union of antecedent and consequent , give cases of redundancy . on account of these properties , there are some proposals of basis constructions from closed sets in the literature , reviewed below .but the first fact that we must mention to relate the closure operator with our explanations so far is the following : let be a representative rule as per definition [ bcero ] . then is a closed set and is a minimal generator . [ bcerofromclosures ] the proof is direct from definitions [ validants ] and [ bcero ] , and can be found in , , .these references employ this property to improve on the earlier algorithms to compute the representative rules , which considered all the frequent sets , by restricting the exploration to closures and minimal generators . 
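A small sketch of the closure operator and of minimal generators, using the intersection-of-transactions characterization recalled above; the toy dataset and the names are invented for illustration:

def closure(dataset, itemset, universe):
    # intersection of all transactions containing the itemset;
    # by convention, the whole universe if no transaction contains it
    containing = [t for t in dataset if set(itemset) <= t]
    if not containing:
        return frozenset(universe)
    result = set(universe)
    for t in containing:
        result &= t
    return frozenset(result)

def is_minimal_generator(dataset, itemset, universe):
    # minimal generator: removing any single item changes the closure
    # (by monotonicity, checking one-item removals suffices)
    c = closure(dataset, itemset, universe)
    return all(closure(dataset, set(itemset) - {i}, universe) != c for i in set(itemset))

# hypothetical toy dataset
universe = set("abcd")
data = [set("abc"), set("abd"), set("ab"), set("cd")]

print(sorted(closure(data, {"a"}, universe)))              # ['a', 'b']: a always comes with b
print(is_minimal_generator(data, {"a"}, universe))          # True
print(is_minimal_generator(data, {"a", "b"}, universe))     # False: {a} has the same closure

# the three characteristic properties: extensive, idempotent, monotone
x, y = {"a"}, {"a", "c"}
assert set(x) <= closure(data, x, universe)
assert closure(data, closure(data, x, universe), universe) == closure(data, x, universe)
assert closure(data, x, universe) <= closure(data, y, universe)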
alsothe authors of do the same , seemingly unaware that the algorithm in already works just with closed itemsets .fact [ bcerofromclosures ] may shed doubts on whether closure - based redundancy actually can lead to smaller bases .we prove that this is sometimes the case , due to the fact that the redundancy notion itself changes , and allows for a form of transitivity , which we show can take again the form of a deductive calculus .then , we will be able to refine the notion of valid antecedent of the previous section and provide a basis for which we can prove that it has the smallest possible size among the bases for partial rules , with respect to closure - based completeness .that is , we will reach the limit of closure - based redundancy in the same manner as we did for standard redundancy in the previous section .let be the set of implications in the dataset ; alternatively , can be any of the bases already known for implications in a dataset . in our empirical validations below we have used as the guigues - duquenne basis , or gd - basis , that has been proved to be of minimum size , .an apparently popular and interesting alternative , that has been rediscovered over and over in different guises , is the so - called _ iteration - free basis _ of , which coincides with the proposal in and with the _ exact min - max basis _ of ( also called sometimes _ generic basis _ ) ; because of fact [ bcerofromclosures ] , it coincides exactly also with the representative rules of confidence 1 , that is : implications that are not plainly redundant with any other implication according to definition [ plainreddefsa ] .also , it coincides with the `` closed - key basis '' for frequent sets in , which in principle is not intended as a basis for rules , and has a different syntactic sugar , but differs in essence from the iteration - free basis only in the fact that the support of each rule is explicitly recorded together with it .closure - based redundancy takes into account as follows : let be a set of implications .partial rule has _ closure - based redundancy relative to _ with respect to rule , denoted , if any dataset in which all the rules in hold with confidence 1 gives .[ closredundancy ] in some cases , it might happen that the dataset at hand does not satisfy any nontrivial rule with confidence 1 ; then , this notion will not be able to go beyond plain redundancy .however , it is usual that some full - confidence rules do hold , and , in these cases , as we shall see , closure - based redundancy may give more economical bases .more generally , all our results only depend on the implications reaching indeed full confidence in the dataset ; but they are not required to capture all of these : the implications in ( with their consequences according to the armstrong schemes ) could constitute just a part of the full - confidence rules in the dataset . in particular , plain redundancy reappears by choosing , whether the dataset satisfies or not any full - confidence implication .we continue our study by showing a necessary and sufficient condition for closure - based redundancy , along the same lines as the one in the previous section .let be a set of exact rules , with associated closure operator mapping each itemset to its closure .let be a rule not implied by , that is , where .then , the following are equivalent : 1 . and ; 2 . 
.[ closredchar ] the direct proof is simple : the inclusions given imply that and ; then .conversely , for , we argue that , if either of and fails , then there is a dataset where holds with confidence 1 and holds with high confidence but the confidence of is low .we observe first that , in order to satisfy , it suffices to make sure that all the transactions in the dataset we are to construct are closed sets according to the closure operator corresponding to .assume now that : then a dataset consisting only of one or more transactions with itemset satisfies ( vacuously ) with confidence 1 but , given that , leads to confidence zero for .it is also possible to argue without resorting to vacuous satisfaction : simply take one transaction consisting of and , in case this transaction satisfies , obtain as low a confidence as desired for by adding as many transactions as necessary ; these will not change the confidence of since . then consider the case where , whence the other inclusion fails : .consider a dataset of , say , transactions , where one transaction consists of the itemset and transactions consist of the itemset .the confidence of is at least , which can be made as close to 1 as desired by increasing , whereas the presence of at least one and no transaction at all containing gives confidence zero to .thus , in either case , we see that redundancy does not hold .we provide now a stronger calculus that is sound and complete for this more general case of closure - based redundancy . for clarity, we chose to avoid the closure operator in our deduction schemes , writing instead explicitly each implication .our calculus for closure - based redundancy consists of four inference schemes , each of which reaches a partial rule from premises including a partial rule .two of the schemes correspond to variants of augmentation , one for enlarging the antecedent , the other for enlarging the consequent .the other two correspond to composition with an implication , one in the antecedent and one in the consequent : a form of controlled transitivity .their names , , , and indicate whether they operate at the right or left - hand side and whether their effect is augmentation or composition with an implication . again we allow to state rules with empty right - hand side directly : alternatively , we could state trivial rules with a subset of the left - hand side at the right - hand side .note that this opens the door to using with an empty , and this allows us to `` downgrade '' an implication into the corresponding partial rule .again , could be stated equivalently as like in section [ dedplain ] .in fact , the whole connection with the simpler calculus in section [ dedplain ] should be easy to understand : first , observe that the rules are identical . now ,if implications are not considered separately , the closure operator trivializes to identity , for every , and the only cases where we know that are those where ; we see that corresponds , in that case , to , whereas the schemes only differ on cases of equivalence by reflexivity .finally , in that case becomes fully trivial since becomes and , together with , leads to : then , the partial rules above and below the line would coincide . 
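To see the characterization at work, the sketch below takes the implications as explicit pairs, computes closures under them by the usual fixpoint iteration, and tests the two inclusions as they can be recovered from the proof above: the antecedent of the entailing rule must fall inside the closure of the antecedent of the entailed one, and the entailed rule's union of sides must fall inside the closure of the entailing rule's union of sides. Implications, rules and item names are made-up examples:

def closure_under_implications(itemset, implications):
    # smallest superset of the itemset closed under the given implications,
    # each implication being a pair (antecedent, consequent) of item sets
    closed = set(itemset)
    changed = True
    while changed:
        changed = False
        for ant, cons in implications:
            if set(ant) <= closed and not set(cons) <= closed:
                closed |= set(cons)
                changed = True
    return closed

def closure_redundant(implications, entailing_rule, entailed_rule):
    # entailed_rule is closure-based redundant, relative to the implications,
    # with respect to entailing_rule
    (x1, y1), (x0, y0) = entailing_rule, entailed_rule
    cl = lambda s: closure_under_implications(s, implications)
    return set(x1) <= cl(x0) and set(x0) | set(y0) <= cl(set(x1) | set(y1))

# hypothetical example: with the implication b -> c available,
# the partial rule a -> b entails the partial rule a -> c ...
imps = [({"b"}, {"c"})]
print(closure_redundant(imps, ({"a"}, {"b"}), ({"a"}, {"c"})))                 # True
# ... but directionality matters: the implication a -> b together with the
# partial rule b -> c does not entail a -> c
print(closure_redundant([({"a"}, {"b"})], ({"b"}, {"c"}), ({"a"}, {"c"})))     # False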
similarly to the plain case , there exists an alternative deduction system , more compact , whose equivalence with our four schemes is rather easy to see .it consists of just two forms of combining a partial rule with an implication : however , in our opinion , the use of these schemes in our further developments is less intuitive , so we keep working with the four schemes above . in the remainder of this section , we denote as the fact that , in the presence of the implications in the set , rule can be derived from rule using zero or more applications of the four deduction schemes ; along such a derivation , any rule of ( or derived from by the armstrong schemes ) can be used whenever an implication of the form is required .we can characterize the deductive power of this calculus as follows : it is sound and complete with respect to the notion of closure - based redundancy ; that is , all the rules it can prove are redundant , and all the redundant rules can be proved : let consist of implications .then , if and only if rule has closure - based redundancy relative to with respect to rule : . [ calculussoundcomplete ]soundness corresponds to the fact that every rule derived is redundant : it suffices to prove it individually for each scheme ; the essentials of some of these arguments are also found in the literature . for , the inclusions prove that the partial rules above and below the line have the same confidence . for ,one has , thus and the confidence of the rule below the line is at least that of the one above , or possibly greater .scheme is unchanged from the previous section . finally , for , we have so that , and so that , and again the confidence of the rule below the line is at least the same as the confidence of the one above . to prove completeness, we must see that all redundant rules can be derived .we assume and resort to theorem [ closredchar ] : we know that the inclusions and must hold . from lemma[ trivial ] , we have that .now we can write a derivation in our calculus , taking into account these inclusions , as follows : thus , indeed the redundant rule is derivable , which proves completeness . in a similar way as we did for plain redundancy, we study here bases corresponding to closure - based redundancy .since the implications become `` factored out '' thanks to the stronger notion of redundancy , we can focus on the partial rules .a formal definition of completeness for a basis is , therefore , as follows : given a set of partial rules and a set of implications , _ closure - based completeness _ of a set of partial rules holds if every partial rule of has _ closure - based redundancy relative to _ with respect to some rule of . 
again is intended to be the set of all the partial rules `` mined from '' a given dataset at a confidence threshold ( recall that always ) , whereas is intended to be the subset of rules in that hold with confidence 1 in or , rather , a basis for these implications .there exist several proposals for constructing bases while taking into account the implications and their closure operator .we use the same intuitions and _ modus operandi _ to add a new proposal which , conceptually , departs only slightly from existing ones .its main merit is not the conceptual novelty of the basis itself but the mathematical proof that it achieves the minimum possible size for a basis with respect to closure - based redundancy , and is therefore at most as large as any alternative basis and , in many cases , smaller than existing ones .our new basis is constructed as follows . for each closed set , we will consider a number of closed sets properly included in as candidates to act as antecedents : fix a dataset , and consider the closure operator corresponding to the implications that hold in with confidence 1 .for each closed set , a closed proper subset is a _basic -antecedent _ if the following holds : 1 . is a -antecedent of : ; 2 .no proper closed subset of is a -antecedent of , and 3 .no proper closed superset of has as a -antecedent .basic antecedents follow essentially the same pattern as the valid antecedents ( definition [ validants ] ) , but restricted to closed sets only , that is , instead of minimal antecedents , we pick just minimal _ closed _ antecedents. then we can use them as before : fix a dataset and a confidence threshold . 1. the basis consists of all the rules for all closed sets and all basic -antecedents of .a _ minmax variant _ of the basis is obtained by replacing each left - hand side in by a minimal generator : that is , for a closed set , each rule becomes for one minimal generator of the ( closed ) basic -antecedent .a _ minmin variant _ of the basis is obtained by replacing by a minimal generator both the left - hand and the right - hand sides in : for each closed set and each basic -antecedent of , the rule becomes where is chosen a minimal generator of and is chosen a minimal generator of .[ mybasis ] the variants are defined only for the purpose of discussing the relationship to previous works along the next few paragraphs ; generally , we will use only the first version of . note the following : in a minmax variant , at the time of substituting a generator for the left - hand side closure , in case we consider a rule from that has a left - hand side with several minimal generators , only one of them is to be used .also , all of ( and not only ) can be removed from the right - hand side : can be used to recover it .the basis is uniquely determined by the dataset and the confidence threshold , but the variants can be constructed , in general , in several ways , because each closed set in the rule may have several minimal generators , and even several different generators of minimum size .we can see the variants as applications of our deduction schemes .the result of substituting a generator for the left - hand side of a rule is equivalent to the rule itself : in one direction it is exactly scheme , and in the other is a chained application of to add the closure to the right - hand side and to put it back in the left - hand side . 
substituting a generator for the right - hand side corresponds to scheme in both directions .the use of generators instead of closed sets in the rules is discussed in several references , such as or . in the style of , we would consider a minmax variant , which allows one to show to the user minimal sets of antecedents together with all their nontrivial consequents .in the style of , we would consider a minmin variant , thus reducing the total number of symbols if minimum - size generators are used , since we can pick any generator .each of these known bases incurs a risk of picking more than one minimum generator for the same closure as left - hand sides of rules with the same closure of the right - hand side : this is where they may be ( and , in actual cases , have been empirically found to be ) larger than , because , in a sense , they would keep in the basis all the variants .facts analogous to corollaries [ charreprbasisk ] and [ charreprbasisay ] hold as well if the closure condition is added throughout , and provide further alternative definitions of the same basis .we use one of them in our experimental setting , described in section [ experim ] .we now see that this set of rules entails exactly the rules that reach the corresponding confidence threshold in the dataset : fix a dataset and a confidence threshold .let be any basis for implications that hold with confidence 1 in . 1 .all the rules in hold with confidence at least . is a complete basis for the partial rules under closure - based redundancy .[ newsoundcomplete ] all the rules in must hold indeed because all the left - hand sides are actually -antecedents . to prove that all the partial rules that hold are entailed by rules in , assume that indeed holds with confidence , that is , ; thus is a -antecedent of . if , then and the implication will follow from ; we have to discuss only the case where , which implies that . consider the family of closed sets that include and have as -antecedent ; it is a nonempty family , since fulfills these conditions .pick maximal in that family . then since and .now , is a -antecedent of , but not of any strictly larger closed itemset . also , any subset of is a proper subset of .let be closed , a -antecedent of , and minimal with respect to these properties ; assume that is a -antecedent of a closed set strictly larger than . 
from andlemma [ chain ] , would be also a -antecedent of , which would contradict the maximality of .therefore , can not be a -antecedent of a closed set strictly larger than and , together with the facts that define , we have that is a basic -antecedent of whence .we gather the following inequalities : and ; this is exactly what we need to infer that from theorem [ closredchar ] .now we can move to the main result of this section : this basis has a minimum number of rules among all bases that are complete for the partial rules , according to closure - based redundancy with respect to .fix a dataset , and let be the set of rules that hold with confidence in .let be a basis for the set of implications in .let be an arbitrary basis , having closure - based completeness for with respect to .then , must have at least as many rules as .[ minimality ] first , we will prove the following intermediate claim : for each partial rule in , say , there is in a corresponding partial rule of the form with and .we pick any rule , that is , where is a basic -antecedent of ; this rule must be redundant , relative to the implications in , with respect to the new basis under consideration : for some rule , we have that which , by theorem [ closredchar ] , is the same as and , together with .we consider some support ratios : , which means that is a -antecedent of , a closed set including ; by the second condition in the definition of basic -antecedent , this can not be the case unless .then , again , , that is , is a -antecedent of , and is as well ; but and , by minimality of as a basic -antecedent of , it must be that .now , to complete the proof of the theorem , we observe that each such rule in determines univocally both closed sets and , so that the same rule in can not correspond to more than one of the rules in .this requires , therefore , to have at least as many rules as . in applications of , one needs , in general , as a basis both and a basis for the implications , such as the gd - basis . on the other hand , in many practical cases ,implications provide little new knowledge , most often just showing existing ( and known ) properties of the attributes . if a user is satisfied with the basis , and does not ask for a basis for the implications nor the representative rules , then ( s)he may get results faster , since in this case the algorithms would not need to compute minimal generators , and just mining closures and their supports ( and organizing them via the subset relation ) would suffice . 
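The following brute-force sketch mirrors the construction of the basis just analyzed: enumerate the closed sets, keep for each one the minimal closed antecedents that reach the confidence threshold and do not reach any strictly larger closed set, and emit the corresponding rules. Everything below (dataset, names, threshold) is an invented toy instance, not an efficient implementation:

from itertools import combinations

def support(dataset, itemset):
    itemset = set(itemset)
    return sum(1 for t in dataset if itemset <= t)

def closed_sets(dataset, universe):
    # closed itemsets: no strictly larger itemset has the same support (brute force)
    items = sorted(set(universe))
    candidates = [frozenset(c) for k in range(len(items) + 1)
                  for c in combinations(items, k)]
    def is_closed(x):
        s = support(dataset, x)
        return all(support(dataset, set(x) | {i}) < s for i in items if i not in x)
    return [x for x in candidates if is_closed(x)]

def closure_basis(dataset, universe, gamma):
    # rules from basic antecedents: minimal closed antecedents of maximal closed consequents
    closures = closed_sets(dataset, universe)
    def reaches(x, y):
        sx = support(dataset, x)
        return sx > 0 and support(dataset, y) / sx >= gamma
    rules = []
    for y in closures:
        for x in closures:
            if not (x < y and reaches(x, y)):
                continue
            if any(x2 < x and reaches(x2, y) for x2 in closures):
                continue   # a smaller closed antecedent already works
            if any(y < y2 and reaches(x, y2) for y2 in closures):
                continue   # x also reaches a strictly larger closed set
            rules.append((set(x), set(y - x)))
    return rules

# hypothetical toy dataset
data = [set("abc"), set("abc"), set("ab"), set("ac"), set("b"), set("c")]
for ant, cons in closure_basis(data, "abc", 0.7):
    print(sorted(ant), "->", sorted(cons))    # a->b, b->a, a->c, c->a

On this toy instance the one high-confidence rule that the representative rules would additionally contain holds with confidence 1, so it is left to the accompanying basis of implications, which is exactly the division of labor described above.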
note that the joint consideration of the gd - basis and incurs the risk of being a larger set of rules than the representative rules , due to the fact that some rules in the gd - basis could be , in fact , plainly redundant ( ignoring the closure - related issues ) with a representative rule .we have observed empirically that , at high confidence thresholds , the representative rules tend to be a large basis due to the lack of specific minimization of implications , whereas the union of the gd - basis and tends to be quite smaller ; conversely , at lower confidence levels , the availability of many partial rules increases the chances of covering a large part of the gd - basis , so that the representative rules are a smaller basis than the union of plus gd , even if they are more in number than .that is : closure - based redundancy may be either stronger or weaker , in terms of the optimum basis sizes , than plain redundancy .sometimes , even fully coincides with the partial representative rules .this is , in fact , illustrated in the following example .we revisit the example in figure [ smallex ] . as indicated at the end of section [ subsecagyubasis ] , the basis for implicationsconsists of six rules : , , , , , and ; the iteration - free basis and the guigues - duquenne basis coincide here , and these implications are also the representative rules at confidence 1 . at confidence ,these are kept and four _ representative rules _ are added : , , , and .since the four left - hand sides are , actually , closed sets , which is not guaranteed in general , the basis at this confidence includes exactly these four rules : no other closure is a basic -antecedent .however , if the confidence threshold is lowered to , we find seven rules in the basis : , , , , , and , plus the somewhat peculiar , since indeed the support of is above the same threshold ; the rules , , and also hold , but they are redundant with respect to or : and are -antecedents of but are not basic ( by way of being also -antecedents of ) , whereas is a -antecedent of but is not basic either since it is not minimal . additionally , the sizes of the rules can be reduced somewhat : suffices to give or indeed since is equivalent by reflexivity to and there is a full - confidence implication in the gd - basis that gives us .this form of reasoning is due to , and a similar argument can be made for several of the other rules .alternatively , there exists the option of omitting those implications that , seen as partial rules , are already covered by a partial rule : in this example , these are and , covered by ( but _ not _ by , which needs to infer ) ; similarly , and are plainly redundant with .in fact , it can be readily checked that the seven partial rules in plus the two remaining implications in the gd - basis , and , form exactly the representative rules at this confidence threshold . 
for many real - life datasets , including all the standard benchmarks in the field , the closure space is huge , and reaches easily hundreds of thousands of nodes , or indeed even millions .a standard practice , as explained in the introduction , is to impose a support constraint , that is , to ignore ( closed ) sets that do not appear often enough .it has been observed also that the rules removed by this constraint are often appropriately so , in that they are less robust and prone to represent statistical artifacts rather than true information .hence , we discuss briefly what happens to our basis proposal if we work under such a support constraint . for a dataset and confidence andsupport thresholds and , respectively , denote by the set of rules that hold in with confidence at least and support at least .we may want to construct either of two similar but different sets of rules : we can ask just how to compute the set of rules in that reach that support or , more likely , we may wish a minimum - size basis for .we solve both problems .we first discuss a minimum - size basis for .of course , the natural approach is to compute the rule basis exactly as before , but only using closed sets above the support threshold .indeed this works : fix a dataset . for any fixed confidence threshold and support threshold ,the construction of basic -antecedents , applied only to closed sets of support at least , provides a minimum - size basis for .consider any rule of support at least and confidence at least .then is a -antecedent of ; also , .arguing as in the proof of theorem [ newsoundcomplete ] but restricted to the closures with support at least , we can find a rule where both and have support at least , is a basic -antecedent of , and such that and so that it covers .minimum size is argued exactly as in the proof of theorem [ minimality ] : following the same steps , one proves that any complete basis consisting of rules in must have separate rules to cover each of the rules formed by basic -antecedents of closures of support .we are therefore safe if we apply the basis construction for to a lattice of frequent closed sets above support , instead of the whole lattice of closed sets .however , this fact does not ensure that the basis obtained coincides with the set of rules in the whole basis having support above . there may be rules that are not in because a large closure , of low support , prevents some from being a basic antecedent .if the large closure is pruned by the support constraint , then may become a basic antecedent .the following result explains with more precision the relationship between the basis and the rules of support .fix a dataset , a confidence threshold , and a support threshold .assume that and that ; then if and only if is a basic -antecedent of in the set of all closures of support at least .this proposition says that , in order to find , that is , the set of rules in that have support at least , we do not need to compute all the closures and construct the whole of ; it suffices to perform the construction on the set of closures of support . 
of course , in both caseswe must then discard the rules of support less than .we call this sort of process _ double - support mining _ : given user - defined and , use the product to find all closures of support , compute on these closures , and finally prune out the rules with support less than to obtain , if that is what is desired .consider a pair of closed sets with ; we must discuss whether is a basic -antecedent of in two different closure lattices : the one of all the closed sets and the one of frequent closures at support threshold .the properties of being a -antecedent and of being minimally so refer to and themselves or to even smaller sets , and are therefore unaffected by the support constraint .we must discuss just the existence of some proper superset of having as a -antecedent . in case a basic -antecedent of , no proper superset of has as -antecedent , whatever the support of ; therefore , will be found to be a basic -antecedent of also in the smaller lattice of frequent closures . to show the converse, it suffices to argue that , for any proper superset of , if is a -antecedent of , then .indeed , ; hence , if no such is found in the frequent closures lattice at support threshold , no such exists at all .whereas our interests in this paper are rather foundational , we wish to describe briefly the direct applicability of our results so far .we have chosen an approach that conveniently uses as a black - box a separate closed itemsets miner due to borgelt .we have implemented a construction of the gd basis using a hypergraph transversal method to construct representative rules of confidence 1 following the guidelines of and subsequently simplifying them to obtain the gd basis as per ; and we have implemented a simple algorithm that scans repeatedly the closed sets mined by the separate program and constructs all basic -antecedents .a first scan picks up -antecedents from the proper closed subsets and filters them for minimality ; once all minimal antecedents are there for all closures , a subsequent scan filters out those that are not basic by way of being antecedents of larger sets .effectively the algorithm does not implement the definition but the immediate extension of the characterization in corollary [ charreprbasisay ] to the closure - based case .a natural alternative consists in preprocessing the lattice as a graph in order to find the predecessors of a node directly ; however , in practice , with this alternative , whenever the graph requires too much space , we found that the computation slows down unacceptably , probably due to a worse fit to virtual memory caching .our implementation gives us answers in just seconds in most cases , on a mid - range windows xp laptop , taking a few minutes when the closure space reaches a couple dozen thousand itemsets ..number of rules in various bases for benchmark datasets . 
[ cols="<,>,>,>,>,>,>,>,>",options="header " , ] on the basis of this implementation , we have undertaken some empirical evaluations of the sizes of the basis .we consider that the key point of our contribution is the mathematical proof of absolute size minimality , but , as a mere illustration , we show the figures of some of the cases explored in in table [ maintable ] .the datasets and thresholds are set exactly as per that reference ; column s / c " is the confidence and support parameters .columns `` traditional '' ( for the number of rules under the standard traditional definition ) and `` closure - based '' ( for the number of rules obtained by the closure - based method proposed in ) are taken verbatim from the same reference .we have added the number of rules in the representative basis for implications at 100% confidence `` rrimp '' , that coincides with the iteration - free basis and other proposals as discussed at the beginning of subsection [ subscharclosred ] ; the size of the gd basis for the same implications ( often yielding huge savings ) ; and the number of rules in the basis of partial rules , which , in the totality of these cases , did coincide with the representative rules at the corresponding thresholds . as discussed in the end of section [ mybasissect ] ,representative rules encompass implications but must be taken jointly with the gd basis , so we give also the corresponding sum . the confidence chosen in for this comparison , namely , coincident with the support threshold ,is , in our opinion , too low to provide a good perspective ; at these thresholds , representative rules essentially correspond to support bounds ( rules with empty left - hand side ) . to complement the intuition , we provide the evolution of the sizes of the representative rules and the basis for the dataset pumsb - star , downloaded from , at the same support thresholds of 40% and 60% used in table [ maintable ] , with confidence ranging from 99% to 51% , at 1% granularity .the guigues - duquenne bases at these support thresholds consist of 48 and 5 rules respectively .these have been added to the size of in figures [ pumsbstar40 ] and [ pumsbstar60 ] . at these confidence tresholds ,the traditional notion of association rules gives from 105086 up to 179684 rules at support 40% , and between 268 and 570 rules at support 60% .note that , in that notion , association rules are restricted , by definition , to singleton consequents ; larger numbers would be found if this condition is lifted for a fairer comparison with the bases we study .these figures show the advantage of the closure - based basis over representative rules up to the point where the implications become subsumed by partial representative rules .we want to point out as well one interesting aspect of the figures obtained .the standard settings for association rules lead to a monotonicity property , by which lower confidence thresholds allow for more rules , so that the size of the output grows ( sometimes enormously ) as the confidence threshold decreases .however , in the case of the basis and the representative rules , some datasets exhibit a nonmonotonic evolution : at lesser confidence thresholds , sometimes less rules are obtained . 
inspecting the actual rules, we can find the reason : sometimes there are several rules at , say , 90% confidence that become simultaneously redundant due to a single rule of smaller confidence , say 85% , which does not appear at 90% confidence .this may reduce the set of rules upon lowering the confidence threshold .we move on towards a further contribution of this paper : we propose a stronger notion of redundancy , as progress towards a complete logical approach , where redundancy would play the role of entailment and a sound and complete deductive calculus is sought . considering the redundancy notions described so far, the following question naturally arises : beyond all these notions of redundancy that relate one partial rule to another partial rule , possibly in presence of implications , is it indeed possible that a partial rule is entailed jointly by two partial rules , but not by a single one of them ? and , if so , when does this happen ?we will fully answer this question below .the failures of transitivity and augmentation may suggest the intuition of a negative answer : it looks like any combination of two partial rules of confidence at least , but with , will require us to multiply confidences , reaching as low as or lower ; but this intuition is wrong .we will characterize precisely the case where , at a fixed confidence threshold , a partial rule follows from exactly two partial rules , a case where our previous calculus becomes incomplete ; and we will identify one extra deduction scheme that allows us to conclude as consequent a partial rule from two premise partial rules in a sound form . the calculus obtained is complete with respect to entailment from two premise rules .we present the whole setting in terms of closure - based redundancy , but the development carries over for plain redundancy , simply by taking the identity as closure operator .a first consideration is that we no longer have a single value of the confidence to compare ; therefore , we take a position like the one in most cases of applications of association rule mining in practice , namely : we fix a confidence threshold , and consider only rules whose confidence is above it . an alternative view, further removed from practice , would be to require just that the confidence of all our conclusions should be at least the same as the minimum of the confidences of the premises .as an example , consider the following fact ( the analogous statement for does not hold , as discussed below ) : let .assume that items , , , are present in and that the confidence of the rules and is above in dataset .then , the confidence of the rule in is also above .[ example ] we do not provide a formal proof of this claim since it is just the simplest particular case of theorem [ dospremisas ] below .we consider the following definition : given a set of implications , and a set of partial rules , rule is -redundant with respect to them ( or also -entailed by them ) , denoted , if every dataset in which the rules of have confidence 1 and the confidence of all the rules in is at least must satisfy as well with confidence at least .the entailment is called `` proper '' if it does not hold for proper subsets of ; otherwise it is `` improper '' .[ fullredundancy ] note that , in this case , the parameter is necessary to qualify the entailment relation itself . in previous sections we had a mere confidence inequality that did not depend on . the main result of this section is now : let be a set of implications , and let . 
consider three partial rules , , , and .then , if and only if either : 1 . , or 2 . , or 3 . , or 4 .all the following conditions simultaneously hold : 1 . 2 . 3 . 4 . 5 . 6 . 7 . [ dospremisas ] let us discuss first the leftwards implication . in case( 1 ) , rule holds trivially .clearly cases ( 2 ) and ( 3 ) also give ( improper ) entailment . for case( 4 ) , we must argue that , if all the seven conditions hold , then the entailment relationship also holds .thus , fix any dataset where the confidences of the premise rules are at least : these assumptions can be written , respectively , and , or equivalently for the corresponding closures .we have to show that the confidence of in is also at least .consider the following four sets of transactions from : and let , , , and be the respective cardinalities .we first argue that all four sets are mutually disjoint .this is easy for most pairs : clearly and have incompatible behavior with respect to ; and a tuple in either or has to satisfy , which makes it impossible that that tuple is accounted for in either or .the only place where we have to argue a bit more carefully is to see that and are disjoint as well : but a tuple that satisfies both and , that is , satisfies their union , must satisfy every subset of the corresponding closure as well , such as , due to condition ( v ) .hence , and are disjoint .now we bound the supports of the involved itemsets as follows : clearly , by definition of , .all tuples that satisfy are accounted for either as satisfying as well , in , or in in case they do nt ; disjointness then guarantees that .we see also that , because is satisfied by the tuples in , by definition ; by the tuples in or , by condition ( i ) ; and by the tuples in , by condition ( iii ) ; again disjointness allows us to sum all four cardinalities . similarly , using instead ( ii ) and ( iv ) , we obtain .the next delicate point is to show an upper bound on ( and on symmetrically ) .we split all the tuples that satisfy into two sets , those that additionally satisfy , and those that do nt .tuples that satisfy and not are exactly those in , and there are exactly many of them .satisfying and is the same as satisfying by condition ( i ) , and tuples that do it must also satisfy by condition ( vi ) .therefore , they satisfy both and , must belong to , and there can be at most many of them .that is , and , symmetrically , resorting to ( ii ) and ( vii ) , .thus we can write the following inequations : adding them up , using , we get that is , , so that as was to be shown .now we prove the rightwards direction ; the bound is not necessary for this part .since all our supports are integers , we can assume that the threshold is a rational number , , so that we can count on and .we will argue the contrapositive , assuming that we are in neither of the four cases , and showing that the entailment does not happen , that is , it is possible to construct a counterexample dataset for which all the implications in hold , and the two premise partial rules have confidence at least , whereas the rule in the conclusion has confidence strictly below .this requires us to construct a number of counterexamples through a somewhat long case analysis . 
in all of them, all the tuples will be closed sets with respect to ; this ensures that these implications are satisfied in all the transactions .we therefore assume that case ( 1 ) does not happen , that is , ; and that cases ( 2 ) and ( 3 ) do not happen either .now , theorem [ closredchar ] tells us that implies , and that implies .along the rest of the proof , we will refer to the properties explained in this paragraph as the `` known facts '' . then , assuming that case ( 4 ) does not hold either , we have to consider multiple ways for the conditions ( i ) to ( vii ) to fail .failures of ( i ) and ( ii ) , however , can not be argued separately , and we discuss them together ._ case a_. exactly one of ( i ) and ( ii ) fails . by symmetry ,renaming into if necessary , we can assume that ( i ) fails and ( ii ) holds .thus , but .then , by the known facts , .we consider a dataset consisting of one transaction with the itemset , transactions with the set , and transactions with the set , for a total of transactions .then , the support of is either or , and the support of is at most , for a confidence bounded by for the rule .however , the premise rules hold : since ( i ) fails , the support of is at most , and the support of is at least , for a confidence at least for ; whereas the support of is , that of is at least , and therefore the confidence is at least ._ case b_. this corresponds to both of ( i ) and ( ii ) failing .then , for a dataset consisting only of s , the premise rules hold vacuously whereas fails .we can also avoid arguing through rules holding vacuously by means of a dataset consisting of one transaction and transactions . _remark_. for the rest of the cases , we will assume that both of ( i ) and ( ii ) hold , since the other situations are already covered .then , by the known facts , we can freely use the properties and . _case c_. assume ( iii ) fails , , and consider a dataset consisting of one transaction , transactions , and transactions .here , by the known facts , the support of is zero .it suffices to check that the antecedent rules hold .since ( iii ) fails , and ( i ) holds , the support of is exactly and the support of is at least , for a confidence of at least ; whereas the support of is at most ( depending on whether ( iv ) holds ) for a confidence of rule of at least which is easily seen to be above .the case where ( iv ) fails is fully symmetrical and can be argued just interchanging the roles of and ._ case d_. assume ( v ) fails .it suffices to consider a dataset with one transaction and transactions . using ( i ) and ( ii ) , for both premises the confidence is , the support of is 1 , and the support of is zero by the known fact and the failure of ( v ) ._ case e_. we assume that ( vi ) fails , but a symmetric argument takes care of the case where ( vii ) fails . thus , we have . by treating this case last , we can assume ( i ) , ( ii ) , and ( v ) hold , and also the known facts that and .we consider a dataset with one transaction , one transaction , transactions , and transactions ( note that this last part may be empty , but ; the total is transactions ) . by ( v ) , the support of is at least , whereas the support of is at most , given the available facts .since , rule does not hold .however , the premises hold : all supports are at most , the total size , and the supports of ( using ( i ) ) and are both .this completes the proof . 
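the counterexample constructions in this proof can also be explored mechanically ; the sketch below is entirely illustrative ( our own names and conventions ) and searches random small datasets for one in which both premises reach the confidence threshold while a candidate conclusion does not , treating a rule whose antecedent never occurs as holding vacuously , as in case b of the proof .

```python
import itertools
import random

def confidence(dataset, antecedent, consequent):
    """confidence of the rule antecedent -> consequent in a list of transactions
    (frozensets); a rule whose antecedent never occurs is treated as holding
    vacuously, matching the convention used in the proof."""
    covering = [t for t in dataset if antecedent <= t]
    if not covering:
        return 1.0
    return sum(1 for t in covering if consequent <= t) / len(covering)

def search_counterexample(premises, conclusion, gamma, items, trials=20000):
    """randomly search for a dataset in which every premise rule has
    confidence >= gamma but the conclusion rule does not."""
    universe = [frozenset(s) for r in range(len(items) + 1)
                for s in itertools.combinations(items, r)]
    for _ in range(trials):
        data = [random.choice(universe)
                for _ in range(random.randint(1, 12))]
        if (all(confidence(data, a, c) >= gamma for a, c in premises)
                and confidence(data, *conclusion) < gamma):
            return data
    return None
```

under the conditions of the theorem no such dataset exists ; when they fail , the explicit constructions of the case analysis above show that one does .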
a small point that remains to be clarifiedis the role of the condition .as indicated in the proof of the theorem , that condition is only necessary in one of the two directions .if there is entailment , the conditions enumerated must hold irrespective of the value of .in fact , for , proper entailment from a set of two ( or more ) premises never holds , and -entailment in general is characterized as ( closure - based ) redundancy as per theorem [ closredchar ] and the corresponding calculus . indeed : let . then , if and only if either : 1 . , or 2 . , or 3 . .[ menorunmedio ] the leftwards proof is already part of theorem [ dospremisas ] . for the converse , assume that the three conditions fail : similarly to the previous proof , we have as known facts the following : , implies and implies .we prove that there are datasets giving low confidence to and high confidence to both premise rules .if both and then we consider one transaction , one transaction , and a large number of transactions which do not change the confidences of the premises but lead to a confidence of at most for . also , if but , where the symmetric case is handled analogously , we are exactly as in case a in the proof of theorem [ dospremisas ] and argue in exactly the same way . the interesting case is when both and ; then both and .we fix any integer and use the fact that to ensure that the fraction is positive and that the inequality can be transformed , by solving for , into ( following these steps for either makes the denominator null or reverses the inequality due to a negative sign ) .we consider a dataset with one transaction for and transactions for each of and .even in the worst case that either or both of and show up in all transactions , the confidences of and are at least , whereas the confidence of is zero .we work now towards a rule form , in order to enlarge our calculus with entailment from larger sets of premises .we propose the following additional rule : ( 2a) and state the following properties : given a threshold and a set of implications , 1 .this deduction scheme is sound , and 2 .together with the deduction schemes in section [ redundcalculus ] , it gives a calculus complete with respect to all entailments with two partial rules in the antecedent .[ newrule ] this follows easily from theorem [ dospremisas ] , in that it implements the conditions of case ( 4 ) ; soundness is seen by directly checking that the conditions ( i ) to ( vii ) in case 4 of theorem [ dospremisas ] hold : let and ; then , conditions ( i ) and ( ii ) hold trivially , and the rest are explicitly required in the form of implications in the premises ( notice that implies that and are equivalent ) .completeness is argued by considering any rule entailed by and jointly with respect to confidence threshold ; if the entailment is improper , apply theorem [ calculussoundcomplete ] , otherwise just apply this new deduction scheme with and to get and apply to obtain .it is easy to see that the scheme is indeed applicable : proper entailment implies that all seven conditions in case ( 4 ) hold and , for , we get from ( i ) and ( ii ) that ; under this equality , the remaining five conditions provide exactly the premises of the new deduction scheme .our main contribution , at a glance , is a study of confidence - bounded association rules in terms of a family of notions of redundancy .we have provided characterizations of several existing redundancy notions ; we have described how these previous proposals , once the relationship to the 
most robust definitions has been clarified , provide a sound and complete deductive calculus for each of them ; and we have been able to prove global optimality of an existing basis proposal , for the plain notion of redundancy , and also to improve the constructions of bases for closure - based redundancy , up to global optimality as well .many existing notions of redundancy discuss redundancy of a partial rule only with respect to another single partial rule ; in our section [ closbasedent ] , we have moved beyond into the use of two partial rules . for this approach to redundancy , we believe that this last step has been undertaken for the first time here; the only other reference we are aware of , where a consideration is made of several partial rules entailing a partial rule , is the early , which used a much more demanding notion of redundancy in which the exact values of the confidence of the rules were both available on the premises and required in the conclusion . in our simpler context , we have shown that the following holds : for , there is no case of proper -entailment from two premises ; beyond , there are such cases , and they are fully captured in terms of set inclusion relationships between the itemsets involved .we conjecture that a more general pattern holds .more precisely , we conjecture the following : for values of the confidence parameter , such that ( where ) , there are partial rules that are properly entailed from premises , partial rules themselves , but there are no proper entailments from or more premises .that is , intuitively , higher values of the confidence threshold correspond , successively , to the ability of using more and more partial premises .however , the combinatorics to fully characterize the case of two premises are already difficult enough for the current state of the art , and progress towards proving this conjecture requires to build intuition to much further a degree .this may be , in fact , a way towards stronger redundancy notions and always smaller bases of association rules .we wish to be able to establish such more general methods to reach absolutely minimum - size bases with respect to general entailment , possibly depending on the value of the confidence threshold as per our conjecture as just stated .we observe the following : after constructing a basis , be it either the representative rules or the family , it is a simple matter to scan it and check for the existence of pairs of rules that generate a third rule in the basis according to theorem [ dospremisas ] : then , removing such third rules gives a smaller basis with respect to this more general entailment .however , we must say that some preliminary empirical tests suggest that this sort of entailments from two premises seems to appear in practice very infrequently , so that the check is computationally somewhat expensive compared to the scarce savings it provides for the basis size .now that all our contributions are in place , let us review briefly a point that we made in the introduction regarding what is expected to be the role of the basis .the statement that association rule mining produces huge outputs , and that this is indeed a problem , not only is acknowledged in many papers but also becomes self - evident to anyone who has looked at the output of any of the association miner implementations freely accessible on the web ( say for one ) .however , we do not agree that it is _ one _ problem : to us , it is , in fact , _ two _ slightly different problems , and confusing 
them may lead to controversies that are easier to settle if we understand that different persons may be interested in different problems , even if they are stated similarly . specifically , let us ask whether a huge output of an association miner is a problem for the user , who needs to receive the output of the mining process in a form that a human can afford to read and understand , or for the software that is to store all these rules , with their supports and confidences . of course , the answer is `` both '' , but the solutions may not coincide . indeed ,sophisticated conceptual advances have provided data structures to be computed from the given dataset in such a way that , within reasonable computational resource limits , they are able to give us the support and confidence of any given rule in the given dataset ; maybe a good approximation is satisfactory enough , and this may allow us to obtain some efficiency advantages .the set of frequent sets , the set of frequent closures , and many other methods have been proposed for this task ; see , , , , , , , , and the surveys and .our approach is , rather , logical in nature , and aimed at the other variant of the problem : what rules are irredundant , in a general sense . from these, redundant rules reaching the thresholds can be found , `` just as rules '' .so , we formalize a situation closer to the practitioner s process , where a confidence threshold is enforced beforehand and the rules with confidence at least are to be discussed ; but we do _ not _ need to infer from the basis the value of the confidence of each of these other rules , because we can recompute it immediately as a quotient of two supports , found in an additional data structure that we assume kept , such as the closures lattice with the supports of each closed set . 
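to make the last point concrete , recomputing the confidence of a redundant rule needs nothing beyond the stored closures and their supports ; a minimal helper along these lines ( names and representation are ours , and we assume the stored family of closures is intersection - closed and contains a superset of every queried itemset ) would be :

```python
def closure_of(itemset, closed_sets):
    """smallest closed set containing `itemset`; `closed_sets` is an iterable
    of frozensets assumed closed under intersection and containing at least
    one superset of `itemset` (e.g. the full item universe)."""
    return min((c for c in closed_sets if itemset <= c), key=len)

def rule_confidence(antecedent, consequent, closed_sets, support):
    """confidence of antecedent -> consequent, recomputed as a quotient of
    two closed-set supports, as described in the text."""
    numerator = support[closure_of(antecedent | consequent, closed_sets)]
    denominator = support[closure_of(antecedent, closed_sets)]
    return numerator / denominator
```

so the confidence of any derived rule can be answered by two lookups in the closures lattice , without re - scanning the dataset .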
therefore , our bases , namely , the already - known representative rules and our new closure - based proposal , are rather `` user - oriented '' : we know that all rules above the threshold can be obtained from the basis , and we know how to infer them when necessary ; thus , we could , conceivably , guide ( or be guided by ) the user if ( s)he wishes to see all the rules that can be derived from one of the rules in the basis ; this user - guided exploration of the rules resulting from the mining process is alike to the `` direction - setting rules '' of , with the difference that their proposal is based on statistical considerations rather than the logic - based approach we have followed .the advantage is that our basis is not required to provide as much information as the bases we have mentioned so far , because the notion of redundancy does not require us to be able to compute the confidence of the redundant rules .this is why we can reach an optimum size , and indeed , compared to or , differs because these proposals , essentially , pick all minimal generators of each antecedent , which we avoid .the difference is marginal in the conceptual sense ; however the figures in practical cases may differ considerably , and the main advantage of our construction is that we can actually prove that there is no better alternative as a basis for the partial rules with respect to closure - based redundancy .further research may proceed along several questions .we believe that a major breakthrough in intuition is necessary to fully understand entailment among partial rules in its full generality , either as per our conjecture above or against it ; variations of our definition may be worth study as well , such as removing the separate confidence parameter and requiring that the conclusion holds with a confidence at least equal to the minimum of the confidences of the premises .other questions are how to extend this approach to the mining of more complex dependencies or of dependencies among structured objects ; however , extending the development to sequences , partial orders , and trees , is not fully trivial , because , as demonstrated in , there are settings where the combinatorial structures may make redundant certain rules that would not be redundant in a propositional ( item - based ) framework ; additionally , an intriguing question is : what part of all this discussion remains true if implication intensity measures different from confidence ( , ) are used ?the author is grateful to his research group at upc and to the regular seminar attendees ; also , for many observations , suggestions , comments , references , and improvements , the author gratefully acknowledges cristina trnuc , vernica dahl , tijl de bie , jean - franois boulicaut , the participants in seminars where the author has presented this work , the reviewers of the conference papers where most of the results of this paper were announced , and the reviewers of the present paper .
|
association rules are among the most widely employed data analysis methods in the field of data mining . an association rule is a form of partial implication between two sets of binary variables . in the most common approach , association rules are parametrized by a lower bound on their confidence , which is the empirical conditional probability of their consequent given the antecedent , and/or by some other parameter bounds such as `` support '' or deviation from independence . we study here notions of redundancy among association rules from a fundamental perspective . we see each transaction in a dataset as an interpretation ( or model ) in the propositional logic sense , and consider existing notions of redundancy , that is , of logical entailment , among association rules , of the form `` any dataset in which this first rule holds must obey also that second rule , therefore the second is redundant '' . we discuss several existing alternative definitions of redundancy between association rules and provide new characterizations and relationships among them . we show that the main alternatives we discuss correspond actually to just two variants , which differ in the treatment of full - confidence implications . for each of these two notions of redundancy , we provide a sound and complete deduction calculus , and we show how to construct complete bases ( that is , axiomatizations ) of absolutely minimum size in terms of the number of rules . we explore finally an approach to redundancy with respect to several association rules , and fully characterize its simplest case of two partial premises .
|
in this paper , we will consider a _shot noise process _ which is a real - valued random process given by where is a given ( deterministic ) measurable function ( it will be called the _ kernel function _ of the shot noise process ) , the are the points of a poisson point process on the line of intensity , where and is a positive -finite measure on and the are independent copies of a random variable ( called the _ impulse _ ) , independent of .shot noise processes are related to many problems in physics as they result from the superposition of `` shot effects '' which occur at random .fundamental results were obtained by rice .daley gave sufficient conditions on the kernel function to ensure the convergence of the formal series in a preliminary work .general results , including sample paths properties , were given by rosiski in a more general setting .in most of the literature the measure is the lebesgue measure on such that the shot noise process is a stationary one . in order to derive more precise sample paths properties andespecially crossings rates , mainly two properties have been extensively exhibited and used .the first one is the markov property , which is valid , choosing a noncontinuous positive causal kernel function , that is , for negative time .this is the case , in particular , of the exponential kernel for which explicit distributions and crossings rates can be obtained .a simple formula for the expected numbers of level crossings is valid for more general kernels of this type but resulting shot noise processes are nondifferentiable .the infinitely divisible property is the second main tool .actually , this allows us to establish convergence to a gaussian process as the intensity increases .sample paths properties of gaussian processes have been extensively studied and fine results are known concerning the level crossings of smooth gaussian processes ( see , e.g. ) . the goal of the paper is to study the crossings of a shot noise process in the general case when the kernel function is smooth . in this settingwe lose markov s property but the shot noise process inherits smoothness properties .integral formulas for the number of level crossings of smooth processes was generalized to the non - gaussian case by but it uses assumptions that rely on properties of some densities , which may not be valid for shot noise processes .we derive integral formulas for the mean number of crossings function and pay a special interest in the continuity of this function with respect to the level . exploiting further on normal convergence , we exhibit a gaussian regime for the mean number of crossings function when the intensity goes to infinity. a particular example , which is studied in detail , concerns the shot noise process where almost surely and is a gaussian kernel of width , such a model has many applications because it is solution of the heat equation ( we consider as a variable ) , and it thus models a diffusion from random sources ( the points of the poisson point process ) .the paper is organized as follows . 
in section [ crossingssec ], we consider crossings for general smooth processes .we give an explicit formula for the fourier transform of the mean number of crossings function of a process in terms of the characteristic function of .one of the difficulties is then to obtain results for the mean number of crossings of a given level and not only for almost every .thus we focus on the continuity property of the mean number of crossings function .section [ crosssn ] is devoted to crossings for a smooth shot noise process defined by ( [ sn1deq ] ) . in order to get the continuity of the mean number of crossings function , we study the question of the existence and the boundedness of a probability density for . in section [ highintensitysec ] ,we show how , and in which sense , the mean number of crossings function converges to the one of a gaussian process when the intensity goes to infinity .we give rates of this convergence .finally , in section [ gaussiankernel ] , we study in detail the case of a gaussian kernel of width .we are mainly interested in the mean number of local extrema of this process , as a function of .thanks to the heat equation , and also to scaling properties between and , we prove that the mean number of local extrema is a decreasing function of , and give its asymptotics as is small or large .the goal of this section is to investigate crossings of general smooth processes in order to get results for smooth shot noise processes .this is a very different situation from the one studied in [ , ] where shot noise processes are nondifferentiable .however , crossings of smooth processes have been extensively studied , especially in the gaussian processes realm ( see , e.g. ) which are second order processes . therefore ,in the whole section , we will consider second order processes which are both almost surely and mean square continuously differentiable ( see , section 2.2 , e.g. ) .this implies , in particular , that the derivatives are also second order processes .moreover , most of known results on crossings are based on assumptions on density probabilities , which are not well adapted for shot noise processes . in this section ,we revisit these results with a more adapted point of view based on characteristic functions . when is an almost surely continuously differentiable process on , we can consider its multiplicity function on an interval ] such that is called `` crossing '' of the level .then ) ] .now we have to distinguish three different types of crossings ( see , e.g. , ) : the up - crossings that are points for which and , the down - crossings that are points for which and and the tangencies that are points for which and .let us also recall that according to rolle s theorem , whatever the level is , )\le n_{x'}(0,[a , b])+1 \qquad \mbox{a.s.}\ ] ] note that when there are no tangencies of for the level , then ) ] the mean number of crossings of the level by the process in ] for . in this case ) ] a.s .( see , e.g. ) .one way to obtain results on crossings for almost every level is to use the well - known _ co - area formula _ which is , in fact , valid in the more general framework of bounded variations functions ( see , e.g. 
, ) .when is an almost surely continuously differentiable process on ] is integrable on and ) \,d\alpha=\int_a^b |x'(t)| \,dt ] .moreover , taking the expected values we get by fubini s theorem that ) \,d\alpha=\int_a^b{\mathbb{e}}(|x'(t)|)\,dt.\ ] ] therefore , when the total variation of on ] is integrable on .this is the case when is also mean square continuously differentiable since then the function is continuous on ] for almost every level but one can not conclude for a fixed given level .however , it allows us to use fubini s theorem such that , taking expectation in ( [ coarea ] ) , for any bounded continuous function , ) \,d\alpha=\int_a^b{\mathbb{e } } ( h(x(t ) ) |x'(t)| ) \,dt . \label{sncoareaeq}\ ] ] in the following theorem we obtain a closed formula for the fourier transform of the mean number of crossings function , which only involves characteristic functions of the process .this can be helpful when considering shot noise processes whose characteristic functions are well known .[ crossingspropgenerale ] let with .let be an almost surely and mean square continuously differentiable process on ] and its fourier transform ) ] can be computed by ) & = & - \frac{1}{\pi}\int_a^b \int_{0}^{+\infty } \frac{1}{v } \biggl ( \frac{\partial\psi_t}{\partial v } ( u , v ) - \frac{\partial\psi _ t}{\partial v } ( u ,- v ) \biggr ) \,dv \,dt\\ & = & - \frac{1}{\pi } \int_a^b\int_{0}^{+\infty } \frac { 1}{v^2}\bigl(\psi_t(u , v ) + \psi_t(u ,- v ) - 2\psi_t(u,0 ) \bigr ) \,dv \ , dt.\end{aligned}\ ] ] choosing in equation ( [ sncoareaeq ] ) of the form for any real , shows that )=\int_a^b{\mathbb{e } } ( e^{iux(t ) } |x'(t)| ) \,dt ] with respect to , since is a second order random variable ] .as goes to , then goes to , and moreover , for all , and , we have , thus by lebesgue s dominated convergence theorem , the limit of exists as goes to infinity and its value is latexmath:[\ ] ] we now have to choose in an appropriate way .the choice of will be given by the bound on .assume in the following that satisfies the condition given by , and let us set then for all ] such that goes to as goes to infinity . by continuity of , we have .now , we also have .indeed , if it were , we could again apply the implicit function theorem in the same way at the point , and get a contradiction with the maximality of . then , by assumption ( b ) , we have .we can again apply the implicit function theorem , and we thus obtain that there exist two open intervals and containing , respectively , and , and a function such that and , .moreover , we can compute the derivatives of at .we start from the implicit definition of : . by differentiation , we get . taking the value at , we get .we can again differentiate , and find .taking again the value at , we get thus it shows that has a strict local maximum at ; there exist a neighborhood of and a neighborhood of such that for all points in , then implies , which is in contradiction with the definition of on .this ends the proof of ( i ) , and also of ( iii ). for ( ii ) , assume that and are two points such that and such that there exists such that .then , if , the implicit function theorem implies that for all ] .the authors are also very grateful to the anonymous referees for their careful reading and their relevant remarks contributing to the improvement of the manuscript .
|
in this paper , we consider smooth shot noise processes and their expected number of level crossings . when the kernel response function is sufficiently smooth , the mean number of crossings function is obtained through an integral formula . moreover , as the intensity increases , or equivalently , as the number of shots becomes larger , a normal convergence to the classical rice's formula for gaussian processes is obtained . the gaussian kernel function , which corresponds to many applications in physics , is studied in detail and two different regimes are exhibited .
|
in recent years , there is a great deal of attention paid to the development of high dimensional classification methods .many independence rules are proposed to deal with the situations where the correlations between variables are weak .tibshirani et al . (2002 ) proposed the nearest shrunken centroid ( nsc ) classifier .fan and fan ( 2008 ) proposed the features annealed independence rule ( fair ) .moreover , bickel and levina ( 2004 ) showed that the independence rule , naive bayes ( nb ) performs better than the naive fisher discriminant ( nfr ) where the variables are correlated . when the correlations are significant , nfr is about the same as random guess .they also showed that a classification procedure using a subset of well selected features is better than that using all the features , which typically accumulates much noise in estimating population centroids in high dimensional space .in addition , methods integrating the covariance structure have been proposed in the literature , such as support vector machines ( vapnik 1995 ) , shrunken centroids regularized discriminant analysis ( scrda ) ( guo et al . 2005 ) , sparse linear discriminant analysis ( shao et al .2011 ) and ( lange et al . 2014 ) .a recent work fan et al .( 2012 ) proposed a new method that involves correlation information , called regularized optimal affine discriminant ( road ) .interestingly enough , the classification error of the road decreases as the correlation coefficient increases .two variants are screening - based rules , named s - road1 and s - road2 , which select only 10 features and 20 features , respectively . in the simulation study , under the -small `` and equal correlation setting , the road method outperforms the available classifiers mentioned above .s - road2 also performs well , while s - road1 fails for highly correlated variables .notice that the road and its variants have to select variables in the procedure of classification .although variable selection has been extensively developed in last decades , their practical implementation still faces several difficult issues such as the choice of turning parameter or thresholding values . in this paper , we investigate whether there are straightforward methods that can have competitive performances without preliminary variable selection .in addition , existing methods mainly focus on -small '' case and the localized mean vector scenario ( see follows for exact definition ) . however , the case of -large " with comparable magnitude and the delocalized scenario are common issues in high dimensional classification .the classification rules proposed in the paper will help to handle these situations .saranadasa ( 1993 ) proposes the determinant - based ( d- ) and trace - based ( t- ) criteria .their asymptotic misclassification probabilities are established for normal populations . in this paper , we focus on the performance of these two criteria in the delocalized scenario without the normal assumption .specifically , consider two -dimensional multivariate populations and with respective mean vectors , and common covariance matrix .the parameters , and are unknown and thus estimated using training samples from and from with respective sample size and . 
a new observation vector , is known to belong to or and the aim is to find exactly its origin population .more complicated sample setting can refer to leung ( 2001 ) , which considers mixed continuous and discrete variables in each group .cheng ( 2004 ) studies the situation where the two populations have different covariance matrices .krzyko and skorzybut ( 2009 ) considers the multivariate repeated measures data with kronecker product covariance structures .let , be the two training sample mean vectors where if the vector is classified to the population , then the overall within group sum of squares and cross products matrix is while , if is classified to , then the sum is intuitively , one would decide when is in some sense " than .the d - criterion defines this smallness to be and the t - criterion defines it to be two scenarios of mean difference are defined as follows : 1 . _localized scenario _ : the difference is concentrated on a small number of variables .we set and equals to a sparse vector : , where is the sparsity size .notice that the location of the non - zero components does not influence the performance of various classifiers ._ delocalized scenario _: the difference is dispersed in most of the variables . to ease the comparison with the localized scenario, we choose the parameters such that the averaged mahalanobis distances are the same under these two scenarios .this is motivated by the fact that following fisher ( 1936 ) , the difficulty of classification mainly depends on the mahalanobis distance between two populations . more precisely , we set and the elements of are randomly drawn from the uniform distribution where is the mahalanobis distance under the localized scenario , and is a parameter chosen to fulfill the requirement where is the mahalanobis distance under the delocalized scenario .direct calculations lead to for an equal correlation structure , for and .for an autoregressive correlation structure , , we find by focusing on the delocalized scenario , simulation study is conducted to display the performances of proposed procedures . as the main contribution of this paper, we generalize the d- and t- criteria from normality to general populations and establish their asymptotic misclassification probabilities .as it will be proven , the misclassification probability of the d - criterion will depend on the mahalanobis distance between the two populations , and the misclassification probability of the t - criterion will depend on the difference of two group mean vectors and the skewness and kurtosis coefficients of the two populations and .the rest of the paper is organized as follows . in section [ sec:1 ] , the asymptotic misclassification probability of the d - criterion under general populations is derived and monte carlo experiments are conducted to compare the performance with that of several existing classification rules . in section [ sec:7 ] , the asymptotic misclassification probability of the t - criterion under general populations is derived . anda real data is used to present the competitive performance of the t - criterion .the conclusion is made at the end of the paper .technical proofs are relegated to the appendix .unlike the normal populations assumed in saranadasa ( 1993 ) , we assume that the populations and have the general form as introduced in bai and saranadasa ( 1996 ) , i.e. 
\(a ) the population has the form , where is a mixing or loading matrix , and has independent and identically distributed , centered and standardized components .moreover , and we set .\(b ) similarly , the population has the form , where has independent and identically distributed , centered and standardized components .we set and . in consequence , the new observation where in distribution and if .throughout the paper , we set , and .notice that the data - generation model ( a)(b ) are quite general meaning that the population are linear combinations of some unobservable independent component .they are also adopted in overall recent studies on high - dimensional statistics , see chen et al .( 2010 ) , li and chen ( 2012 ) , srivastava et al .( 2011 ) and etc .the d - criterion ( [ e1 ] ) is easily seen equivalent to classifying into when where involves correlation information between variables .this criterion has a straightforward form and does not need a preliminarily selected subset of features or any thresholding parameter . the associated error of misclassifying into is under the data - generation models ( a ) and ( b ) , since , we have , or , where the misclassification probability ( [ e5 ] ) is rewritten as here is the first main result of this paper .[ t1 ] under the data - generation models ( a ) and ( b ) , assume that the following hold : 1 . and , where ; 2 . and for some constant .then as , the misclassification probability ( [ e7 ] ) for the d - criterion satisfies where is the mahalanobis distance between the two populations and .the proof of the theorem is given in appendix 1 .the significance of the result is as follows .the asymptotic value of depends on the values of , and , and is independent of other characteristics of the distributions and .firstly , this asymptotic value is symmetric about , so the value remains unchanged under a switch of the populations and .secondly , if and do not have large difference , i.e. or , the asymptotic value of mainly depends on when is fixed . in other words ,the classification task becomes easier for the d - criterion when the mahalanobis distance between two populations increases as expected .when , the number of features is very close to the sample size , the classification task becomes harder for the d - criterion due to the instability of the inverse , a phenomenon well - noticed in high - dimensional statistical literature . under normal assumption ,saranadasa ( 1993 ) derived another asymptotic value for notice that , with let us comment on the difference between and .the value of does not influence on the difference significantly . without loss of generality ,let .the factor is 1/2 when and satisfy . under this setting, figure [ fig:1 ] shows the asymptotic values , and compares them to empirical values from simulations , as ranges from 0.1 to 0.9 with step 0.1 .obviously , the difference between the two values are non - negligible ranging from to .moreover , is much closer to the empirical values than .so our asymptotic result is more accurate .other experiments have shown that only when the ratio of and reaches some small values as of order , the difference between them can be negligible .( solid ) , ( dashes ) and empirical values ( dots ) with 10,000 replications under normal samples . and ranges from 50 to 450 with step 50 . 
]additional experiments are conducted to check the accuracy of the asymptotic value .figure [ fig:2 ] compares the values of to empirical values from simulations for normal samples .the empirical misclassification probabilities are very close to the theoretical values of .it s the same for both and situations .we conduct extensive tests to compare the d - criterion with several existing classification methods for high - dimensional data , the road method and its variants s - road1 ands - road2 , scrda , and the nb method , as well as the oracle .the oracle is defined following fan et al .( 2012 ) as the fisher s lda with true mean and true covariance matrix . in all simulation studies ,the number of variables is , and the sample sizes of the training and testing data in two groups are .the sparsity size is set to be . a similarsetting is used in fan et al .delocalized scenario is considered . in this part, the covariance is set to be an equal correlation matrix and correlation coefficient ranges from 0 to 0.9 with step 0.1 .simulation results for normal samples are shown in table [ tab:1 ] and a graphical summary is given in figure [ fig:3 ] including the median classification errors and standard errors .the d - criterion performs similarly to the road in terms of classification errors and is more robust than road when is smaller than 0.5 .the nb and the t - criterion lose efficiency when correlation exists in this setting . notice that the results of scrda calculated using the r package provided by guo et al .( 2005 ) are not included .the package turns out to fail in some of our settings and report " value .the percentage of failures in the simulations can reach 58% . therefore , it is unreliable to include scrda for comparison .c|cccccccc & d - criterion & road & s - road1 & s - road2 & nb & oracle & t - criterion + 0 & 9.6(1.55 ) & 9.4(2.91)&11.4(3.54)&9.6(3.24 ) & 6.6(1.23 ) & 5.6(1.13 ) & 6.2(1.18 ) + 0.1 & 9.2(1.52 ) & 8.4(2.50)&8.6(2.58)&8.4(2.50 ) & 12.4(1.57)&5.4(1.12 ) & 12.4(1.57 ) + 0.2 & 8.0(1.49 ) & 7.2(2.39)&7.4(2.42)&7.2(2.39 ) & 16.8(1.77)&4.4(1.06 ) & 16.8(1.76 ) + 0.3 & 6.4(1.37 ) & 6.0(1.87)&6.0(1.86)&6.0(1.87 ) & 20.2(1.88)&3.4(0.96 ) & 20.2(1.87 ) + 0.4 & 5.0(1.24 ) & 4.6(1.55)&4.6(1.55)&4.6(1.55 ) & 22.6(1.94)&2.4(0.82 ) & 22.6(1.94 ) + 0.5 & 3.4(1.04 ) & 3.2(1.02)&3.2(1.02)&3.2(1.02 ) & 24.6(2.00)&1.6(0.65 ) & 24.6(1.99 ) + 0.6 & 2.0(0.79 ) & 1.8(0.73)&1.8(0.74)&1.8(0.73 ) & 26.2(2.04)&0.8(0.46 ) & 26.2(2.03 ) + 0.7 & 0.8(0.51 ) & 0.8(0.47)&0.8(0.47)&0.8(0.47 ) & 27.4(2.06)&0.2(0.26 ) & 27.4(2.05 ) + 0.8 & 0.2(0.22 ) & 0.2(0.20)&0.2(0.20)&0.2(0.20 ) & 28.6(2.08)&0.0(0.09 ) & 28.6(2.07 ) + 0.9 & 0.0(0.02 ) & 0.0(0.02)&0.0(0.02)&0.0(0.02 ) & 29.6(2.10)&0.0(0.00 ) & 29.6(2.10 ) + c|cccccccc & d - criterion & road & s - road1 & s - road2 & nb & oracle & t - criterion + 0 & 12.0(1.55 ) & 9.0(2.76)&9.0(2.80)&9.0(3.24 ) & 9.1(1.29 ) & 7.8(1.29 ) & 8.6(1.24 ) + 0.1 & 11.6(1.56 ) & 9.8(3.11)&15.2(6.32)&11.6(3.61 ) & 15.2(4.17)&7.6(1.27 ) & 14.8(3.40 ) + 0.2 & 10.4(1.48 ) & 8.6(2.81)&19.6(6.76)&11.4(3.44 ) & 19.2(7.00)&6.6(1.23 ) & 19.0(5.79 ) + 0.3 & 9.0(1.38 ) & 7.4(2.36)&24.0(7.26)&10.6(3.00 ) & 22.4(8.83)&5.6(1.16 ) & 22.0(7.58 ) + 0.4 & 7.6(1.27 ) & 6.0(1.50)&27.6(8.06)&9.2(2.73 ) & 24.8(10.15)&4.6(1.06 ) & 24.2(8.99 ) + 0.5 & 6.0(1.13 ) & 4.8(1.00)&28.9(9.35)&7.8(2.26 ) & 27.0(11.11)&3.4(0.91 ) & 26.2(10.11 ) + 0.6 & 4.4(0.97 ) & 3.4(0.84)&29.2(10.83)&6.0(1.73 ) & 29.0(11.90)&2.4(0.75 ) & 27.6(11.02 ) + 0.7 & 2.8(0.78 ) & 2.0(0.65)&29.2(12.32)&4.0(1.26 ) & 
30.6(12.51)&1.4(0.57 ) & 29.0(11.79 ) + 0.8 & 1.2(0.53 ) & 0.8(0.43)&28.8(13.74)&2.0(0.90 ) & 32.0(13.01)&0.6(0.36 ) & 30.2(12.44 ) + 0.9 & 0.2(0.23 ) & 0.2(0.20)&28.6(15.06)&0.4(0.39 ) & 33.4(13.35)&0.0(0.14 ) & 31.2(12.96 ) + simulation results for student s t ( degree of freedom is set to be 7 ) samples are shown in table [ tab:2 ] .all classifiers have slightly higher misclassification rates for student s t samples. s - road1 and s - road2 have larger standard errors . and s - road1 , nb and t - criterion lose efficiency when correlation is significant .the d - criterion outperforms the others except road in term of classification error .but the d - criterion has the smallest standard error which is close to that of oracle . in this part, the covariance is set to be an autoregressive correlation matrix and ranges from 0 to 0.9 with step 0.1 .previous the results have shown that nb is not a good rule when significant correlation exists . therefore , nb is no more included in comparison .since the comparison results are similar in normal samples and studentt t samples , we only use normal samples in this part .simulation results are shown in table [ tab:3 ] and a graphical summary is given in figure [ fig:4 ] .the t - criterion is only suitable for independent case , and loses efficiency when .the d - criterion has the same performance with road and s - road2 in terms of classification error .moreover , the d - criterion is much more robust and has a standard error close to that of the oracle .c|ccccccc & d - criterion & road & s - road1 & s - road2 & oracle & t - criterion + 0 & 9.6 ( 1.55 ) & 9.4 ( 2.91 ) & 11.6 ( 3.54)&9.6 ( 3.24 ) & 5.6(1.13)&6.2(1.18 ) + 0.1 & 11.8(1.68 ) & 11.4(3.42)&12.8(3.67)&11.6(3.61 ) & 0.0(0.09)&8.0(1.31 ) + 0.2 & 14.2(1.80 ) & 13.4(4.27)&14.4(4.02)&13.6(4.39)&0.0(0.15)&10.0(1.44 ) + 0.3 & 16.4(1.89 ) & 15.4(5.48)&16.0(4.61)&15.6(5.55)&0.4(0.33)&12.2(1.57 ) + 0.4 & 18.6(1.99 ) & 17.4(6.78)&17.8(5.95)&17.6(6.73)&1.8(0.64)&14.8(1.70 ) + 0.5 & 20.8(2.07 ) & 19.6(7.54)&20.0(7.29)&19.8(7.52)&4.6(1.02)&17.8(1.81 ) + 0.6 & 22.6(2.16 ) & 22.0(7.53)&22.6(7.34)&22.2(7.46)&8.6(1.38)&21.4(1.92 ) + 0.7 & 23.6(2.26 ) & 23.8(7.71)&26.0(7.54)&24.0(7.64)&12.6(1.71)&25.0(2.03 ) + 0.8 & 22.8(2.38 ) & 23.2(8.14)&30.6(7.67)&23.8(8.19)&14.6(1.94)&31.0(2.12 ) + 0.9 & 17.0(2.39 ) & 17.0(7.31)&33.4(9.13 ) & 18.0 ( 8.26)&11.4(1.93)&37.0(2.19 ) + in conclusion , compared to these existing methods , the d - criterion is competitive for -large " situation specifically under delocalized scenario and autoregressive correlation structure . in such a scenario ,the d - criterion has a classification error comparable to that of the road - family classifiers while being the most robust with a much smaller standard error close to that of the oracle .notice that one limitation of the d - criterion is that the dimension must be smaller than the sample size .in addition , when the ratio is close to , the performance of this criterion becomes bad due to the matrix is close to singular .the t - criterion in contrast does not have such a limitation .the t - criterion ( [ e2 ] ) is easily seen equivalent to obviously , the t - criterion has a very simple form only involving the group mean vectors .in particular , it does not require to select a subset of features or to choose a threshold parameter . when , the error of misclassifying into is here is the second main result of this paper . 
throughout the paper , is a length vector with all entries 1 , is a length vector with all entries 0 .[ t2 ] under the data - generation models ( a ) and ( b ) , assume that the following hold : 1 . and for some constant ; 2 .the covariance matrix is diagonal , i.e. ; 3 . ; and 4 . as , where . then we have as and , where the proof of the theorem is given in appendix 2 .assumption 1 is needed for dealing with non - normal populations .assumption 3 is a weak and technical condition without any practical limitation .assumption 4 is satisfied for most applications where typically and are all of order .the main term of is , since it has the order and other terms are . in order to get more accurate result in finite sample case ,these terms are kept in the theorem .notice that the main term of the approximation of depends on the ratio .if the components of satisfy , and for positive constants , then when , and in other words , the classification task becomes easier when the dimension grows .in other scenarios , this misclassification probability is not guaranteed to vanish .for example , under a localized scenario , , for and is fixed and independent of , then next , we provide below some simulation results to demonstrate the importance of keeping the terms in .the experiments use and various combinations of sample sizes with normal samples and gamma samples , respectively .empirical classification errors are compared in figure [ fig:5 ] to the following three approximations of the variance : * ; * ; * . among the three ,the proposed approximation matches very well the empirical values , while is by far the worst in all tested cases . as for and , they are by definition the same for normal samples ( since ) . for gamma samples ,they remain close each other particularly when the relative difference of sample sizes become small , and has an overall slightly better performance than ( in these tested cases ) .notice that the gamma standardized variables are where is gamma distributed with unit shape and scale parameters so that . under normal assumption, the expectation of ( defined in appendix ) is the same with ( [ e18 ] ) , and the variance simplifies to which coincides with the result established in saranadasa ( 1993 ) .we conduct simulations to show the performances of the t - criterion for normal distributions under delocalized scenario . in the simulation studies ,the number of variables is . without loss of generality , the sample sizes of the training and testing data in two groups are equal and range from 100 to 500 with step 50 .the covariance is set to be an identity matrix and the sparsity size is .simulation results are shown in table [ tab:4 ] .the classification error decreases as sample size increases .meanwhile , small standard errors indicate that the t - criterion is robust with respect to the delocalization nature of mean differences .notice that the t - criterion is an independence rule .it s suitable for case where variables are independent or the correlations between variables are weak .as shown in tables 1 - 3 , the t - criterion has very high misclassification rate when variables have significant correlations .c|cccc|ccccc & & + & 100 & 150 & 200 & 250 & 300 & 350 & 400 & 450 & 500 + median & 13.00 & 11.00 & 9.75 & 9.00 & 8.50 & 8.14 & 7.88 & 7.56 & 7.40 + s.e . &( 2.52 ) & ( 1.90 ) & ( 1.57 ) & ( 1.35 ) & ( 1.20 ) & ( 1.11 ) & ( 1.01 ) & ( 0.95 ) & ( 0.89 ) + in this part , we analyze a popular gene expression data : ( golub et al . 
1999). the leukemia data set contains 7129 genes for 27 acute lymphoblastic leukemia and 11 acute myeloid leukemia vectors in the training set. the testing set includes 20 acute lymphoblastic leukemia and 14 acute myeloid leukemia vectors. this data set is clearly a "large p, small n" case. the classification results for the t-criterion, road, scrda, fair, nsc and nb methods are shown in table [tab:5]. (the results for road, scrda, fair, nsc and nb are taken from fan et al. (2012).) the t-criterion is as good as road and nb in terms of training error. road and fair perform better than the t-criterion in terms of testing error. both nb and the t-criterion make use of all genes, but the t-criterion outperforms nb. meanwhile, the t-criterion performs better than nsc. overall, on this data set, the t-criterion outperforms scrda, nsc and nb, performs equally well as fair, and is beaten only by road (2 vs. 1 testing errors). it is quite surprising that such a simple rule as the t-criterion has a performance comparable to a sophisticated rule like road.

method & training error & testing error & no. of genes used +
t-criterion & 0 & 2 & 7129 +
road & 0 & 1 & 40 +
scrda & 1 & 2 & 264 +
fair & 1 & 1 & 11 +
nsc & 1 & 3 & 24 +
nb & 0 & 5 & 7129 +

we have proposed two new classification rules for high-dimensional data, namely the d-criterion and the t-criterion. both methods consider the overall within-group sums of squares and cross-products matrices. the d-criterion compares the determinants of these matrices and thereby integrates the correlation information between variables. the d-criterion performs well when correlations between variables become significant; as the correlation coefficient increases, its classification error drops. the incorporation of the covariance structure therefore strengthens its effectiveness in high-dimensional classification. the t-criterion, on the other hand, compares the traces of these matrices and involves only the group mean vectors. the implementation of the two criteria is straightforward and does not suffer from challenging issues such as variable selection, thresholding or control of the sparsity size that are required by existing methods. (for concreteness, a schematic implementation of both rules is sketched after the references.) we found the d-criterion to be particularly competitive in the delocalized scenario. when the dimension exceeds the sample size, the t-criterion is quite effective, as demonstrated by the real data analysis. moreover, using the explicit forms of the criteria and recent results from random matrix theory, we are able to derive asymptotic approximations for the misclassification probabilities of both criteria. such asymptotic approximations are unknown for most of the existing high-dimensional classifiers in the literature. simulation results have shown that the proposed approximations are quite accurate for both normal and non-normal populations.

under the data-generation models (a) and (b), let . conditioned on , the misclassification probability ([e7]) can be rewritten as where therefore , where is assumed implicitly . it is easy to obtain the conditional expectation ([e13]).
for the conditional variance of ,we first calculate the conditional second moment ^ 2 \\ & & \quad \quad + \alpha_2 ^ 2 [ \mathbf{z}^{\ast \prime } \tilde{\mathbf{a}}^{-1}\mathbf{z}^{\ast } -2 ( \bar{\mathbf{y}}^\ast + \tilde{\vec{\mu}})^\prime \tilde{\mathbf{a}}^{-1}\mathbf{z}^{\ast } + ( \bar{\mathbf{y}}^\ast + \tilde{\vec{\mu}})^\prime \tilde{\mathbf{a}}^{-1 } ( \bar{\mathbf{y}}^\ast + \tilde{\vec{\mu}})]^2 \\ & & \quad \quad-2\alpha_1\alpha_2 [ \mathbf{z}^{\ast \prime } \tilde{\mathbf{a}}^{-1}\mathbf{z}^{\ast } - 2 \bar{\mathbf{x}}^{\ast \prime } \tilde{\mathbf{a}}^{-1}\mathbf{z}^{\ast } + \bar{\mathbf{x}}^{\ast \prime } \tilde{\mathbf{a}}^{-1 } \bar{\mathbf{x}}^{\ast } ] \\ & & \quad \quad \quad \quad \times [ \mathbf{z}^{\ast \prime } \tilde{\mathbf{a}}^{-1}\mathbf{z}^{\ast } -2 ( \bar{\mathbf{y}}^\ast + \tilde{\vec{\mu}})^\prime \tilde{\mathbf{a}}^{-1}\mathbf{z}^{\ast } + ( \bar{\mathbf{y}}^\ast + \tilde{\vec{\mu}})^\prime \tilde{\mathbf{a}}^{-1 } ( \bar{\mathbf{y}}^\ast + \tilde{\vec{\mu } } ) ] \big\}.\end{aligned}\ ] ] since ^ 2 = ( \gamma_x -3 ) \sum_l b_{ll}^2 + \big(\textrm{tr } \tilde{\mathbf{a}}^{-1}\big)^2 + 2 \textrm{tr } ( \tilde{\mathbf{a}}^{-2});\\ & & \mathit{e}_\omega\left[\mathbf{z}^{\ast \prime } \tilde{\mathbf{a}}^{-1}\mathbf{z}^{\ast}\cdot \bar{\mathbf{x}}^{\ast \prime } \tilde{\mathbf{a}}^{-1}\mathbf{z}^{\ast}\right ] = \theta_x \sum_l b_{ll } \big(\tilde{\mathbf{a}}^{-1}\bar{\mathbf{x}}^{\ast}\big)_l;\\ & & \mathit{e}_\omega\left [ \mathbf{z}^{\ast \prime } \tilde{\mathbf{a}}^{-1}\mathbf{z}^{\ast}\cdot ( \bar{\mathbf{y}}^\ast + \tilde{\vec{\mu}})^\prime \tilde{\mathbf{a}}^{-1}\mathbf{z}^{\ast } \right ] = \theta_x \sum_l b_{ll } \big(\tilde{\mathbf{a}}^{-1}(\bar{\mathbf{y}}^\ast + \tilde{\vec{\mu}})\big)_l ; \\ & & \mathit{e}_\omega \left [ \bar{\mathbf{x}}^{\ast \prime } \tilde{\mathbf{a}}^{-1}\mathbf{z}^{\ast } \cdot \mathbf{z}^{\ast \prime}\tilde{\mathbf{a}}^{-1 } \bar{\mathbf{x}}^{\ast}\right ] = \bar{\mathbf{x}}^{\ast \prime } \tilde{\mathbf{a}}^{-2}\mathbf{x}^{\ast};\\ & & \mathit{e}_\omega \left [ ( \bar{\mathbf{y}}^\ast + \tilde{\vec{\mu}})^\prime \tilde{\mathbf{a}}^{-1}\mathbf{z}^{\ast } \cdot \mathbf{z}^{\ast \prime } \tilde{\mathbf{a}}^{-1 } ( \bar{\mathbf{y}}^\ast + \tilde{\vec{\mu}})\right ] = ( \bar{\mathbf{y}}^\ast + \tilde{\vec{\mu}})^\prime \tilde{\mathbf{a}}^{-2 } ( \bar{\mathbf{y}}^\ast + \tilde{\vec{\mu}}),\end{aligned}\ ] ] we obtain finally , by equation ( [ e14 ] ) follows . the lemma [ l2 ] is proved .the first step of the proof of theorem [ t1 ] is similar to the one of the proof of theorem [ t2 ] where we ensure that satisfies the lyapounov condition .the details are referred to ( [ e14 ] ) .therefore , conditioned on , as , the misclassification probability for the d - criterion satisfies next , we look for main terms in and , respectively , using lemma [ l2 ] . for , we find the following equivalents for the three terms by the assumption 2 in theorem [ t2 ] , the covariance matrix is . under the data - generation models ( a ) and ( b ), the misclassification probability ( [ e10 ] ) can be rewritten as where 1 . + and 2 . + and \textrm{tr}(\mathbf{\sigma}^2 ) + \beta_2(\theta ) \mathbf{i}^\prime \gamma^3\vec{\delta } + 4\alpha_2\vec{\delta}^\prime \mathbf{\sigma}\vec{\delta},\label{e19}\end{aligned}\ ] ] where if removing the small terms with order , then the formula of in theorem [ t2 ] is obtained . 
for the variance , we have ^ 2 \nonumber \\ & = & \sigma_{ll}^2 \cdot \mathit{e}\left\{\alpha_1 ( z^\ast_l-\bar{x}^\ast_l)^2-\alpha_2(z^\ast_l -\bar{y}^\ast_l-\tilde{\mu}_l)^2 + \alpha_2\tilde{\mu}_l^2\right\}^2\nonumber \\[1 mm ] & = & \sigma_{ll}^2\cdot \big\{\alpha_1 ^ 2 \mathit{e}(z^\ast_l-\bar{x}^\ast_l)^4 + \alpha_2 ^ 2 \mathit{e}(z^\ast_l-\bar{y}^\ast_l)^4 + 4 \alpha_2 ^ 2\tilde{\mu}_l^2 \mathit{e}(z^\ast_l-\bar{y}^\ast_l)^2 \nonumber \\[1 mm ] & & \hskip1 cm -2\alpha_1\alpha_2 \mathit{e}\left[(z^\ast_l-\bar{x}^\ast_l)^2 ( z^\ast_l-\bar{y}^\ast_l)^2\right ] -4 \alpha_2 ^ 2 \tilde{\mu}_l \mathit{e}(z^\ast_l -\bar{y}^\ast_l)^3 \nonumber \\[1 mm ] & & \hskip1 cm + 4\alpha_1 \alpha_2 \tilde{\mu}_l \mathit{e}[(z^\ast_l-\bar{x}^\ast_l)^2(z^\ast_l-\bar{y}^\ast_l)]\big\}.\end{aligned}\ ] ] moreover , ^ 4 & = & \gamma_x\left(1+\frac{1}{n_1 ^ 3}\right ) + \frac{6n_1 ^ 2 + 3n_1 - 3}{n_1 ^ 3},\\ \mathit{e}[z^\ast_l-\bar{y}^\ast_l]^4 & = & \gamma_x+\frac{\gamma_y}{n_2 ^ 3}+\frac{6n_2 ^ 2 + 3n_2 - 3}{n_2 ^ 3},\\ \mathit{e}[z^\ast_l-\bar{y}^\ast_l]^2 & = & \alpha_2^{-1},\\ \mathit{e}[z^\ast_l -\bar{y}^\ast_l]^3 & = & \theta_x -\frac{\theta_y}{n_2 ^ 2},\\ \mathit{e}\left\{[z^\ast_l-\bar{x}^\ast_l]^2[z^\ast_l-\bar{y}^\ast_l]^2\right\ } & = & \gamma_x + \frac{1}{\alpha_1\alpha_2}-1,\end{aligned}\ ] ] and finally , we obtain + \alpha_2 ^ 2\left[\gamma_x+\frac{\gamma_y}{n_2 ^ 3 } + \frac{6n_2 ^ 2 + 3n_2 - 3}{n_2 ^ 3}\right]\\ & & \quad \quad + 4\alpha_2 ^ 2\tilde{\mu}_l^2\alpha_2^{-1 } -2\alpha_1\alpha_2 \left[\gamma_x + \frac{1}{\alpha_1\alpha_2}-1\right ] + 4\alpha_1\alpha_2\tilde{\mu}_l \theta_x -4\alpha_2 ^ 2 \tilde{\mu}_l \left[\theta_x-\frac{\theta_y}{n_2 ^ 2}\right ] \bigg\}\\ & = & \sigma_{ll}^2 \bigg\ { \gamma_x\left(\alpha_1 ^ 2+\frac{\alpha_1 ^ 2}{n_1 ^ 3}+\alpha_2 ^ 2 - 2\alpha_1\alpha_2\right ) + \frac{\alpha_2 ^ 2\gamma_y}{n_2 ^ 3 } + \alpha_1 ^ 2\frac{6n_1 ^ 2 + 3n_1 - 3}{n_1 ^ 3}\\ & & \quad \quad + \alpha_2 ^ 2\frac{6n_2 ^ 2 + 3n_2 - 3}{n_2 ^ 3}-2 + 4\alpha_2\tilde{\mu}_l^2 + 2\alpha_1\alpha_2 + 4\alpha_2(\alpha_1-\alpha_2)\tilde{\mu}_l \theta_x + \frac{4\tilde{\mu}_l}{n_2 ^ 2}\theta_y \bigg\}\\ & = & \sigma_{ll}^2\big\{\beta_0+ \beta_1(\gamma ) + \beta_2(\theta)\tilde{\mu}_l + 4\alpha_2\tilde{\mu}_l^2\big\}.\end{aligned}\ ] ] equation ( [ e19 ] ) follows .then can be rewritten as \textrm{tr}(\mathbf{\sigma}^2)\\ & & + \left [ 4\frac{n_2}{n_2 + 1}\left(\frac{1}{n_2 + 1}-\frac{1}{n_1 + 1}\right)\theta_x+ \frac{4}{n_2 ^ 2}\theta_y \right]\mathbf{1}_p^\prime \gamma^3\vec{\delta}\\ & & + 4\frac{n_2}{n_2 + 1}\vec{\delta}^\prime \mathbf{\sigma}\vec{\delta}\\ & \approx & \big [ \frac{4}{n_1}+\frac{4}{n_2}+\frac{3}{n_1 ^ 2 } + \frac{3}{n_2 ^ 2 } + \frac{2}{n_1n_2 } -\frac{3}{n_1 ^ 3 }-\frac{3}{n_2 ^ 3 } + \frac{\gamma_x}{n_1 ^ 2 } + \frac{\gamma_y}{n_2 ^ 2 } -\frac{2\gamma_x}{n_1n_2 } + \frac{\gamma_x}{n_1 ^ 3 } -\frac{\gamma_y}{n_2 ^ 3}\big]\textrm{tr}(\mathbf{\sigma}^2)\\ & & + \left[4\left(\frac{1}{n_2}-\frac{1}{n_1}\right)\theta_x + \frac{4}{n_2 ^ 2}\theta_y\right]\mathbf{1}_p^\prime \gamma^3 \vec{\delta}\\ & & + 4\left(1-\frac{1}{n_2}\right)\vec{\delta}^\prime \mathbf{\sigma}\vec{\delta}.\end{aligned}\ ] ] only keep the terms with order and we can get the formula of in theorem [ t2 ] . 
the lemma [ l3 ] is proved .we know that {1\leq l \leq p} ] , that is , there is a constant such that \to 0.\end{aligned}\ ] ] since the of $ ] is + \big|\tilde{\mu}_l\big|^2 \right\}\\ & \leq & \sigma_{ll } \left\ { 6\left[2\gamma_{4+b^\prime , x}^{1/(4+b^\prime ) } + \gamma_{4+b^\prime , y}^{1/(4+b^\prime)}\right ] + \big|\tilde{\mu}_l\big|^2 \right\}.\end{aligned}\ ] ] then ^{2+b } \leq c_b \sigma_{ll}^{2+b } \cdot \left\ { 1 + \big|\tilde{\mu}_l\big|^{4+b^\prime } \right\},\end{aligned}\ ] ] where is some constant depending on . therefore , as , ^{2+b } & \leq & c_b \cdot \frac{\sum_l \sigma_{ll}^{2+b } + \sum_l \sigma_{ll}^{2+b}|\tilde{\mu}_l|^{4 + 2b}}{\left ( \sum_l \sigma_{ll}\delta_l^2 \right)^{1+b/2}}\\ & = & c_b\cdot \frac{\sum_l \sigma_{ll}^{2+b } + \sum_l \delta_l^{4 + 2b}}{(\sum_l \sigma_{ll}\delta_l^2)^{1+b/2 } } \quad \to 0,\end{aligned}\ ] ] by the assumption 4 in theorem [ t2 ] . finally , we have \rightarrow n(0,1 ) , \ as \ p\to \infty ,\ n_\ast \to \infty.\end{aligned}\ ] ] this ends of the proof of theorem [ t2 ] .golub tr , slonim dk , tamayo p , huard c , gaasenbeek m , mesirov jp , coller h , loh ml , downing jr , caligiuri ma , bloomfield cd , lander es ( 1999 ) molecular classification of cancer : class discovery and class prediction by gene expression monitoring .sci 286:531 - 537 saranadasa h ( 1993 ) asymptotic expansion of the misclassification probabilities of d- and a - criteria for discrimination from two high dimensional populations using the theory of large dimensional random matrices .j multivar anal 46:154 - 174
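for concreteness, the following minimal sketch shows how the two classification rules can be implemented. it is our illustration rather than the authors' code: the function names are ours, and the factor n_i/(n_i+1) reflects the standard rank-one update of the within-group sscp matrix when the new observation is tentatively assigned to group i; the quadratic-form version of the d-criterion then follows from the matrix determinant lemma. if the criteria in the paper use a different normalization, the constants below change accordingly.

```python
import numpy as np

def group_summaries(x1, x2):
    # Group means and pooled within-group sum of squares and
    # cross-products (SSCP) matrix of the training data.
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    a = (x1 - m1).T @ (x1 - m1) + (x2 - m2).T @ (x2 - m2)
    return m1, m2, a, x1.shape[0], x2.shape[0]

def t_criterion(z, m1, m2, n1, n2):
    # Trace-based rule: assign z to the group whose within-group SSCP
    # increases least (in trace) when z is tentatively added to it.
    d1 = n1 / (n1 + 1.0) * np.sum((z - m1) ** 2)
    d2 = n2 / (n2 + 1.0) * np.sum((z - m2) ** 2)
    return 1 if d1 < d2 else 2

def d_criterion(z, m1, m2, a, n1, n2):
    # Determinant-based rule: comparing determinants of the updated SSCP
    # matrices reduces, via the matrix determinant lemma, to comparing
    # Mahalanobis-type distances with respect to the pooled SSCP matrix.
    # This needs a to be invertible, hence p no larger than n1 + n2 - 2.
    a_inv = np.linalg.inv(a)
    d1 = n1 / (n1 + 1.0) * (z - m1) @ a_inv @ (z - m1)
    d2 = n2 / (n2 + 1.0) * (z - m2) @ a_inv @ (z - m2)
    return 1 if d1 < d2 else 2

# toy usage with hypothetical data dimensions
rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, size=(60, 20))
x2 = rng.normal(0.5, 1.0, size=(60, 20))
z = rng.normal(0.0, 1.0, size=20)
m1, m2, a, n1, n2 = group_summaries(x1, x2)
print(t_criterion(z, m1, m2, n1, n2), d_criterion(z, m1, m2, a, n1, n2))
```

as in the paper, the d-criterion needs the pooled sscp matrix to be invertible, which is why it requires the dimension to be smaller than the total sample size, whereas the t-criterion involves only the group means and remains usable when the dimension exceeds the sample size.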
in this paper, we generalize two criteria, the determinant-based and trace-based criteria proposed by saranadasa (1993), to general populations for high-dimensional classification. these two criteria compare distances between a new observation and several known groups. the determinant-based criterion performs well for correlated variables by integrating the covariance structure and is competitive with many other existing rules. the criterion, however, requires the measurement dimension to be smaller than the sample size. the trace-based criterion, in contrast, is an independence rule and is effective in the "large dimension-small sample size" scenario. an appealing property of these two criteria is that their implementation is straightforward and there is no need for preliminary variable selection or tuning parameters. their asymptotic misclassification probabilities are derived using the theory of large-dimensional random matrices. their competitive performance is illustrated by intensive monte carlo experiments and a real data analysis.
invasion fronts play an important organizing role in spatially extended systems , with applications ranging from ecology to material science .they arise when a system is quenched into an unstable state and spatially localized fluctuations drive the system away into a more stable state . in an idealized situation ,fluctuations are reduced to one small localized perturbation of the initial state .such an initial disturbance then grows and spreads spatially , leaving behind a new state of the system .beyond such an idealized scenario , one might expect that localization of fluctuations is quite unlikely in a large system .the mechanism of a spatially spreading disturbance is however still relevant , at least for the description of transients , when initial disturbances are localized at several well separated locations in physical space . on the other hand ,in particular for problems in ecology , unstable states prevail over large parts of space without disturbance because invasive species are simply absent in most of the domain , and spreading of the invasion is mediated by slow diffusive motion combined with exponential growth . in general , localization of disturbancescan be achieved systematically when the system is quenched into an unstable state in a spatially uniformly progressing way , a scenario particularly relevant in a number of engineering applications .conceptually , one is interested in two aspects of the invasion process : 1 . what is the invasion speed ?2 . what is the state in the wake selected by the invasion process ? the first question is most natural in ecological contexts while the second question occurs naturally in manufacturing and engineering applications. both questions are clearly intimately related .one may envision , for instance , that different invasion speeds are associated with different possible states in the wake of the front and , in a simple scenario of almost linear superposition , the state in the wake of a primary invasion would be the fastest spreading state .the present work focuses mostly on the first question , while pointing out in some situations the intimate connection with the second aspect . in many simple , mostly scalar contexts , invasion speedsare well defined and can be characterized in various fashions , for instance using generalized eigenvalue problems or min - max principles . the key ingredient to almost all those characterizations is an order preservation property of the nonlinear evolution of the system .while very effective when available , such a property is intrinsically violated whenever we are interested in pattern - forming systems . in a somewhat less comprehensive and less rigorous fashion , spreading speeds have been known to be related to concepts of absolute and convective instability .invasion speeds are characterized as critical states : an observer traveling at the spreading speed observes a marginally stable system .this characterization originates in the theory of absolute and convective instability , motivated originally by studies of plasma instabilities with many applications in fluid dynamics . without striving to give a comprehensive ( or even adequate ) review of the relevant literature , we will pursue this approach in a systematic fashion . trying to press some folklore observations into precise lemmas, we uncover a number of apparently unknown ( or at least under - appreciated ) aspects of convective and absolute instabilities , which directly impact the characterization of spreading speeds . 
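to fix ideas, the following standard scalar example (our illustration; it is not one of the examples worked out below) shows how the marginal-stability characterization produces a spreading speed. for the linearization u_t = u_xx + u, passing to a frame moving with speed s and inserting the ansatz u ~ exp(lambda*t + nu*x) gives the comoving dispersion relation lambda = nu^2 + s*nu + 1. the double-root condition d = d_nu = 0 selects lambda = 1 - s^2/4, which is marginally stable precisely at the classical linear spreading speed s = 2. the short sympy computation below carries this out.

```python
import sympy as sp

s, nu, lam = sp.symbols('s nu lambda')

# Dispersion relation of u_t = u_xx + u in a frame moving with speed s:
# substitute u ~ exp(lambda*t + nu*x) into u_t = u_xx + s*u_x + u.
d = nu**2 + s*nu + 1 - lam

# Double-root (pinching) condition: d = 0 and d_nu = 0 simultaneously.
sol = sp.solve([d, sp.diff(d, nu)], [lam, nu], dict=True)[0]
print(sol)        # {lambda: 1 - s**2/4, nu: -s/2}

# Marginal pointwise stability in the comoving frame: Re(lambda) = 0.
speeds = sp.solve(sp.Eq(sol[lam], 0), s)
print(speeds)     # [-2, 2]  -> linear spreading speed s* = 2
```

the same double-root computation is what the notion of pinched double roots, taken up below, makes precise for systems.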
as a general rule ,the analysis here is linear , but intrinsically motivated by the desire to derive criteria and consequences for nonlinear invasion processes . beyond providing a precise language for the mathematically inclined reader interested in this approach to invasion problems, we point to several interesting phenomena that deserve further exploration . in particular , we highlight two main results of this work : [ [ multiple - spreading - modes . ] ] multiple spreading modes .+ + + + + + + + + + + + + + + + + + + + + + + + + we show that a number of intriguing subtleties are associated with multiple , degenerate spreading modes .we show that in this case growth modes and spreading speeds lack continuity properties with respect to system parameters and point to consequences for nonlinear invasion problems .[ [ multi - dimensional - spreading - forms - stripes . ] ] multi - dimensional spreading forms stripes . + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we prove a fundamental result on multi - dimensional invasion processes which states , loosely speaking , that linearly determined multi - dimensional pattern - forming invasion processes _ always _ select stripes , or one - dimensional patterns , in the leading edge .more complex patterns such as squares or hexagons are always consequences of secondary invasion processes .[ [ outline . ] ] outline .+ + + + + + + + this paper is organized as follows .we will characterize pointwise growth rates through pointwise green s functions in section [ s:1 ] , thus distinguishing between convective and absolute instabilities in a quantitative and systematic fashion .in section [ s:2a ] , we consider the positive half line and the influence of boundary conditions on the pointwise growth rates . in section [ s : alg ] , we recall the classical characterization of pointwise growth rates via pinched double roots .we show that pinched double roots may overestimate pointwise growth rates but are generically equivalent to singularities of the green s function .we discuss more generally properties of both concepts in section [ s : prop ] . in section[ s : sp ] , we introduce comoving frames and spreading speeds .section [ s : spmult ] contains our main result on multi - dimensional spreading behavior .finally , we discuss several extensions in section [ s : dis ] such as nonlinear problems , problems with periodic coefficients , and localized spatial inhomogeneities .the authors acknowledge partial support through nsf ( dms-1004517 ( mh ) , dms-0806614(as ) , dms-1311740(as ) ) .this research was initiated during an nsf - sponsored reu program in the summer of 2012 .we are grateful to koushiki bose , tyler cox , stefano silvestri and patrick varin for working out some of the examples in this article .we also thank ryan goh for many stimulating discussions related to the material in sections [ s : alg ] and [ s : prop ] .in this section , we study pointwise growth in terms of properties of the convolution kernel of the resolvent .we focus on parabolic systems , section [ s:1.1 ] , discuss differences between pointwise stability and stability in norm , section [ s:1.2 ] , and give a general , geometric characterization of pointwise growth in section [ s:1.3 ] .we conclude with a number of examples that will also be relevant later , section [ sec : examples ] . in this section , we will review the notion of convective and absolute instabilities in dissipative systems . 
in particular , we study instabilities related to invasion phenomena in reaction - diffusion systems .first consider a general system of parabolic equations where is a constant coefficient elliptic operator of order .that is , is a vector - valued polynomial of order such that the order coefficients satisfy strict ellipticity .more explicitly , we write with multi - index notation so that and .we then require that the -matrices satisfy strict ellipticity .equivalently , there exists some such that for all and .our main interest is in `` generic '' compactly supported ( smooth ) initial conditions and their behavior as .the spectrum of in translation - invariant spaces such as consists of continuous essential spectrum only , that is , is not fredholm with index 0 when it is not invertible .the spectrum can be determined from the dispersion relation as this can be readily established in using a fourier transform , but it also holds in most other translation - invariant norms such as and .we say is stable if and is unstable if .on the other hand , choosing exponentially weighted norms , for some weight - vector , one finds this can be readily established using the isomorphism which conjugates and .our interest here is in a slightly different notion of stability , where one poses the system on but measures stability or instability only in bounded regions of .we will see that stability and instability properties do not depend on the particular choice of the window of observation and we refer to this type of stability property as _ pointwise stability _ or _ pointwise instability_. to be precise , consider the solution with dirac initial condition , .we say is pointwise stable if for some .we say is pointwise unstable if for some .using parabolic regularity , one readily finds * pointwise instability -instability * -stability pointwise stability the intermediate regime of pointwise stability and -instability is often referred to as _ convective instability _ , while pointwise instability is also often called _absolute instability_. from the estimates on the essential spectrum , we obtain that where and .this estimate is in fact sharp by spectral mapping theorems which hold for sectorial operators of the form considered here ; see for instance .on the other hand , for compactly supported on , the observation on exponential weights shows that with any in general , this estimate is not sharp in a fundamental way .for instance , the best choice of might depend on the wavenumber . in a more subtle way, unstable absolute spectrum may give rise to instabilities in _ any _ exponential weight while not generating pointwise instabilities in the above sense .we will discuss this particular discrepancy and resulting complications later when pointing out differences between pinched double roots and pointwise growth rates .we will concern ourselves with the one - dimensional case where and .we discuss multi - dimensional instabilities in section [ s : spmult ] .consider the one - dimensional restriction of ( [ e:0 ] ) , where and note that this condition is seen to be equivalent to ( [ e : cond ] ) for a suitable choice of coordinates in . 
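the distinction between convective and absolute instability can be made concrete numerically (our illustration, with parameters chosen only for definiteness). for u_t = u_xx - c u_x + r u with 0 < r < c^2/4, the essential spectrum reaches re(lambda) = r > 0, so norms of compactly supported initial data grow, while the pointwise growth rate r - c^2/4 is negative and the solution decays in every fixed window: a convective instability. the sketch below exhibits both behaviors with a simple explicit finite-difference scheme.

```python
import numpy as np

# Convectively unstable example (our illustration, not from the paper):
#   u_t = u_xx - c u_x + r u   with   0 < r < c^2/4.
# The essential spectrum reaches Re(lambda) = r > 0, so global norms grow,
# but the pointwise growth rate r - c^2/4 is negative: in any fixed window
# the disturbance is transported away faster than it grows.
c, r = 2.0, 0.5
x = np.arange(-50.0, 200.0, 0.1)
dx, dt = x[1] - x[0], 0.002
u = np.exp(-x**2)                     # localized initial bump

def step(u):
    ux  = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * (uxx - c * ux + r * u)

window = np.abs(x) <= 5.0             # fixed observation window around x = 0
for n in range(1, 15001):
    u = step(u)
    if n % 5000 == 0:
        print(f"t = {n*dt:5.1f}   max|u| in window = {np.abs(u[window]).max():.3e}"
              f"   global max|u| = {np.abs(u).max():.3e}")
# The windowed maximum decays (roughly like exp((r - c^2/4) t)) while the
# global maximum grows: norm-unstable but pointwise stable.
```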
in order to solve ( [ e:0 ] ) , we use the laplace transform to write the contour is initially chosen to be sectorial and to the right of the spectrum of .further , the resolvent is given as a convolution with the green s function (x)=\int_yg_\lambda(x - y)f(y){\mathrm{d}}y.\ ] ] the green s function in turn can be found via the inverse fourier transform of the fourier conjugate operator , .\ ] ] when has compact support we can evaluate ( [ e:1 ] ) pointwise , the contour can be any contour with the same asymptotics as when so that is analytic in the region bounded by and . in particular , we can choose provided that does not possess singularities in .this is much weaker than the typical condition for the boundedness of the resolvent in .similar to the discussion on exponential weights in ( [ e : ref ] ) , one could for instance compute the fourier transform ( [ e : f ] ) by shifting the contour of integration in the complex plane to .we can also construct directly using the reformulation of the resolvent equation as a first - order constant - coefficient ode for .denote by the greens function to this first order equation .then where is the projection on the first component and is the embedding into the last component .that is , and . the first - order green s functionis determined by the decomposition into stable and unstable subspaces and , with associated spectral projections .indeed , with denoting the ( linear , ) flow to ( [ e:1st ] ) , note that the definition of is unambiguous for large due to ellipticity of .moreover , . to see this, note that for large , eigenvalues of with multiplicities are ( to leading order ) roots of .this implies and by ellipticity of , ( [ e : ell ] ) .since , to leading order , roots come in pairs and , we have that .we are interested in the extension of the green s function and the subspaces as functions of and possible singularities in the form of poles or branch points .we first demonstrate that singularities of these objects occur simultaneously .[ l:1st-2nd ] the following regions coincide : * ; * ; * .since is analytic , is analytic and ( [ e : ge ] ) shows that .also , ( [ e:1n ] ) shows that .it remains to show that analyticity of implies analyticity of .we will therefore construct explicitly from and its derivatives .we need to solve , which we write more explicitly as we change variables for , setting , where we write .note that is left unchanged , and hence .then the system ( [ rhs ] ) becomes + we may then solve for by noting that the values for , , are obtained in a similar manner using ( [ tilde ] ) and the change of coordinates relating to . in order to relate the two green s functions and , we must write these expressions as convolutions against the inhomogeneous terms .since derivatives of these functions exist in the definition of , we must integrate by parts and this introduces derivatives of the pointwise green s function into the expressions .any derivative of that has order greater than can be eliminated by using the defining property of , that is , where is the dirac delta function and superscripts refer to derivatives .in this fashion , we are able to compute as a function of and its derivatives . 
for example , the first rows of are further entries can be computed by a straightforward , albeit tedious , calculation .all that remains is to show that the derivatives of involved in the definition of are analytic whenever is analytic .let be an interval .define the operator this operator is conjugate to the operator , which maps to where is the discrete forward difference operator . for and sufficiently small, is , in turn , -close to the operator which maps to .standard existence and uniqueness implies that is invertible and consequently both and are invertible for sufficiently small .we then have more importantly , is an analytic function of as an element of . then , for fixed derivatives up to order can be computed and are analytic .this establishes the analyticity of given the analyticity of and completes the proof of lemma [ l:1st-2nd ] .the above discussion roughly guarantees upper bounds for pointwise growth in terms of singularities of .this motivates the following definition .[ d : pgr ] we say is a _ pointwise growth mode _( pgm ) if is not analytic in at . the _ pointwise growth rate _( pgr ) is defined as the maximal real part of a pointwise growth mode , or , the following result shows that , in an appropriate sense , pointwise growth modes give sharp characterizations of pointwise growth .[ c : lb ] the pointwise growth rate defines generic pointwise growth of the solutions as follows .* _ upper bounds : _ for any , any compactly supported initial condition , and any fixed interval , we have for the solution with initial condition * _ lower bounds : _ for any , there exists and so that the solution with initial condition satisfies the upper bounds were established in lemma [ l:1st-2nd ] .lower bounds can be obtained indirectly .suppose that we had upper bounds of the form ( [ e : ub ] ) with for all .considering the solution with initial data , we find exponential decay for the fundamental solution .taking the laplace transform of yields the green s function which is analytic in , contradicting the assumption . for an -unstable system, we find a trichotomy : * if the pointwise growth rate is negative , we say that the instability is _ convective _ ; * if the pointwise growth rate is positive , we say that the instability is _ absolute _ ; * if the pointwise growth rate is zero , we say that the instability is ( pointwise ) _ marginal _ ; we will collect a set of examples that will serve as illustrations for some of the more subtle effects that we shall discuss later on . in these examples , we compute the green s function and and point out the correspondence between singularities of the green s function and singularities of from lemma [ l:1st-2nd ] . [ [ convection - diffusion . ] ] convection - diffusion .+ + + + + + + + + + + + + + + + + + + + + the first example that we consider is the scalar convection - diffusion equation , we follow the procedure outlined above . 
a laplace transform in time converts the pde into a system of first order ordinary differential equations which reads , adopting the notation from ( [ e:1st ] ) , the fundamental matrix associated to this system can be computed from the eigenvalues and eigenvectors of , let be the matrix whose rows are the eigenvectors corresponding to the stable left eigenvalues , and let be the matrix whose columns are the eigenvectors corresponding to the stable right eigenvalues .then .that is , we note that the eigenvalues are analytic and distinct away from .this implies that is analytic away from as well .the green s function for the first order equation , , is given through ( [ e : ge ] ) , and , since there is only one stable and unstable eigenvalue , we find as noted in lemma [ l:1st-2nd ] , possesses the same domain of analyticity as .finally , using the relation , we find that the green s function is given by in agreement with lemma [ l:1st-2nd ] , inherits the analyticity properties of : it is analytic except for a singularity at .[ [ cahn - hilliard . ] ] cahn - hilliard .+ + + + + + + + + + + + + + consider the following parabolic partial differential equation , this equation is encountered , for example , as the linearization of the cahn - hilliard equation at a homogeneous steady state . if the steady state is stable and if it is unstable with respect to long wavelength perturbations .we will compute the pointwise green s function and determine its singularities . after a laplace transform in time ,the system is a fourth order ordinary differential equation with four eigenvalues , we define the principle value of the square root to lie in the upper half of the complex plane. then the stable eigenvalues are for any eigenvalue , the stable right eigenvector is and left eigenvector is . using these vectors and the formula the projection onto this eigenspace is it is then straightforward to write down the stable projection as the sum of the stable projections associated to the two stable eigenvalues ( [ eq : stablech ] ) .explicit formulas for the first order green s function are rather involved , so we skip straight to the pointwise green s function , observe that has singularities when and when .we will focus on the singularity at .we will see that the nature of this singularity depends on the sign of . when , the singularity is removable .this can be seen as follows .consider a fixed value of and let with .since , the factors inside the roots in ( [ eq : stablech ] ) converge to a common value on the positive real axis . due to our choice of the principle value of the root, we then observe that the roots lie in the upper half plane and converge as to the values .consequently , the pointwise green s function has a finite limit as and the singularity is removable .this should be contrasted with the case when .now the two factors converge to a common point on the negative real axis as . upon taking the root ,the two factors converge to the purely imaginary value in the limit . as a result , the cancellation mechanism that was at play in the case of no longer holds and the singularity is a pole for .[ [ counter - propagating - waves . ] ] counter - propagating waves .+ + + + + + + + + + + + + + + + + + + + + + + + + + the following example illustrates an important subtlety .the subspaces are of course analytic as elements of the grassmannian whenever is analytic . 
however , the converse is not true since and may intersect .we consider the following system , as in the previous examples , we begin by transforming the system of second order equations into a system of first order equations .we use the standard ordering in contrast to ( [ e:1st ] ) and obtain the eigenvalues of this system are determined by the eigenvalues of the blocks corresponding to the and systems in isolation .there , we have the stable and unstable eigenvalues and eigenvectors give rise to the eigenspaces , following the general procedure outlined above , we obtain the stable projection as well as the green s function .the stable projection is , where the diagonal elements are the projections for the and sub - systems in isolation and are similar to the stable projection ( [ ps - conv - diff ] ) .the sub - matrix describes the effect of the coupling and is given by the explicit formula in terms of is , note that all components of the stable projection have a singularity at .only the projection matrix has a second singularity at .this singularity is a pole and , in this respect , is fundamentally distinct from the singularity at .with the projections determined , we can compute the pointwise green s function .we have , where for , and both possess a singularities at and . for ,the singularity at disappears .we also remark that the subspaces and are analytic in for all . at intersect non - trivially , but remain analytic .[ r : usc ] in all the previous example , one can verify explicitly that pointwise growth modes depend continuously on system parameters . in the present example of counter - propagating waves, however , a pointwise growth mode disappears at the specific value , so that the pointwise growth rate is only upper semi - continuous .we will generalize this observation later , lemma [ l : jump ] .[ r : cp ] this last example can be made even more obvious when abandoning the restriction to parabolic equations .neglecting diffusion , the system ( [ e : cp ] ) becomes a simple system of counter - propagating waves fix .for we have and , both of which define analytic families of subspaces in .however , since , we see that is not analytic at .one readily finds that , which has a simple pole at .this is reflected in the fact that as , which is typically nonzero .on the other hand , for , the stable subspace is simply , and is analytic .in this section , we discuss a slightly different concept of pointwise stability .we think of a nonlinear growth process that has created a competing , more stable state , which now forms an effective boundary condition for the instability .we therefore study pointwise growth in problems on the half - line with some arbitrary boundary conditions at , see section [ s:3.1 ] .we illustrate the relation to pointwise growth modes , continuing the previous examples , in section [ s:3.2 ] , and briefly discuss relations to the evans function in section [ s:3.3 ] .we will come back to the observations made here when discussing relations to nonlinear problems in section [ s : dis ] .we consider the parabolic equation ( [ e:01 ] ) on the half line , together with boundary conditions , where is linear with full rank , with -dimensional kernel .we assume that the boundary conditions give a well - posed system in the sense that for , sufficiently large . 
as a consequence, we can define as the projection along onto .we also need the transported projection .the green s function associated with these boundary conditions is in particular , is analytic precisely when is analytic .clearly , singularities and pointwise growth may depend on the boundary conditions .one may wish to separate the influence of boundary conditions from properties of the medium .for this purpose , we can consider the subspace as a complex curve in the grassmannian and discuss its singularities .[ l:1s ] we refer to singularities of as _ boundary right - sided pointwise growth modes _ ( bpgm ) and to singularities of simply as right - sided pointwise growth modes ( rpgm ) . the right - sided pointwise growth rate ( rpgr ) is defined as the maximal real part of a right - sided pointwise growth mode , or , considering and the unstable subspace , one can define left - sided pointwise growth .we will now collect some properties of bpgms and rpgms .first , we observe that right - sided pointwise growth modes are boundary right - sided pointwise growth modes . [l : sing ] singularities of are singularities of .suppose is analytic .then the range and kernel are analytic , in particular is analytic .there is a partial converse to lemma [ l : sing ] when we allow more general , dynamic , boundary conditions .[ l : sieq ] suppose that is analytic in the open region .then there exists an analytic family of boundary conditions so that the associated projection is analytic in . we need to find a complementary subspace to .such analytic complements always exist provided the domain is a stein space ; see for instance ( * ? ? ?* thm 1 ) and references therein .open subsets of are stein manifolds by the behnke - stein theorem ; see for instance .the following lemma clarifies the relation between right - sided pointwise growth and pointwise growth .[ l : frpgmpgm ] singularities of are singularities of .analyticity of the projection implies analyticity of its range .the converse is true generically ( see the discussion in section [ s : alg ] ) but not true in general ; see the example on counter - propagating waves , below . in order to determine analyticity of the subspace , one can use local charts in the grassmannian , for instance writing the subspace as a graph over a reference subspace , effectively embedding the grassmannian locally into .alternatively , one can use the plcker embedding into differential forms , working globally , albeit in a high - dimensional space .it is useful to recall our original motivation .suppose we are given a parabolic equation of the form ( [ e:01 ] ) with pointwise growth rate determined from a maximal pointwise growth mode .we now restrict ourselves to the positive half - line , imposing boundary conditions at is such a way that the initial value problem is well - posed .the question is whether boundary conditions can be selected so that the right - sided pointwise growth rate is strictly less than the pointwise growth rate .that is can boundary conditions be selected so that faster pointwise rates of decay are observed for the problem on the half - line than for the problem on the whole real line ?the answer is given in the lemmas above . to be precise ,let be the pointwise growth mode with maximal real part. then if is also a rpgm then lemma [ l : sing ] implies that is also a bpgm and faster rates of decay are not possible by selecting appropriate boundary conditions . 
on the other hand , if is a pgm but not a rpgm then lemma [ l : sieq ] guarantees that boundary conditions exist for which faster rates of decay are observed .we now return to the series of examples introduced in section [ sec : examples ] . to begin , we compute the rpgms for the convection - diffusion and cahn - hilliard examples , showing the rpgms always coincide with pgms in these two examples .next we consider the counter - propagating waves example .this is a particularly rich example that demonstrates that pgms , rpgms and bpgms are not necessarily equivalent . having computed the rpgm , we will turn our attention to finding suitable boundary conditions such that faster pointwise rates of decay are observed on the half - line than for the same problem on the whole real line . [ [ convection - diffusion.-1 ] ] convection - diffusion .+ + + + + + + + + + + + + + + + + + + + + the stable subspace is given by , which possesses a singularity at the pointwise growth mode .[ [ cahn - hilliard.-1 ] ] cahn - hilliard .+ + + + + + + + + + + + + + for , the two - dimensional subspace possesses a singularity at .in fact , the subspace is spanned by , where are the stable eigenvalues from ( [ eq : stablech ] ) . at , distinct , so that we can write the subspace as a graph from into , represented by the square matrix which is not analytic since is not analytic .one can similarly see that all other pointwise growth modes correspond to singularities of and therefore all pgms are rpgms ( and therefore bpgms ) .[ [ counter - propagating - waves.-1 ] ] counter - propagating waves .+ + + + + + + + + + + + + + + + + + + + + + + + + + we computed the stable subspace in ( [ e : esu ] ) .we can write the stable subspace globally as a graph over with values in , which gives the matrix representation we clearly see singularities where the diagonals are singular , i.e. at .when , we also see a singularity when , .however , such a singularity does not occur along the principal branch of the square root , so that in this case , the pointwise growth mode at is _ not _ a right - sided pointwise growth mode . when considering the counter - propagating waves example with the drift directions switched in both the and component, we see that is singular at , when , .we therefore need to analyze the subspace in a different coordinate system of the grassmannian .writing the stable subspace as a graph over into , we find the matrix representation which shows analyticity near , . as was the case above , the pointwise growth mode at is not an rpgm .we note that is not continuous in at .a somewhat lengthy calculation shows that the right - sided pointwise growth modes need not be continuous when adding bidirectional coupling , as we noticed , at , right - sided pointwise growth modes are located at .for , small , right - sided pointwise growth modes are `` created '' at , located at . rather than exhibiting the lengthy calculations that reveal the singularity , we refer to remark [ r : rpgmnotc ] in the following section , where this fact is shown using the fact that for , the growth mode is in some sense simple .again , all of the above can be observed in the simpler example of hyperbolic transport from remark [ r : cp ] .the stable subspace is given through is analytic at , both for and for , when .again , notice that the stable subspace is not continuous in at .[ [ counter - propagating - waves - bpgm - versus - rpgm . 
] ] counter - propagating waves bpgm versus rpgm .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + consider example ( [ e : cc ] ) with .we will impose boundary conditions at and investigate pointwise stability of the zero state .we know that the example ( [ e : cc ] ) , with , has a pointwise growth mode at and therefore the dynamics are pointwise marginally stable on the whole real line .we also observed that is not a rpgm .based upon this , we expect to observe the following dynamics * marginal pointwise stability on ; * exponential decay on for suitable boundary conditions ; lemma [ l : sieq ] .since the stable subspace can be written as a graph over , dirichlet boundary conditions , are always transverse to and therefore guarantee exponential decay .phenomenologically , compactly supported initial conditions in are transported towards the boundary where the dirichlet condition causes exponential decay .the equation is dominated by transport away from the boundary , where is `` fed '' into the system through the boundary condition , which again causes exponential decay . reversing the drift direction , that is , considering we observe that the stable subspace at is given by , entirely contained in the -equation .consider boundary conditions at of the form , .it is straightforward to verify that these boundary conditions are transverse to at .thus , is not a bpgm .the bpgms can be determined explicitly from the singularities of this projection has a singularity at for and we expect to observe pointwise exponential decay with this rate for any .phenomenologically , we solve the -equation with homogeneous dirichlet boundary conditions , observing transport to the boundary at , with source .since such a system will decay exponentially in the absence of a source and we expect to relax to a constant , say , away from the boundary , introducing a boundary layer with positive slope .this in turn generates a _ negative _ boundary source in the -equation through , which then propagates to the right in the medium , generating a negative source in the -equation .this will decrease the average of and therefore the slope of the boundary layer until eventually the system approaches zero locally uniformly exponentially .see figure [ fig : rightbc ] .component in ( [ e : cc ] ) . onthe left is a simulation of the case when ( dirichlet boundary conditions ) . on the rightis a simulation with .when , the component remains nonzero . for , pointwise exponential decayis observed with rate ., title="fig:",scaledwidth=49.0% ] component in ( [ e : cc ] ) .on the left is a simulation of the case when ( dirichlet boundary conditions ) . on the rightis a simulation with .when , the component remains nonzero . 
for , pointwise exponential decayis observed with rate ., title="fig:",scaledwidth=49.0% ] we note that the immediate choice of transverse boundary conditions would make the system ill - posed .on the other hand , choosing separated boundary conditions , , such as the dirichlet boundary conditions above , would necessarily yield a non - zero intersection with the stable subspace that spans .this leads to a zero eigenvalue .still , we have demonstrated how appropriate choices of boundary conditions in the sense of lemma [ l : sieq ] can stabilize the system .more generally , our point here is that any instability would be generated by the boundary conditions in the sense that the pointwise growth rate depends non - trivially on the details of the boundary condition , unless it is simply . without diffusion, negative coupling through the boundary can be achieved via at , which gives a boundary subspace which is transverse to the stable subspace or , respectively , in .one can also verify that the system is well - posed as the right - hand side generates a strongly continuous semi - group . in the resolvent set ,singularities of the green s function correspond to eigenvalues and the order of the pole can be related to multiplicities .zumbrun and howard showed that this relation continues in a pointwise sense .that is , they showed that one can define projections on eigenvalues in a pointwise sense using the pointwise green s function .our situation here is much simpler since we assume translation invariance , so that the green s function acts simply through convolution . on the other hand ,our main interest here is in singularities of the green s function caused by `` asymptotic '' stable and unstable eigenspaces which were excluded in . insisting on an evans function approach, one could track -dimensional subspaces in using differential forms .differential forms solve an induced linear equation with equilibria corresponding to invariant subspaces .singularities of the green s functions are induced by two mechanisms : singularities of the stable or unstable subspaces , and intersections of stable and unstable subspaces .right - sided pointwise growth modes track precisely the first type of singularities .pointwise growth modes combine both types of singularities and singularities correspond to either branch points or roots of the evans function . in an evans function approach ,one usually regularizes singularities of by going to appropriate riemann surfaces and tracks intersections of and ( or ) by forming a wedge product of the differential forms associated with the two subspaces .in this section , we review the more common approach to pointwise stability based on pinched double roots of the dispersion relation . we compare pinched double roots with pointwise growth modes and we illustrate differences in examples . the more common approach to stability and instability problems uses the fourier transform to reduce the stability problem to a parameterized family of matrix eigenvalue problems , , with roots precisely where we call the dispersion relation .there are roots of for fixed , whereas there are roots for fixed . 
using ellipticity of , we find that for , there are precisely roots with and roots with .note that the roots are in general not analytic in .non - analyticity can occur only when at least two of the roots coincide .this occurs in the case of multiple zeros of the dispersion relation .[ d : dr ] we say is a double root of the dispersion relation if we say that is a simple double root when and .one readily verifies that simple double roots are simple as solutions to the complex system ( [ e : dr ] ) .[ d : pi ] we say that a double root is _ pinched _ if there exists a continuous curve , with strictly increasing , , for , and continuous curves of roots to so that in analogy to algebraic and geometric multiplicities of eigenvalues , one can think of pinched double roots as _ algebraic _ pointwise growth modes , as opposed to _geometric _ pointwise growth modes that characterize singularities of the green s function .we then refer to the largest real part of a pinched double root as the _ algebraic pointwise growth rate_. the following lemmas and remarks relate pinched double roots and pointwise growth modes .[ l : p - p ] let be a pointwise growth mode .then there exists so that is a pinched double root .suppose there are no pinched double roots with .then the subspaces and correspond to different spectral sets of and can therefore be continued in an analytic fashion as complementary eigenspaces , contradicting the assumption of a pointwise growth mode with . together with lemma [ l : frpgmpgm ] , this gives rpgm pgm pdr .[ l : p - p1]if is a simple pinched double root , then is not analytic at . in a simple pinched double root , we have expansions , where we assumed without loss of generality that . because , the eigenvectors to the eigenvalues become co - linear at , since otherwise would have a two - dimensional null - space and . if the eigenspace were analytic , we could trivialize it locally by an analytic change of coordinates and consider the eigenvalue problem within this one - dimensional eigenspace , which clearly guarantees analyticity of the eigenvalue , thus contradicting the presence of a simple pinched double root . as a consequence ,the stable subspace is not analytic .together with lemma [ l : frpgmpgm ] , this also gives `` simple pdr '' pgm .[ r : not ] we will see in the example of counter - propagating waves , below that the assumption of a simple pinched double root is indeed necessary . in particular , there are pinched double roots that do not correspond to pointwise growth modes .in other words , pinched double roots may overestimate pointwise growth rates , albeit only in non - generic cases , when pinched double roots are not simple .[ r : rpgmnotc ] we can use lemma [ l : p - p1 ] to see that right - sided pointwise growth modes are not continuous . in ( [ e : cc ] ) , there is a double pinched double root at for , which does not give rise to a singularity of the stable subspace .upon perturbing to , we find two simple double roots at , , and therefore a one - sided pointwise growth mode .we compute pinched double roots for the examples in section [ sec : examples ] and contrast them with the singularities found in the pointwise green s function .we emphasize the example of counter - propagating waves , which possesses a pinched double root without having a pointwise growth mode at .[ [ convection - diffusion.-2 ] ] convection - diffusion. 
+ + + + + + + + + + + + + + + + + + + + + the dispersion relation and its derivative with respect to are setting both equations equal to zero we find there is only one double root at .writing the roots of the dispersion relation as we see that the double root occurs when the terms inside the square root vanish . taking we find that this double root is pinched .note that the pointwise green s function derived in section [ sec : examples ] has a singularity at precisely this point .thus , in this example pinched double roots and algebraic pointwise growth modes are equivalent , according to lemmas [ l : p - p ] and [ l : p - p1 ] .[ [ the - cahn - hilliard - linearization . ] ] the cahn - hilliard linearization .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the dispersion relation and its derivative with respect to are the double roots occur for and . from find four roots of the dispersion relation , when , then the two stable roots are recall that we have taken the principle value of the square root to lie in the upper half of the complex plane . owing to this , when the double root at involves the roots and is pinched .therefore , when there exist algebraic and pointwise growth modes in the right half plane . when , the two roots involved in the double root at are the two stable roots above and therefore not pinched .again , pinched double roots and pointwise growth modes ( [ eq : ptgch ] ) coincide for , in accordance with lemmas [ l : p - p ] and [ l : p - p1 ] .note also that for , there is a multiple pinched double root at .more precisely , =0 at this double root. there also is a pointwise growth mode at , although lemma [ l : p - p1 ] does not guarantee the existence .[ [ counter - propagating - waves.-2 ] ] counter - propagating waves .+ + + + + + + + + + + + + + + + + + + + + + + + + + the dispersion relation and its derivative with respect to are the double roots are at and .they all pinch for any value of .however , from the projection , we notice that is no longer a pointwise growth mode for .this example shows that pinched double roots may not give rise to pointwise growth modes , as announced in remark [ r : not ] .[ r : cont ] we observe in this example that pinched double roots `` resolve '' the discontinuity observed in remark [ r : usc ] : pinched double roots are continuously depending on in this example , while a pointwise growth mode disappears at and the pointwise growth rate jumps .in this section , we give rough -bounds , existence results and some counting results for pointwise double roots and pointwise growth modes , section [ s:5.1 ] , and study continuity properties in section [ s:5.2 ] .recall that the operator is invertible on , say , if and only if is invertible for all .then is given by the zero set of , .similarly , we found spectra in exponentially weighted spaces from roots of .[ l : ess ] pinched double roots are bounded to the right by the essential spectrum .more precisely , for any algebraic pointwise growth mode , the pinching path necessarily intersects the essential spectrum for any . from an algebraic pointwise growth mode, we can follow as . since and , there is so that either or . since , .of course , bounds on pinched double roots imply bounds on pointwise growth modes and right - sided pointwise growth modes by lemmas [ l : frpgmpgm ] and [ l : p - p ] . 
exploiting ellipticity, we can show that ( and therefore all pinched double roots ) is contained in a sector .the simple example shows that not all well - posed pdes possess algebraic pointwise growth modes . on the other hand ,the class of parabolic equations that we have focused on do .[ l : ex ] the parabolic pde ( [ e:0 ] ) possesses at least one finite ( right - sided ) pointwise growth mode .we show that the projection can not be analytic on and conclude from the construction that the range can not be analytic either .we therefore compute the leading order expansion for large and show that the resulting function can not be an analytic function of .more precisely , we find that the analytic continuation of along the circle , large , fixed , does not result in a univalent function , indicating at least one branch point singularity inside the circle .for this , we expand the dispersion relation in , setting and , to find denote by , the eigenvalues of .the roots are therefore given as here , we use the branch of the root fixing the positive real line .ellipticity guarantees that , which then readily implies that for there are precisely roots with and roots with , thus giving rise to projections on .the set of roots corresponding to can be continued in for all , so that the projections possess an analytic extension via dunford s integral . inspecting the formula for the , we notice however that for , the roots are multiplied by , approximately , and therefore , given by the analytic continuation from to , equals at . as a consequence , is not analytic in , in fact not even well - defined on .this shows existence of a singularity of and the existence of a pointwise growth mode as claimed .clearly , the range of gives the analytic extension of the stable subspace , which therefore can not be analytic , either .counting algebraic growth modes is difficult because of the pinching condition .the following example shows that the number of algebraic growth modes can actually jump .consider again with dispersion relation .one readily finds double roots at for , one can readily see that the purely imaginary pair of roots split off the imaginary axis and converge to as is increased to infinity , so that those two roots are pinched . on the other hand ,the two roots that create the double root at split into two imaginary roots as increases and eventually meet , separately , other roots when passes , and it is impossible to continue the specific root beyond this point .however , choosing any path from to off the real axis , one sees that the roots initially split with opposite real parts . since the path does not cross the ( real ) essential spectrum , the roots actually pinch along any such path . in summary , there are two pinched double roots , for all , and the third double root pinches along any non - real path . for , one readily sees that pinches .the other two double roots , however , can not pinch since the essential spectrum is to the left of the double root .pinching paths for these double roots would need to pass around the origin . 
in summary, there are two pinched double roots for and only one pinched double root for .one can however count double roots in general .general double roots are solutions to the system of polynomial equations , which in turn can be counted using bzout s theorem or resultants .exact counts need to take multiple roots into account , where multiplicity can be defined via an algebraic intersection number ( see ) or topologically via brower s degree .[ l : res ] consider a system of strictly parabolic equation of order ( [ e:01 ] ) and assume that the eigenvalues of are all algebraically simple .then the dispersion relation possesses precisely double roots , counted with multiplicity .the assumption on the eigenvalues of guarantees that double roots are uniformly bounded for bounded matrices , .one can therefore perform a homotopy to a simple , homogeneous equation with , for which double roots can be computed explicitly as follows .first , double roots of multiplicity arise when .second , double roots of multiplicity 2 arise for values when two factors are equal , which occurs when , which yields a total of roots . together with the double roots at , we find the desired total of .we remark that the result also follows from a direct computation of the resultant of and with respect to the variable , or from bzout s theorem .in fact , substituting in the dispersion relation , one finds that there are no intersections at infinity , which assures that the number of roots ( in ) is given by the product of the degrees of and , .dividing by then gives the number of roots .consider again the fourth - order evolution with dispersion relation .the derivative yields precisely three double roots , counted with multiplicity , which corresponds to at .multiple eigenvalues of can produce continua of double roots , as the example of scalar diffusion in , with double roots , arbitrary , shows .3 . for ,corresponding to a system of convection - diffusion equations with scalar diffusion , one finds double roots at , and a double double root at .in particular , there are 4 double roots for , 2 double roots for , and infinitely many double roots for .the following lemma establishes that the sudden increase in the pointwise growth rate upon arbitrarily small perturbations as exemplified in remark [ r : usc ] is the only type of discontinuity that can occur .[ l : jump ] pointwise growth rates are lower semi - continuous .we need to show that pointwise growth modes are robust .more specifically , we show that they can not disappear under arbitrarily small perturbations .pointwise growth modes correspond to singularities of the projection .these possess puisseux - expansions near singularities .in particular , the winding number of at least one coefficient of is not a positive integer , also for small perturbations . as a consequence, can not be analytic for nearby systems in a neighborhood of a singularity .similar difficulties occur when considering right - sided pointwise growth rates .we mentioned in the example of counter - propagating waves in section [ s:3.2 ] that right - sided pointwise growth rates need not be continuous in system parameters ; see also remark [ r : rpgmnotc ] , below .the following lemma establishes lower semi - continuity .[ l : jump2 ] right - sided pointwise growth rates are lower semi - continuous .similar to the proof of lemma [ l : jump ] , we suppose that possesses a singularity at . 
by compactness of the grassmannian, there exists an accumulation point .near , the stable subspace possesses a puisseux expansion and the winding number of at least one coefficient is not a positive integer , a fact that persists upon perturbing .algebraic pointwise growth rates are easier to control , as the following result shows .[ l : robust ] assume that there there are finitely many double roots .then algebraic pointwise growth rates are robust .more precisely , consider a parameterized family of operators such that the coefficients of the differential operator depend continuously on .then the algebraic pointwise growth rate is continuous in .assume that is a pinched double root at .without loss of generality , assume .we claim that there is a pinched double root nearby for sufficiently small . for all , there are finitely many double roots in a small neighborhood of the origin that all converge to the origin as . at , we have finitely many roots , , of the dispersion relation that converge to the origin as .also , there is so that the roots are analytic functions , when . here, we again use the standard cut of the square root , so that when .since we assume that the origin is a pinched double root , we have that , say , , and , for and .define clearly , double roots correspond to solutions of the analytic system of equations in two variables .note that is an isolated solution to this system , since otherwise an analytic function would vanish identically for some , contradicting the fact that was a pointwise growth rate . as a consequence ,the brower degree is positive .next , consider with sufficiently small .for , not too small , there are roots which converge to as . define we now argue by contradiction .we assume that there are no pinched double roots in a neighborhood of for values of arbitrarily small . under this assumption ,we claim that are analytic . to see this, first notice that for small , the distance between roots is bounded from below , we now choose a family of curves as the boundary of a union of small balls around the roots of interest , we then define the analytic functions from cauchy s theorem , we find that are the symmetric power sum polynomials in the eigenvalues , using newton s identities , we can express all elementary symmetric polynomials in terms of power sums . in other words , the generate the ring of symmetric polynomials .the coefficients of , viewed as a polynomial in , are symmetric polynomials and can therefore be expressed in terms of the , which shows that is analytic in and .completely analogously , we find that is analytic . by assumption , we know that the complex system does not possess solutions for close to zero .we will now produce a contradiction by showing that and are homotopy equivalent on the boundary of a small neighborhood of , that is , there is a homotopy between the two equations that does not possess roots on the boundary .we specifically consider the boundary of , , with small .we decompose the boundary into two parts , where and , respectively .in region , we fix sufficiently small and notice that , as , and converge to in region . as a consequence , for sufficiently small , and are homotopy equivalent via a straight homotopy on region . with and fixed as above ,we now consider region .the groups of roots and are well defined and separated by a finite distance for .the construction of from above is therefore continuous in , so that is continuous in on .the same reasoning applies to and . 
taken together, we conclude that and possess the same degree on , a contradiction to the fact that the degree of vanishes and the degree of does not .this concludes the proof of continuity of algebraic pointwise growth modes .[ r : p = p ] using thom transversality one can see that generically all double roots are simple . by lemma [ l : p - p1 ] ,simple pinched double roots are pointwise growth modes , which implies algebraic pointwise growth rates generically equal pointwise growth rates and pointwise growth rates generically are continuous .in this section we exploit pointwise stability concepts to characterize spatial spreading of instabilities .some of the following definitions and results are contained in but will be repeated here to make the discussion more accessible .we are interested in unstable states , .instability of the spectrum implies that localized perturbations will grow exponentially in the -norm .however , they may decay in a localized window of observation .slightly generalizing from the previous discussion , we now allow this window of observation to move with speed .that is , we consider ( [ e:0 ] ) in a comoving frame of reference and study pointwise growth depending on . in the following ,we choose to rely on pinched double roots as criteria for pointwise growth .as we saw , pinched double roots may overestimate pointwise growth rates . on the other hand ,pinched double roots are technically easier to work with , giving in particular continuity of growth rates .we are interested in the set of speeds for which there are pinched double roots in of the dispersion relation in a comoving frame , [ d : ss ] we say that is the spreading speed ( to the right ) of ( [ e:0 ] ) if note that , by the previous discussion , the system will typically ( that is , whenever the algebraic pointwise growth modes are pointwise growth modes ) be pointwise unstable in frames with speed less than but arbitrarily close to . in this context , it will be helpful to think of group velocities in a generalized fashion . [d : cg ] let and .then we define the group velocity as one readily verifies that in a comoving frame .moreover , implies and hence the presence of a double root .therefore , is a double root in a suitable comoving frame when is real .[ l : max ] let be an element in the essential spectrum with extremal real part , that is , * is simple , , ; * is maximal , for , ; * is locally extremal : the locally unique eigenvalue has .then is an algebraic pointwise growth mode in a frame with speed , with . without loss of generality , .since is simple , we can locally solve for .we claim .notice that passing to the comoving frame shifts the essential spectrum as .since was extremal ( in fact a global maximum ) in the steady frame , it is therefore also extremal in the comoving frame , which proves the claim .next , notice that since we passed to a frame where .therefore , corresponds to a double root .we need to show that the double root is pinched .we therefore choose such that , and solve for roots , , using the newton polygon .since , we have . in particular , there is a root with and a root with along the curve , when or when . the case , can be excluded since in this case so that the essential spectrum lies on a curve , contradicting the assumption that was a maximum of the essential spectrum .the lemma guarantees that the supremum in the definition of the spreading speed is not taken over the empty set provided that the essential spectrum is unstable . 
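definition [ d : ss ] can be turned into a small computation . the sketch below finds the spreading speed of the scalar kpp - type linearization u_t = u_xx + r u , an example of our own choosing rather than one treated above ; in the frame moving with speed c the dispersion relation is d_c ( lambda , nu ) = nu^2 + c nu + r - lambda , whose only double root sits at nu = - c / 2 , lambda = r - c^2 / 4 and is pinched , and the classical value c = 2 sqrt ( r ) serves as a check .

import numpy as np

r = 1.0                                  # linear growth rate (arbitrary choice)

def pinched_double_root(c):
    """double root of d_c(lam, nu) = nu**2 + c*nu + r - lam in the frame moving with speed c."""
    nu = -c / 2.0
    lam = nu**2 + c * nu + r             # = r - c**2/4
    return lam, nu

# bisect for the largest speed at which the pinched double root is marginally stable
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lam, _ = pinched_double_root(mid)
    if lam > 0.0:                        # still pointwise unstable in this frame
        lo = mid
    else:
        hi = mid
print(f"numerical spreading speed c* = {lo:.6f}")
print(f"analytic value 2*sqrt(r)     = {2.0 * np.sqrt(r):.6f}")

for systems , the same bisection can be wrapped around a numerical double - root finder of the kind sketched earlier .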
on the other hand , spreading speeds are finite in parabolic equations such as the one considered here . [l : sinf ] there exists so that all pinched double roots lie in for . for sufficiently large , we claim that for sufficiently negative is located in .scaling , , and , we find that dropping hats , setting , and substituting the eigenvalues for the matrix , we find at leading order so that for all after exploiting . introducing lower order terms and going back through the scaling , we find that for speeds , sufficiently large , and weight , the real part of the essential spectrum is bounded by for some . spreading speeds near unstable statesare well defined and finite . in a frame moving with , there is a pinched double root , located on the imaginary axis .spreading speeds are finite and well - defined by lemmas [ l : sinf ] and [ l : max ] .continuity of pinched double roots gives the existence of a pinched double root on at .[ r : snu ] if the pinched double root on the imaginary axis at is simple , we can infer that .in fact , locally near the double root , the dispersion relation has the expansion , which shows that the double root moves as . since for , we can conclude .[ r : gen ] of course , one can also define spreading speeds to the left , most easily by reflecting and computing in the reflected system .more precise information on the spreading behavior is contained in spreading sets , which are subsets of so that there are unstable pinched double roots in for .continuous dependence of pinched double roots on shows that the complement of is open .[ r : ref ] suppose that the system under consideration possesses a reflection symmetry , possibly combined with an involution , .equivalently , , so that . we can then write of course , in this case , that is propagation to the right and to the left are equivalent .also , group velocities vanish when , since . also , and is robust since we can solve as a real equation . in this case ,group velocities are purely imaginary and automatically vanish when is extremal . on the other hand , for complex and , group velocities do not vanish in general .such examples arise in local instabilities with nonzero frequency and wavenumber , sometimes referred to as turing - hopf ; see for instance .[ l : usc ] spreading speeds are upper semi - continuous with respect to system parameters . fix a system with spreading speed .continuity of pinched double roots and parabolicity imply that for any there exists so that for any pinched double root and all .continuity of with respect to system parameters and a priori upper bounds on imply that for systems that are -close , for all pinched double roots and all .this implies that for all nearby systems , thus establishing upper semi - continuity .consider the dispersion relation factors , double roots from the first factor are pinched and stabilize at .double roots from the second factor are also pinched but always stable when , with nonnegative real part at for , .double roots resulting from collisions of roots from the first and second factor solve , hence yield for and . in summary , we have that for , , but for , . 
adding an equation yields a reflection - symmetric example , .one can also construct examples without gradients exploiting the fact that the group velocity of marginally unstable modes in turing - hopf instabilities is typically non - zero at onset .one can also see from this example that continuity can not be achieved with small modifications in the definition , such as taking the supremum over speeds where .the spreading of an instability in multi - dimensional space often occurs in the form of roughly radial propagation of disturbances .after initial transients , such behavior can be well described by the unidirectional propagation of a possibly transversely modulated planar interface .one can understand such behavior by studying spreading behavior into a fixed direction , say the -coordinate , of a mode that is extended in the -direction .more precisely , we consider modes that are modulated in the -direction in the form . for the sake of notation , we restrict to and consider the parabolic equation with initial conditions that are localized in , , which gives the parameterized family of equations we can now repeat the discussion of one - dimensional systems and define spreading speeds for each fixed .parabolicity implies that ( [ e:11 ] ) is stable for sufficiently large .since depends upper semi - continuously on system parameters , lemma [ l : robust ] , we can conclude that attains its maximum at some finite .[ d : trans ] we say that the invasion process is transversely modulated if does not attain its maximum at .there then exists so that for all where is defined , and .we then call a transverse selected wavenumber of the invasion process. one can easily construct examples where in anisotropic systems , considering for instance with , where modes with are in fact stable .such invasion processes are often observed when one - dimensional patterns , such as roll solutions in convection experiments , are conquered by hexagon patterns through an invasion process , effectively breaking the transverse -translation symmetry .a more interesting question is whether transversely invasion processes occur in isotropic systems , which will be the topic of the remainder of this section .we consider systems that are isotropic , that is , invariant with respect to rotations and reflections .again , we restrict to for simplicity of exposition , the discussion readily generalizes .we say that ( [ e:00 ] ) is isotropic when is a solution if and only if is a solution for any , and is a representation of on .equivalently , for any .the ansatz now gives the dispersion relation in the isotropic case , the previous discussion implies that only depends on the length of the wave vector , so that it can be expressed as a function of , only .this extends to complex wavenumbers so that for simplicity of notation , we will drop tildes in the following and write , with .pinched double roots in -comoving frames solve our main result shows that there are no transversely modulated invasion processes .[ t:1 ] transversely modulated invasion processes do not exist .more precisely , attains its maximum at .[ r : nol ] we stress that the theorem concerns _ linear _ predictions in the leading edge .it has indeed frequently been noticed that for all those invasion processes , stripes parallel to the front interface dominate the leading edge of the front ; see for instance .of course , nonlinear systems may well exhibit transversely modulated patterns in the wake of a primary invasion .our point here is that the 
emergence of transverse modulation is a nonlinear phenomenon , caused by secondary invasion or fast nonlinear , so - called pushed fronts .we refer to the discussion sections for more details .the proof of this result will occupy the remainder of this section .the key calculation is an implicit differentiation of the dispersion relation which reveals that is strictly decreasing in . since , in general , spreading speeds may not be differentiable in the parameter , we approximate the dispersion relation by a nearby dispersion relation where this dependence is piecewise smooth . for the approximation ,we rely on transversality and perturbation arguments , while keeping the special structure of the dispersion relation dictated by isotropy . to be precise , we consider dispersion relations , where denotes the coefficients of the complex multivariable polynomial . in this notation, we define we sometimes write , , we also consider note that or vanish precisely at multiple double roots .we introduce coefficients of explicitly via the expansion at the origin , we can now calculate derivatives of with respect to those coefficients : consider now the domain of excluding , so that our goal is to move the parameters to ensure that the do not vanish .we will accomplish this by using transversality .we adopt the usual definition , where a smooth map between smooth manifolds is transverse to a smooth submanifold of if for all . in our specific case is a point , and and are open subsets of and , respectively .transversality is then equivalent to the fact that the derivative is onto .[ l : t0 ] the maps and , considered on domains defined in ( [ e : g ] ) , are transverse to .more specifically , and are onto . inspecting the formulas for partial derivatives ( [ e : der ] ) shows that , are linearly independent over as long as , which establishes that the range of is real 6-dimensional .this implies transversality of to .similarly , , are linearly independent over and is transverse to . using sard s transversality theorem , we can conclude that the restriction to a fixed parameter is transverse for a residual , in particular dense , subset of parameter values .[ c : t0 ] for all in a residual subset , sard s transversality implies that are transverse to for in a residual subset of .since the domain is 5-dimensional and the target manifold is 6-dimensional , the linearization can not be onto , hence transversality implies that is not in the image .we will use a very similar transversality argument to exclude at double roots , provided that .consider therefore on we also define analogously maps via restriction .[ l : t1 ] the map considered on , is transverse to .more specifically , is onto . for , and linearly independent over . for , and linearly independent .[ c : t1 ] for all in a residual subset , again , we conclude from sard s transversality that is transverse in a residual subset . 
since the target space is ( real ) 4-dimensional , the domain only 3-dimensional , transversality implies that there are no roots of .we say that a solution of is a simple spreading speed if is invertible at the solution .[ p : t ] for all , spreading speeds are simple unless .we compute here , the first two columns are acting on and the last column on .note that , considered as a map on , this matrix is invertible provided since and do not vanish at solutions for , corollary [ c : t0 ] .we next claim that is not possible for a spreading speed when .note that corollary [ c : t1 ] guarantees this fact in the case , only .suppose therefore that are a root of .using isotropy , one directly verifies that another solution is given by in the substitution , one exploits that , so that the arguments of remain the same upon substitution .note also that since .summarizing , we have found a solution , which however was excluded by corollary [ c : t1 ] , with the exception of the case .this proves the lemma . as a consequence ,choosing , we find that solutions to come as smooth curves , with end points ( and possible singularities ) only at .also , on these curves unless .[ l : mon ] suppose and let be a generalized spreading speed with . then recall that expanding near a solution and denoting by and the increments , we find at first order exploiting ( [ e : fu ] ) we find and , taking real parts , differentiating gives and the desired result .we argue by contradiction .consider a dispersion relation , associated polynomial coefficients , so that for some .we would like to consider systems with .therefore , first modify the dispersion relation setting , for some sufficiently small , and write for the associated vector of coefficients .since this perturbation merely shifts values of , double roots are simply shifted by .now choose -close to .by continuity of pinched double roots , the real part of pinched double roots for will be strictly larger than the real part of double roots for as long as . as a consequence , the associated spreading speeds and satisfy . using upper semi - continuity , lemma [ l : usc ] ,we conclude that , arbitrarily small provided are sufficiently small .in particular , since the spreading speed is realized by a finite number of pinched double roots on the imaginary axis , all of which satisfy the monotonicity formula from lemma [ l : mon ] , the spreading speed is strictly decreasing for each .this contradicts our assumption and proves the theorem .we summarize our results , section [ s : sum ] , and comment on systems without translation symmetry in section [ s : inh ] .we then comment extensively on challenges with nonlinear systems , section [ s : nonl ] , and conclude with a short outlook in section [ s : con ] .we considered generalized spectral indicators for pointwise growth and associated growth rates . 
for linear systems , pointwise growth modes ( pgm )determine exponential decay and growth in a finite window of observation for a system on the real line .pointwise growth modes correspond to singularities of pointwise projections .when the domain is the positive half line , right - sided pointwise growth modes ( rpgm ) take this role , at least for suitable boundary conditions .right - sided pointwise growth modes correspond to singularities of the stable subspace and are a subset of pointwise growth modes .pinched double roots ( pdr ) are defined via determinants rather than matrices and determine pointwise growth only in generic situations .as opposed to ( one - sided ) pointwise growth modes , they are however continuous with respect to system parameters . from an algorithmic point of view , one can compute double roots ( dr ) , then specialize to pinched double roots , and finally check on the presence of pointwise and right - sided pointwise growth modes ; we refer to for computational aspects of the first steps in this procedure .we gave a number of examples that highlight the difference between these concepts .a key role was played by the example of counter - propagating waves ( cpw ) , the following table summarizes some of our results , listing existence of growth modes or pinched double roots at in the examples , as well as continuity , semi - continuity , and availability ( and continuity ) of counts . in most examples that we have encountered ,double roots appear to be most amenable to explicit analysis. the pinching condition can be more cumbersome to analyze .pointwise growth modes and right - sided pointwise growth modes need only be computed in the non - generic cases when multiple double roots determine growth . in such cases , one can focus on a _ local _ analysis near the pinched double root and compute or , which can then often be split in singular and non - singular subspaces , analytic . based on pinched double roots , we defined spreading speeds as maximal speeds of comoving frames with marginally stable pinched double roots .we do not know if pointwise growth modes or right - sided pointwise growth modes are continuous with respect to changes in the laboratory frame . as a consequence, a definition of spreading speeds based on these more subtle concepts would be less workable at this point . as an application , we studied linear spreading speeds in two - dimensional domains , depending on a transverse wavenumber .we showed that linearly determined , transversely planar , non - modulated fronts are always fastest .we do not have a simple intuitive explanation of this fact .we discuss generalizations and new phenomena associated with spatially inhomogeneous systems , where is smoothly depending on and ellipticity conditions ( [ e : ell ] ) are satisfied uniformly in .we discuss periodic and homoclinic / heteroclinic coefficients . 
in -periodic media , one can follow the exposition in this paper very closely and construct pointwise first - order green s functions using the -periodic linear evolution to the first - order equation .analyticity properties of the green s function are independent of .they depend only on the pointwise projection .this can be readily seen using floquet theory , which transforms the -periodic linear differential equation into a constant - coefficient system via an -dependent change of variables .one can also define an analytic dispersion relation via continuity results carry over , but counts do not apply since the dispersion relation is not polynomial .in fact , there are typically infinitely many double roots .we refer to for a discussion of pointwise growth and double roots in this context .periodic coefficients arise for instance when studying secondary invasion . as we saw in section [ s : spmult ] , primary pattern - forming invasion typically creates one - dimensional stripes parallel to the front interface .often these striped patterns are unstable and a secondary invasion process will create more complex patterns such as squares and hexagons .this secondary invasion process can to some approximation be studied using the linearization at the primary , unstable striped pattern . of course, the linearization at this striped pattern will not be isotropic , even if the underlying equation is , so that one may now observe transversely modulated fronts . beyond periodic coefficients , generalizations to quasi - periodic and random mediahave been studied , mostly in scalar equations .we refer to and references therein without attempting a generalization of our concepts in this direction . also of interest are situations where is heteroclinic ( or homoclinic when ) . to some extent, this case has been studied extensively in the context of stability problems of nonlinear waves using the evans function ; see for instance .the relation with pointwise growth becomes apparent in this context when extending evans functions across the essential spectrum using the gap lemma . in the context of pointwise stability ,our discussion here is similar to .associated with , we consider the subspaces , associated with we consider .for , these subspaces contain bounded solutions on and , respectively , to the first - order equations associated with .assuming sufficiently rapid convergence in , the -dependent problem possesses subspaces that contain initial conditions to bounded solutions on and , respectively , for the -dependent problem .moreover , these subspaces differ from by an analytic linear transformation , only .the first - order green s function is given by where are the projections along onto .singularities of the green s function therefore stem from either the intersections occur in similar fashions as pointwise growth modes or boundary pointwise growth modes and can be tracked using evans functions .associated with such intersections are solutions with certain exponential asymptotic behavior , that yield spreading speeds via ; see . of course , this discussion can now be combined with the case of periodic coefficients , thus giving a systematic basis to pointwise growth and invasion speeds in problems with _ asymptotically periodic coefficients_. we will come back to these issues when discussing nonlinear invasion problems in the next section .we think of the linear theory as a _ predictor _ for nonlinear phenomena . 
in the case of simple roots , there are typically open regions in parameter space where linear predictions are correct .we comment below on mechanisms that lead to deviations from linear predictions . for simple pinched double roots , all concepts of pointwise stability studied here coincide , and there is a fairly universal description of associated phenomena .as far as the invasion speed is concerned , one observes a dichotomy between fronts that propagate with the linear spreading speed ( pulled fronts ) and fronts that propagate faster than the linear spreading speed ( pushed fronts ) .the prototypical example are fronts in the nagumo equation invading the unstable state and leaving behind the stable state . for ,these fronts propagate with the linear speed , for , the invasion speed is faster .more general ( explicit ) examples are known for the quintic - cubic ginzburg - landau equation .while speed predictions are fairly reliable , wavenumber predictions involve a wider variety of phenomena , even for pulled fronts .we assume that the spreading speed is realized by a simple pinched double root , which predicts marginal stability in a frame moving with the spreading speed .in other words , we expect to see linear oscillations with frequency in this frame of reference .the simplest prediction for patterns in the wake of the front would be to ask for the pattern to be in strong resonance with this frequency , in the comoving frame , so that there would exist a _ coherent invasion front _ , , and .this strong resonance is sometimes referred to as `` node conservation '' , referring to the actual process of creating patterns with nodes ( zeros ) which mark the minimal period of the pattern . however , subharmonic invasion fronts , , are also frequently observed , .the frequency of the coherent invasion front puts constraints on patterns in the wake of the front .assume that a wave train is created in the wake of the front , that is , for , where , and is the nonlinear dispersion relation in the wake .periodicity in the comoving frame then requires that which , considered as an equation for , determines the wavenumber in the wake .examples are systems such as the cahn - hilliard equation or the swift - hohenberg equation , with , which gives , as well as the complex ginzburg - landau equation where one can sometimes show the existence of such coherent invasion fronts , , and prove local stability .selection of slowest fronts however has not been shown in any such context , which makes mathematically rigorous statements on wavenumber selection impossible .nevertheless , it appears that stability of such a coherent invasion front implies `` node conservation '' in the invasion process , while instabilities lead to changed wavenumbers .note that when referring to stability of a coherent invasion front , we are asking about pointwise stability in the sense discussed in section [ s : inh ] with asymptotically periodic coefficients . on the other hand , coherent invasion fronts with may simply not exist .a prototypical example are relaxation oscillators of the form with . 
for small , the equilibrium is unstable with selected speed and frequency , , since the problem is a small perturbation of the scalar -problem .frequency would predict a stable stationary pattern in the wake of the front , which however does not exist for the given choice of , for sufficiently small .stable patterns in the problem are rather modulations of the relaxation oscillation , with .strongly resonant wavenumber selection ( or node conservation ) ( [ e : ks ] ) then implies , which implies .one numerically observes phase slips ( failure of node conservation ) in the leading edge , but this phenomenon does not appear to be well understood theoretically . [ [ simple - growth - modes - secondary - fronts - and - wavenumber - corrections . ] ] simple growth modes secondary fronts and wavenumber corrections . + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + when a strongly resonant primary front is unstable , we can attempt to predict secondary invasion speeds , frequencies , and wavenumbers based on the linearization at this primary front .such spreading speeds can now be determined by singularities of the evans function ( resonance poles ) or by right - sided pointwise growth modes .whenever these secondary spreading speeds are slower than the primary speeds , one can expect to see an increasingly long transient of the primary unstable pattern in the growing region between primary and secondary front .when the secondary speed exceeds the primary speed , the secondary front locks to the first front and we immediately see the pattern created by the secondary front , which amounts to an effective correction of the observed wavenumber . in , secondary spreading speeds and selected wavenumbers were computed based on right - sided pointwise growth modes .it was found that right - sided pointwise growth modes underestimate the secondary invasion speeds , hinting at a resonance pole as the cause of destabilization of the primary front . in , the pattern selected by the primary frontis in fact stable , yet we observe locked secondary fronts in certain parameter regimes . beyond these predictions ,the phenomenon of staged invasion was investigated theoretically in , in the context of a simple coupled - mode problem . the predicted secondary spreading speedis based on right - sided pointwise growth modes or resonance poles .these predictions are validated by the construction of sub- and super - solutions and expansions for the width of the region occupied by the primary pattern in the locked regime are given .a similar perspective can also shed light on the formation of transverse patterns through invasion processes .our results in section [ s : spmult ] predict that the primary invasion mechanism creates a striped pattern .we can therefore first restrict to invasion fronts that are independent of , propagating in the -direction , and find fronts as described above . when studying secondary invasion , however , we need to take into account the possibility of transverse patterning .we can , in principle , repeat the analysis outlined in the one - dimensional case for fixed transverse modulation , , and study instabilities via right - sided pointwise growth modes or resonance poles .the wavenumber with the fastest spreading speed is then the linear prediction for secondary patterns . 
since right - sided pointwise growth modes are evaluated using floquet theory , transverse patterns can , in principle , be modulated both in and , and a detailed analysis should distinguish between hexagons and squares , say , in the wake of fronts , as observed in , for instance .we note at this point that the description of secondary , transversely patterned fronts leads to systems with an invariant subspace given by the -independent solutions . in the simplest context, this is apparent in an amplitude approximation to hexagon - roll competition in the swift - hohenberg equation .we therefore expect double double roots similar to the example of counter - propagating waves to occur in a robust fashion .we will discuss nonlinear phenomena associated with double double roots , next . with robust examples in ecology and pattern formation ,double double roots are one of the main challenges that we isolated here . like any other algebraic pointwise growth mode ,these double double roots give linear predictions for the selected speed of the nonlinear system . in the context of a lotka - volterra competition model, a double double root was found that overestimates the invasion speed of the nonlinear system , see .examples in show that double double roots sometimes give correct predictions for spreading speeds . in the following ,we relate some of the results and observations in to our point of view. we will refer to double double roots as _ relevant _ if the linearly selected speed is the nonlinear speed and _ irrelevant _ if the nonlinear speed is slower .we will first lay out some general systems of equations that may give rise to double double roots .we will then relate these double double roots to the concepts of pgms , rpgms and bpgms developed earlier in this article .an important difference between relevant and irrelevant double double roots will be explained .we consider the skew - coupled system for , , with appropriate conditions on the nonlinear functions .note that the subspace is invariant , but that the skew - product structure of the nonlinear system is not enforced by the presence of this invariant subspace .we have already encountered one such system in the example of counter propagating waves .the linearization of ( [ e : skp ] ) at the origin possesses a lower block - triangular form , where are differential operators of order .due to the skew - product structure , the dispersion relation factors , double double roots now occur in a robust fashion whenever indeed , roots are analytic in as solutions to .if the double double root is pinched , then by definition is an algebraic pointwise growth mode .however , the stable and unstable eigenspaces will remain analytic in a neighborhood of and the double root does not yield a right- ( or left- ) sided pointwise growth mode .linearizing the eigenvalue problem in by writing the system as a first - order system in , the double double root corresponds to eigenvectors , .typically , , so that the eigenspace to in the full system is only one - dimensional and spanned by . once again , this is in complete analogy to the case of the counter - propagating wave problem. 
we can continue eigenvalues to to analytically and distinguish three cases : one does however observe a significant difference between ( ii ) and ( iii ) in the context of a system of coupled fisher - kpp equations , see .there , it is observed numerically that if the double double root is of the form ( ii ) , then the double double root is relevant and the nonlinear speed is the linear speed . on the other hand ,if the double double root is of the form ( iii ) the double double root is irrelevant and the observed speed is slower .we will now motivate these observations .suppose we are considering invasion to the right , with positive spreading speed .suppose that this invasion occurs as a traveling front moving with a speed that is smaller than the linear invasion speed given by the double double root , and consider the linearization at the associated traveling wave , with associated evans function . in case ( ii ), the stable subspace at flips at , so that its projection on the -component is -dimensional .since the full linearization leaves the -subspace invariant , the unstable subspace at is -dimensional , and , again as a consequence of the skew - product structure , stable and unstable subspaces at can not be transverse , which implies the existence of a resonance pole at and pointwise instability of the slower traveling front . beyond the simple skew - product structure ( [ e : skp ] ) , we expect a number of interesting phenomena . in ( i)-(iii ) , we distinguish between uncoupled and unidirectionally linearly coupled systems .one can easily envision coupling in either direction via nonlinear terms , such as , , and try to derive nonlinear predictions for spreading speeds in the leading edge .such nonlinear coupling generated slow pushed fronts in the lotka - volterra equation ; see .nonlinear interactions may well couple `` modes '' that are not in strong resonance .again , some effects become apparent when studying linear systems with general boundary conditions . in , the absolute spectrum was defined through the dispersion relation as follows . for fixed , order the roots of by real part , so that . the absolute spectrum is defined as clearly , pinched double roots belong to the absolute spectrum .for generic boundary conditions , the spectrum on finite but large domains converges to the absolute spectrum setwise , see . comparing to our discussion, the absolute spectrum incorporates possible interactions between roots and , with equal temporal behavior and equal spatial decay rates , while double roots require strictly equal spatial behavior . allowing for time - periodic forcing at the boundary, one can also define absolute floquet spectra , when and .we expect that unstable absolute spectra will impact spreading speeds in a similar way as double double roots do .consider for instance the interaction of a hopf bifurcation and a pitchfork bifurcation , which would be described by amplitude equations for , the amplitude of the hopf mode , and , the amplitude of the pitchfork mode .since frequencies associated with the hopf mode are nonzero , we will not see double double roots in a coupled system . in an amplitude equation description , one does however average out oscillations with an ansatz , so that in the amplitude equation approximation we would see double double roots , with possible _ relevant _ nonlinear coupling between hopf and pitchfork . 
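the factorization of the dispersion relation makes the various double roots easy to enumerate symbolically . the sympy sketch below does this for two illustrative second - order factors ( chosen by us , not the lotka - volterra or coupled kpp models referred to in the text ) : double roots of the product are either double roots of a single factor or common roots of the two factors , and the latter occur robustly , being cut out by two equations in the two unknowns , which makes them the natural candidates for the double double roots discussed here ; whether they are pinched , and whether they are relevant in the sense introduced above , has to be checked separately .

import sympy as sp

lam, nu = sp.symbols('lambda nu')

# illustrative factors of a skew-coupled dispersion relation d = d1*d2 (arbitrary choices)
d1 = nu**2 + 2*nu + 1 - lam                   # branch seen by the first equation
d2 = nu**2 - nu + sp.Rational(1, 4) - lam     # branch seen by the second equation

# double roots of the product: double roots of one factor, or common roots of both factors
dbl_d1 = sp.solve([d1, sp.diff(d1, nu)], [lam, nu], dict=True)
dbl_d2 = sp.solve([d2, sp.diff(d2, nu)], [lam, nu], dict=True)
shared = sp.solve([d1, d2], [lam, nu], dict=True)

print("double roots of d1:        ", dbl_d1)
print("double roots of d2:        ", dbl_d2)
print("common roots of d1 and d2: ", shared)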
coming back to the point of view taken in the beginning , section [ s:1.2 ] , absolute spectra give optimal decay in optimally chosen exponentially weighted spaces . similarly , relevant and irrelevant doubledouble roots , cases ( ii ) and ( iii ) in the previous section , can also be distinguished via exponential weights : in the ( irrelevant ) case ( iii ) , it is possible to choose exponential weights separately for and so that the linearization is invertible ; see also .this is not possible in case ( ii ) since exponential weights require stronger decay in the -component , which is incompatible with the direction of coupling .the results in this paper are mostly concerned with the linear theory in the leading edge of invasion processes .our systematic treatment revealed exotic , `` degenerate '' cases , which however occur in a robust fashion when studying concrete systems , in ecology or in pattern formation .it also revealed a linear rigidity in the formation of patterns , favoring stripes in all linear invasion processes .the discussion in this last section points towards a plethora of interesting nonlinear phenomena .our discussion of those phenomena is piecemeal at best , but we expect that the linear theory will be an important ingredient to any systematic theoretical or computational exploration , at the least helping to categorize phenomena .b. fiedler , a. scheel _ spatio - temporal dynamics of reaction - diffusion patterns ._ in trends in nonlinear analysis , m. kirkilionis , s. krmker , r. rannacher , f. tomi ( eds . ) , springer - verlag , berlin , 2003 ( 145 pages ) .e. foard and a.j ._ survey of morphologies in the wake of an enslaved phase - separation front in two dimensions .e * 85 * ( 2012 ) 011501 .r. friedrich , g. radons , t. ditzinger , and a. henning ._ ripple formation through an interface instability from moving growth and erosion sources .* 85 * ( 2000 ) , 4884 .m. kotzagiannidis , j. peterson , j. redford , a. scheel , and q. wu ._ stable pattern selection through invasion fronts in closed two - species reaction - diffusion systems ._ in rims kokyuroku bessatsu * b31 * ( 2012 ) , far - from - equilibrium dynamics , eds. t. ogawa , k. ueda , pp 79 - 93 .x. liang , x. lin , and h. matano ._ a variational problem associated with the minimal speed of travelling waves for spatially periodic reaction - diffusion equations . _ trans ._ 362 _ ( 2010 ) , 56055633 .m. a. shubin ._ on holomorphic families of subspaces of a banach space . _ integral equations operator theory * 2 * ( 1979 ) , 407420 .w. van saarloos ._ front propagation into unstable states _ phys* 386 * ( 2003 ) , 29 - 222 .
|
this article is concerned with pointwise growth and spreading speeds in systems of parabolic partial differential equations . several criteria exist for quantifying pointwise growth rates . these include the location in the complex plane of singularities of the pointwise green s function and pinched double roots of the dispersion relation . the primary aim of this work is to establish some rigorous properties related to these criteria and the relationships between them . in the process , we discover that these concepts are not equivalent and point to some interesting consequences for nonlinear front invasion problems . among the more striking is the fact that pointwise growth does not depend continuously on system parameters . other results include a determination of the circumstances under which pointwise growth on the real line implies pointwise growth on a semi - infinite interval . as a final application , we consider invasion fronts in an infinite cylinder and show that the linear prediction always favors the formation of stripes in the leading edge .
|
lisa is a esa - nasa mission for observing low frequency gravitational waves in the frequency range from hz to 1 hz . in order for lisa to operate successfully , it is crucial that the three spacecraft which form the hubs of the laser interferometer in space maintain nearly constant distances between them , though their order of magnitude is km .the existence of orbits having this property was firstly reported by bender as the basis of lisa . in order to thoroughly study the optical links and light propagation between these moving stations , we however need a detailed model of the lisa configuration .we therefore find it useful to recall explicitly the not so trivial principles of a stable formation flight . in this brief work ,we firstly study three keplerian orbits around the sun with small eccentricities and adjust the orbital parameters so that the spacecraft form an equilateral triangle with nearly constant distances between them. then we find that to the first order in the parameter , where km , is the distance between two spacecraft and a. u. km , the distances between spacecraft are exactly constant ; any variation in arm - lengths should result from higher orders in or from external perturbations of jupiter and the secular effect due to the earth s gravitational field .( the eccentricity is related in a simple way to and is proportional to to the first order in . ) in fact our analysis shows that such formations are possible with any number of spacecraft provided they lie in a magic plane making an angle of with the ecliptic .we establish this general result with the help of the hill s or clohessy - wiltshire ( cw ) equations .the exact orbits of the three spacecraft are constructed so that to the first order in the parameter , the distances between any two spacecraft remain constant .below we give such a choice of orbits .this choice is clearly not unique and other choices are possible which satisfy some criteria of optimality such as the distances between spacecraft vary as little as possible .we construct the orbit of the first spacecraft and then obtain the other two orbits by rotations of and .the equation of an elliptical orbit in the plane is given by , x = r(+ e ) , y = r , [ orbit ] where is the semi - major axis of the ellipse , the eccentricity and the eccentric anomaly .the focus is at the origin .the eccentric anomaly is related to the mean anomaly by , + e = t , [ anml ] where is the time and the average angular velocity .we have chosen the zero of time when the particle is at the farthest point from the focus ( this is contrary to what most books do and because of this choice of initial condition we have a positive sign instead of a negative sign on the left hand side of eq.([anml ] ) ) .we choose the barycentric frame with coordinates as follows : the ecliptic plane is the plane and we first consider a circular reference orbit of radius 1 a. u. centered at the sun .the plane of the lisa triangle makes an angle of with the ecliptic plane .as we shall see later , we deduce from the cw equations that this allows constant inter - spacecraft distances to the first order in .this fact dictates the choice of orbits of the spacecraft formation .we choose spacecraft 1 to be at its highest point ( maximum z ) at .this means that at this point , and .thus to obtain the orbit of the first space - craft we must rotate the orbit in eq .( [ orbit ] ) by a small angle about the so that the spacecraft 1 is lifted by an appropriate distance above the plane . 
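for later numerical checks it is convenient to have a routine for the parametrization introduced in eqs . ( [ orbit ] ) and ( [ anml ] ) . the python sketch below solves the modified kepler equation ( with the positive sign , since the zero of time is at the farthest point from the focus ) by newton iteration and returns the in - plane coordinates of an ellipse referred to its focus ; the explicit coefficients , in particular the ( 1 - e^2 )^(1/2 ) factor in the second coordinate , are the standard keplerian forms and are assumptions of this sketch .

import numpy as np

def ecc_anomaly(e, M, tol=1e-14):
    """solve psi + e*sin(psi) = M (zero of time at aphelion) by newton iteration."""
    psi = np.asarray(M, dtype=float).copy()
    for _ in range(50):
        f = psi + e * np.sin(psi) - M
        psi -= f / (1.0 + e * np.cos(psi))
        if np.max(np.abs(f)) < tol:
            break
    return psi

def orbit_xy(R, e, t, Omega):
    """in-plane coordinates of a keplerian ellipse with the focus at the origin."""
    psi = ecc_anomaly(e, Omega * t)
    x = R * (np.cos(psi) + e)
    y = R * np.sqrt(1.0 - e**2) * np.sin(psi)
    return x, y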
in order to obtain the inclination of , the spacecraft must have its -coordinate equal to . the geometry of the configuration is shown in figure 1 , in which the barycentric and cw frames are both indicated , sc1 , sc2 and sc3 denote the three spacecraft , the radius of the reference orbit is 1 a. u. , and s denotes the sun . from the geometry and are obtained as , = , e = ( 1 + + ^2 ) ^1/2 - 1 , and the orbit equations for the spacecraft 1 are given by : x_1 = r(_1 + e ) , y_1 = r _ 1 , z_1 = r(_1 + e ) . [ tltorb ] the eccentric anomaly is implicitly given in terms of by , _1 + e _ 1 = t. the orbits of the spacecraft 2 and 3 are obtained by rotating the orbit of spacecraft 1 by and about the ; the phases , however , must be adjusted so that the spacecraft are about the distance from each other . the orbit equations of spacecraft are : x_k = x_1 - y_1 , y_k = x_1 + y_1 , z_k = z_1 , [ orbits ] with the caveat that the is replaced by the phases where they are implicitly given by , _k + e _k = t - ( k - 1 ) . these are the exact equations of the orbits of the three spacecraft . with these orbits the inter - spacecraft distance varies up to about 100,000 km . in figure 2 we show how the inter - spacecraft distances vary over the course of a year when the exact orbits are computed ; to the first order in the lengths of the arms remain constant and are equal to km . note that there are other choices of and close to the above values for the three orbits which give smaller variations in the armlengths . in this subsection we obtain the orbits to the first order in . the tilt and the eccentricity are given to this order by , = , e = . we find that is proportional to and , and . then to this order and now writing in terms of , eqs.([tltorb ] ) become : x_1 = r(_1 + e ) , y_1 = r _ 1 , z_1 = e r _ 1 , where the eccentric anomaly can be explicitly solved for to the first order in in terms of the time : _1 = t - e t. the approximate orbits of the spacecraft 2 and 3 can be obtained , as before , by rotating the orbit of spacecraft 1 by and respectively about the -axis as in eq.([orbits ] ) . the corresponding phases and now , can be explicitly obtained in terms of : _k = t - ( k - 1 ) - e . in the next section we prove that to the first order in , the distance between any two space - craft is , that it is a constant and remains so _ at all times _ ; _ the lisa constellation moves rigidly as an equilateral triangle with its centroid tracing a circle with radius of 1 a. u. with the sun as its centre . _ to check this from the above equations is straightforward . we can compute the distance between spacecraft 1 and 2 , which at the lowest order in proves to be , the two other distances are equal to the preceding by symmetry . this model succeeded because we already knew the result .
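the exact construction is easy to check numerically . the sketch below builds the three orbits as just described : orbit 1 is an ellipse of semi - major axis r tilted by a small angle about the tangential axis , and orbits 2 and 3 follow by rotating orbit 1 by 2 pi / 3 and 4 pi / 3 about the normal to the ecliptic while delaying the phases by the same angles . the tilt and eccentricity are taken as tan ( eps ) = alpha / ( 1 + alpha / sqrt(3 ) ) and e = ( 1 + 2 alpha / sqrt(3 ) + 4 alpha^2 / 3 )^(1/2 ) - 1 with alpha = l / ( 2 r ) ; these expressions are one consistent reading of the aphelion geometry of figure 1 and should be regarded as assumptions of the sketch , to be compared against figure 2 .

import numpy as np

AU    = 1.496e8                      # km
R     = 1.0 * AU                     # semi-major axis of each orbit
L     = 5.0e6                        # nominal arm length in km
alpha = L / (2.0 * R)
eps   = np.arctan(alpha / (1.0 + alpha / np.sqrt(3.0)))   # tilt (assumed form)
e     = np.sqrt(1.0 + 2.0 * alpha / np.sqrt(3.0) + 4.0 * alpha**2 / 3.0) - 1.0
Omega = 2.0 * np.pi                  # one orbit per year, time t in years

def kepler(e, M, n=50):
    """newton iteration for psi + e*sin(psi) = M."""
    psi = M.copy()
    for _ in range(n):
        psi -= (psi + e * np.sin(psi) - M) / (1.0 + e * np.cos(psi))
    return psi

def spacecraft(k, t):
    """barycentric coordinates of spacecraft k = 1, 2, 3 at time t (in years)."""
    theta = 2.0 * np.pi * (k - 1) / 3.0
    psi   = kepler(e, Omega * t - theta)
    x1 = R * (np.cos(psi) + e) * np.cos(eps)
    y1 = R * np.sqrt(1.0 - e**2) * np.sin(psi)
    z1 = R * (np.cos(psi) + e) * np.sin(eps)
    return np.array([x1 * np.cos(theta) - y1 * np.sin(theta),   # rotate orbit 1 about z
                     x1 * np.sin(theta) + y1 * np.cos(theta),
                     z1])

t   = np.linspace(0.0, 1.0, 2001)
d12 = np.linalg.norm(spacecraft(1, t) - spacecraft(2, t), axis=0)
print(f"arm 1-2 over one year: min {d12.min():.0f} km, max {d12.max():.0f} km")
# the mean is close to 5e6 km and the spread is of the order quoted above (roughly 1e5 km).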
in the next section ,we show with the help of a more sophisticated formalism , how this case is a special case of a much more general result and that stable formations with infinite number of spacecraft are possible .this result is important because we then have large number of flight formations to choose from .depending on the required physical criteria optimal flight formations may be selected .clohessy and wiltshire make a transformation to a frame - the cw frame which has its origin on the reference orbit and also rotates with angular velocity .the direction is normal and coplanar with the orbit , the direction is tangential and comoving , and the direction is chosen orthogonal to the orbital plane .they write down the linearised dynamical equations for test - particles in the neighbourhood of a reference particle ( such as the earth ) . since the frame is noninertial ,coriolis and centrifugal forces appear in addition to the tidal forces .the cw equations for a free test particle of coordinates are : where year .the general solution , depending on six arbitrary parameters is : we observe that both and contain constant terms and also contains a term linear in .the constant term in is merely an offset and could be removed without loss of generality by a trivial translation of coordinate along the same orbit .the removal of the offset also removes the linear term in ( the runaway solution ) .in contrast with the offset , the offset corresponds to a different orbit with a different period than that of the reference particle , namely , the origin of the cw frame .thus the only actual and important requirement is that of vanishing of the offset term .this term represents coriolis acceleration in the direction and comes from integrating the equation in the cw equations ( [ clowi ] ) .if we require a solution with no offsets , we must have : _ 0 + 2x_0=0 , + _ 0-y_0=0 .[ rnwy ] with these additional constraints on the initial conditions , the bounded and centred solution is : x(t)=y_0 t + x_0 t , + y(t)=y_0 t - 2x_0 t , + z(t ) = z_0 t + t. if moreover we require the distance of the particle from the origin to be constant , equal to , say , we get the following equation : ( y_0 ^ 2 + 4 x_0 ^ 2 + ) ^2 t+ ( x_0 ^ 2+y_0 ^ 2+z_0 ^ 2 ) ^2 t & + & + ( -3 x_0 y_0 ) t t = d^2 . && after identifying the terms of frequencies and ( sin and cos ) , we obtain the two equations : z_0 ^ 2 - & = & 3 ( x_0 ^ 2 - y_0 ^ 2 ) , + & = & 3 x_0 y_0 .adding the first to times the second yields the complex condition : ( z_0 + i ) ^2=3(x_0 + i ) ^2 , from which we obtain , z_0 = x_0 , = y_0 , where .the solutions satisfying the requirements of ( i ) no offset and ( ii ) fixed distance to origin are finally of the form , where _ 0 = , _ 0 = .the initial conditions are now expressed in terms of instead of .we call the solutions satisfying the above requirements as _ stable_. the results that we obtained by taking keplerian orbits to the first order in , are the same as those obtained by using the preceding cw equations . in the cw frame the equations of the orbits simplify and it is easy to verify the result .the transformation is only in the plane ; the coordinate is undisturbed .since we have chosen the reference orbit to be the circle centred at the sun and radius of a. u. , the cw frame is related to our barycentric frame by : x & = & ( x - r t ) t + ( y - r t ) t , + y & = & -(x - r t ) t + ( y - r t ) t , + z & = & z. 
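the constant - distance solutions and the stability constraints can be verified by integrating the cw equations directly . the sketch below uses the standard hill - cw form ( x radial , y tangential , z normal to the orbital plane ) , places several particles on fixed - distance solutions lying in one of the two planes making 60 degrees with the ( x , y ) plane , and checks that each particle keeps a constant distance from the origin and that the mutual distances stay constant as well , anticipating the rigidity statement derived next ; the number of particles and the radius are arbitrary choices .

import numpy as np
from scipy.integrate import solve_ivp

omega, rho, N = 1.0, 1.0, 5                   # angular rate, distance from origin, number of craft

def cw_rhs(t, s):
    """clohessy-wiltshire equations for N particles, state s = (x, y, z, vx, vy, vz) stacked."""
    x, y, z, vx, vy, vz = s.reshape(6, -1)
    ax = 2.0 * omega * vy + 3.0 * omega**2 * x
    ay = -2.0 * omega * vx
    az = -omega**2 * z
    return np.concatenate([vx, vy, vz, ax, ay, az])

# fixed-distance initial data: positions in the plane z = sqrt(3)*x, velocities from the constraints
phi = 2.0 * np.pi * np.arange(N) / N
x0, y0 = 0.5 * rho * np.cos(phi), rho * np.sin(phi)
z0 = np.sqrt(3.0) * x0
vx0, vy0, vz0 = 0.5 * omega * y0, -2.0 * omega * x0, 0.5 * np.sqrt(3.0) * omega * y0

sol = solve_ivp(cw_rhs, (0.0, 4.0 * np.pi / omega),
                np.concatenate([x0, y0, z0, vx0, vy0, vz0]),
                rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(0.0, 4.0 * np.pi / omega, 400)
X = sol.sol(t).reshape(6, N, -1)[:3]          # positions, shape (3, N, nt)
radii = np.linalg.norm(X, axis=0)
pair  = np.linalg.norm(X[:, 0] - X[:, 1], axis=0)
print("max deviation of |r| from rho   :", np.abs(radii - rho).max())
print("variation of the distance 1 - 2 :", pair.max() - pair.min())
# both numbers stay at the integration-tolerance level: the whole polygon rotates rigidly.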
the orbit equations for the three spacecraft derived in the last section , now simplify and can again be written in a compact form : x_k & = & e r , + y_k & = & -2 e r , + z_k & = & e r , where labels the three space - craft .one immediately recognizes the form of eqs.([cwp ] ) for the special case of with the initial conditions and .the symmetry is now obvious .it is straightforward to verify that the distance between any two spacecraft is .thus the lisa spacecraft constellation rigidly moves as an equilateral triangle of side in this approximation .in fact , it is possible to establish a general result : _ in the cw frame there are just two planes which make angles of with the ( x - y ) plane , in which test particles obeying cw equations and the stability conditions ( as defined above ) , perform rigid rotations about the origin with angular velocity . _ to see this , consider a test particle at arbitrary whose orbit is parametrized by eqs.([cwp ] ) .consider the frame which is obtained from the cw frame , by first rotating about the -axis by to obtain the intermediate frame and then rotating this frame about the -axis by .the first rotation transforms the particle trajectories to lie in the plane .the second rotation by about the -axis makes the particle in this new frame _ stationary _ !thus we have in the double - primed coordinates : x(t ) = _ 0 _ 0 , y(t ) = _ 0 _ 0 , showing that the particle is at rest in the new rotating frame .there is thus a one to one mapping from the set of all stable ( as defined above ) solutions of the cw equations to the two planes whose normals are inclined at or with respect to the direction and rotating at the angular velocity , the rotation axis being .the lisa plane corresponds to the choice of , and it is now clear that any particle at rest in this plane , remains at rest in it , so that any number of spacecraft in this plane would remain at constant relative distances , at least in the cw approximation , equivalent to a first order calculation in the eccentricities .this further implies that so far as ` rigid ' flight formations are desired , equilateral triangle is not the only choice .arbitrary formations with any number of spacecraft are possible as long as they obey the cw equations and satisfy the stability requirements as detailed above .we have explicitly constructed three heliocentric spacecraft orbits which to the first order in eccentricities maintain equal distances between them which is taken to be 5 million km .we have shown with the help of a more sophisticated formalism - the cw equations - that there are two planes in the cw frame , in which particles obeying the cw equations and satisfying stability requirements , namely , no offsets ( and hence no runaway behaviour ) and maintaining equal distance from the origin , maintain their relative distances in the cw approximation which is equivalent to a first order calculation in the eccentricities .this has the implication that formations not necessarily triangular and with any number of spacecraft are possible as long as they obey the stability constraints and lie in any one of these planes ; their relative distances will be maintained within the cw approximation .this result opens up new possibilities of spacecraft constellations with various geometrical configurations and any number of spacecraft which would be useful to future space missions .sd would like to thank ifcpar for travel support and the observatoire de la cte dazur , nice , france for local hospitality , where 
substantial part of this work was carried out .sk would like to acknowledge dst , india for the wos - a .all the authors would like to thank e. chassande - mottin for detailed discussions .10 `` lisa : a cornerstone mission for the observation of gravitational waves '' , system and technology study report , 2000 .vincent and p.l .bender , proc .astrodynamics specialist conference ( kalispell usa ) ( univelt , san diego , 1987 ) 1346 . w. h. clohessy and r. s. wiltshire , journal of aerospace sciences , 653 - 658 ( 1960 ) ; + d. a. vallado , _ foundations of astrodynamics and applications _ , 2nd edition 2001 , microcosm press kluwer ; + also in s. nerem , http://ccar.colorado.edu/asen5050/lecture12.pdf(2003 ) .r. m. l. baker and m. w. makemson , _ an introduction to astrodynamics _ , academic press , 1960 .
|
the joint nasa - esa mission lisa relies crucially on the stability of its three - spacecraft constellation . each spacecraft follows a heliocentric orbit , and together the three form a stable triangle . the principles of such formation flight were formulated long ago and the corresponding analysis was performed , but it has seldom , if ever , been presented , even to lisa scientists . we nevertheless need these details in order to carry out theoretical studies on the optical links , simulators , etc . in this article we present , in brief , a model of the lisa constellation which we believe will be useful for the lisa community .
|
the last few years has seen a large advance in our understanding of networks whose structures are by nature non - equilibrium and non - random .these networks have been used to study systems as diverse as the internet router and www hyperlink networks , electric power grids , and cellular metabolic pathways . in particular, these networks prominently feature a power - law frequency distribution for the nodes degree ( scale - free network ) , a network diameter smaller than a comparable random graph , one with the same amount of nodes and the same average degree per node , and a much larger clustering coefficient than a comparable random graph . among the most interesting of the networks studied are those that analyze human social interaction .the phenomenon of six - degrees of separation , first recognized by stanley milgram , is well known and documented in both academic and popular culture .the most detailed of these networks studied are actor collaboration networks and scientific collaboration networks .interestingly , the web of human sexual partners have also been documented .typically these networks share the features mentioned above that differentiate them from random graphs and display a scale - free character .this paper hopes to add to these studies on social contact networks by adding another example : the connections of users in an instant messaging service .instant messaging has grown at a phenomenal rate in the last several years to become a major form of communication both over the internet and within company intranets .instant messaging has become so important in fact that the fcc has attempted to force the largest server for instant messaging , aol time - warner , to open its software for interoperability .instant messaging is distinguished from regular chat as being a one - on - one conversation between two users on an instant messaging network . typically in instant messaging systemseach user has a user name and a contact list containing the user names of other users who they often communicate with .it is this feature of instant messaging that makes it amenable to scientific study and statistical analysis .if one imagines each user as a node and each contact on ther user s contact list as an out - directed edge , the community on an instant messaging network can be modeled using graph theory . using these assumptionsit can be easily seen that an instant messaging network represents a non - equilibrium graph in that nodes ( users ) are added and removed over time and edges most likely accumulate on users in a non - random fashion .one possible model for this growth is the barabsi - albert ( ba ) model where edges are formed by preferential attachment .those nodes with more edges are more likely to accumulate edges as time goes on .similarly one could hypothesize a user is more likely to form out - directed edges ( add users to the user s contact list ) the more users are already present on their contact list and a user is more likely to receive in - directed edges ( being on another user s contact list ) in a similar fashion . with these assumptions an instant messaging network s graphshould probably display a scale - free character . to test this hypothesis an instant messaging network using the open - source jabber protocolwas researched to find such features . 
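the preferential - attachment hypothesis outlined above can be illustrated with a toy growth simulation ; the sketch below is not a reconstruction of the actual contact - list dynamics , and the node and edge counts are placeholders :

import random
from collections import Counter

def ba_degrees(n_nodes, m):
    """toy barabasi-albert growth: each new node links to m existing nodes,
    chosen with probability proportional to their current degree."""
    ends = list(range(m))    # one entry per edge end; seed nodes get one virtual entry
    for new in range(m, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(ends))
        ends.extend(chosen)
        ends.extend([new] * m)
    return Counter(ends)

deg = ba_degrees(20000, 3)
print(max(deg.values()))     # a few hubs acquire very large degree, a heavy-tailed signature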
to clearly understand some of the assumptions and conclusions in this paper a cursory overview of the jabber protocol is necessary .the jabber protocol is based off of xml and uses a distributed client - server architecture .jabber was consciously based off of the architecture of email systems so instead of one central server like aol s instant messenger or microsoft s msn messenger , jabber has many servers in many locations .jabber clients are the users who communicate with instant messaging .clients can communicate with all other clients on their jabber servers and with clients on other jabber servers since the jabber servers can communicate with each other .in addition jabber supports additions called transports that allow jabber clients to communicate with clients using other protocols such as those on aol , microsoft , icq , or yahoo instant messaging .the network studied was the instant messaging database from nioki.com , a french language teen - oriented web site .appropriate measures were taken to completely preserve the anonymity and privacy of the users as is explained in detail in appendix a. the nioki.com database contained 50,158 users ( nodes ) with almost 500,000 edges . due tothe model explained earlier , this instant messaging network was modeled as a directed graph .this is different from other social network studies such as the actor - movie collaboration network which was modeled as a bipartite graph and the scientific collaborations which were modeled as undirected graphs .the nioki.com instant messaging network was found to exhibit all the characteristics of a scale free network .the inward and outward directed edge frequency distributions both followed power laws with a 2.2 and 2.4 .the average in and out degrees are 9.1 and 8.2 .the average in and out degrees are identical in a network with no outside contacts .however , as explained in the description of jabber , clients have the ability to communicate with clients on other servers outside their current server .so there are probably contacts with clients that are not on nioki.coms server. the data does not indicate who these contacts are but the difference in the average in and out degrees per node intimate their existence .the diameter of the network , = 4.35 so that there are about 4 - 5 users on average between any two users on the network .these values indicate the small world character of the nioki.com network as compared to a random graph ( table [ tab : compare ] ) .since this network is modeled as a directed graph , it presents a rather asymmetric view of human social interaction . in a directed graph it is possible for one user to `` know '' another without the other user reciprocally expressing such a relationshipthis is because the contact list data we have does not require a reciprocal relationship between two users .measurements indicate about 82% of the contacts in the network are in both directions .so on average 82% of the users on a given user s ( user a ) contact list also have the user a on their contact list . in order to get a clearer view and calculate the clustering coefficient ,a new list of users and contacts was created adding those edges necessary to make the network undirected . 
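the reciprocity measurement and the symmetrization step mentioned above admit a compact implementation ; the following hedged python sketch ( not the code actually used in the study , with a toy edge list standing in for the real data ) measures the fraction of reciprocated contacts and then adds the missing reverse edges :

def reciprocity(edges):
    """fraction of directed edges (u, v) for which (v, u) is also present."""
    edge_set = set(edges)
    mutual = sum(1 for (u, v) in edge_set if (v, u) in edge_set)
    return mutual / len(edge_set)

def symmetrize(edges):
    """add the reverse of every edge, i.e. make the contact graph undirected."""
    undirected = set()
    for (u, v) in edges:
        undirected.add((min(u, v), max(u, v)))
    return undirected

contacts = [(1, 2), (2, 1), (1, 3), (3, 4)]   # toy contact-list data
print(reciprocity(contacts))                  # 0.5 in this toy example
print(sorted(symmetrize(contacts)))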
in this casethe power exponent of the degree distribution became .the average degree was 9.6 and the diameter of the network decreased to = 4.1 .the average clustering coefficient was calculated at = 0.33 further reinforcing the small - world character of the network .the final measures computed were the size of the giant weakly connecting connected component ( gwcc ) and the giant strongly connected component ( gscc ) .the gwcc is the number of nodes on the network that can be reached by any other node in the component ignoring the directions of edges .it was calculated at a very large 49,801 users or over 99% of the users on nioki.com . only about 0.7% of the users were in disconnected components .the gscc is a measure of the number of nodes in the component where any node can be reached from any other node through directed edges .this was calculated to be 44,581 or 89% of nioki.coms users .these very large values for the connected components are probably most likely explained by the structure of the nioki.com website itself .it is mostly made up of users who communicate using the nioki instant messaging service with other users on nioki.com .it is unlikely that nioki.coms instant messenger is used as a primary instant messaging tool for users on other servers or services by most users .being a teen oriented website geared toward socialization it likely has a tightly knit community over shared interests .this is unlike the actor or scientific collaboration databases where communication is mainly limited to professional roles and fields of research ..[tab : compare]comparison of network statistical measures [ cols="^,<,<,<",options="header " , ]the random graphs statistics in table [ tab : compare ] were computed using the following equations which are theoretically explained in detail in . the comparable random graph was assumed to have the same number of nodes and average degree per node as the undirected model of the instant messenger network . from this information the average clustering coefficient of the random graphcan be calculated as it is interesting to note that the clustering coefficient of a random graph is the same as the node connection probability .the shortest path was estimated using the approximation , the possiblity that this network grows according to the barabsi - albert model was considered .a key indication of this would be a measure of the preferential attachment probability . though the empirical data from the network and its scale - free character hint strongly towards the barabas - albert model or a comparable one , time dependent data was not available to allow the determination of the shape ( linear of curved ) or function of .a frequent question with the ever faster globalization and communication in the world is how connected we all really are .this research covered a relatively large sample of about 50,000 people and in some ways gives a glimpse into the connectivity of our society , but in other ways falls short . + this research should give additional credence to the growing evidence of the scale - free nature of social and professional contacts in greater society . 
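the random - graph reference values can be reproduced from the standard estimates for a random graph with the same number of nodes and mean degree , c_rand = <k>/n ( equal to the node connection probability ) and l_rand = ln n / ln <k> ; a short sketch using the figures reported for the undirected model :

import math

n = 50158        # users in the nioki.com network
k_avg = 9.6      # average degree of the undirected model

c_rand = k_avg / n                      # clustering coefficient = connection probability
l_rand = math.log(n) / math.log(k_avg)  # typical shortest-path estimate
print(f"c_rand = {c_rand:.2e}, l_rand = {l_rand:.2f}")
# compare with the measured values quoted above, c = 0.33 and l = 4.1

the comparison makes the small - world character explicit : the measured clustering is orders of magnitude above the random expectation while the measured diameter is of the same order as the random estimate .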
combined with the earlier studies on professional collaborations it seems to indicate that society does exhibit a `` small world '' effect .this research finds that the small world of nioki.com is based on a scale - free topology , however , there are other models of small worlds including the watts - strogatz model which exhibit similar features .the small diameter compared to the number of nodes in the network indicates that the degrees of separation in nioki.com are a bit smaller than the six stanley milgram measured in his studies . however , there are caveats to the wholesale application of these results to larger society .nioki.com is probably more connected than society at large due to its foundation of shared interests and demographics ( young adults ) .the large size of the gwcc and gscc is an indication of this .thus , the researcher does not think it would be completely accurate to say everyone in the world is connected by 4 - 5 people .in fact in a recent criticism of milgram s work , it is asserted that studies that do not take into account the increased likelihood of connections based on factors such as demographics or professional affiliation may not clearly represent the larger society . though i believe this research further emphasizes that human social networks have a small world character , this research can not address the question of the connectivity across varying demographics or social boundaries .in instant messaging , like all computer network communications tools , security is often a paramount consideration .though there are many problems of interest in the security of instant messaging from the privacy of conversations to the security of user accounts and passwords the aspect of security most pertinent here is the spread of worms across instant messenger networks .there have been several recent outbreaks of worms on instant messaging networks .the spread of epidemics on scale - free and small world networks has been well studied .it is known that there is no epidemic threshold for infinitely large scale - free networks and worms or viruses can spread rapidly through a network . though there have not been any devastating wormsso far it is wise to prepare to interdict the worst possibility .let us assume a worm spreads through an instant messaging network by infecting a node and then spreading itself along all or some of the out - directed edges to new nodes which also may be infected .this is similar to recent worms which send a message containing an infected link to all members of a user s contact list .the dynamics of this kind of epidemic have been discussed .however , the options for stopping or slowing the epidemic are varied .you can alert users or provide a patch or software to prevent the spread as was done with the code red virus that infected windows 2000 servers . with a more extreme event, however , more radical measures could be necessary .unlike the internet , where control is decentralized , the client - server nature of instant messaging makes more radical measures possible .research has indicated that scale - free networks though they are robust to random node failures , are very vulnerable to attack . a directed attack at the most connected nodes could severly damage a network such as the internet .however , this characteristic could be reversed and turned to an advantage in halting an epidemic . 
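to make the targeted - removal idea concrete , here is a hedged networkx sketch in which a toy scale - free graph stands in for the real contact network , so the numbers are only illustrative ; it removes the most connected nodes and tracks how the typical distance between the remaining users grows :

import networkx as nx

g = nx.barabasi_albert_graph(5000, 4, seed=1)   # stand-in for the contact graph

def typical_distance_after_removal(graph, fraction):
    """remove the top `fraction` highest-degree nodes and return the average
    shortest-path length of the largest remaining connected component."""
    g2 = graph.copy()
    ranked = sorted(g2.degree, key=lambda kv: kv[1], reverse=True)
    n_remove = int(fraction * g2.number_of_nodes())
    g2.remove_nodes_from(node for node, _ in ranked[:n_remove])
    giant = g2.subgraph(max(nx.connected_components(g2), key=len))
    return nx.average_shortest_path_length(giant)

for f in (0.0, 0.05, 0.10):
    print(f, typical_distance_after_removal(g, f))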
in a severe instant messaging worm outbreakthe server administrators could slow , but not completely stop , the spread of a worm in a way that does not affect the service for many users . by disabling the accounts of the most connected users on the network , they could effectively increase the network s diameter making the propagation of the epidemic much slower and buying time for a patch or another curative measure .the diameter of the nioki.com network after a certain percentage of the most connected sites are removed is shown in figure 4 . removing the top 10% connected users increases the diameter of the network almost twofold . however , even disabling up to 10% of the most connected users would leave connectivity for the other 90% of the network and allow the service to have more time to cope with the outbreak and help other users .there are caveats to this plan , however .this would only work assuming that the most connected users are online frequently enough to spread the worm .if many of the most connected users are rarely online , this strategy may not produce its full effect .also , this strategy would still deny a segment of users usage to the network for an unspecified period of time and would possibly upset many users if it is used too frequently .in this paper the network structure of the instant messaging community of nioki.com was investigated and demonstrated to be a scale - free network .though the preferential attachment was not determined , it is likely that the barabsi - albert or similar model describes its evolution .knowledge of this structure may tell us more about social networks in the real world and how to prevent the spread of worms on instant messaging networks .i would like to thank the many people who helped this research be possible .first , in the jabber and open source community , everyone at jabber.org for their help and suggestions including peter saint - andre . also , i would like to thank nioki.com for working with me to get this data without violating the privacy of their users .in particular i would like to thank stefan praszalowicz . finally , those in the physics and computer science community who helped with suggestions or feedback including albert - lszl barabsi , hawoong jeong , colin steele , steven strogatz , duncan watts , bryan wright , and mark newman for his help with my algorithms .finally , i would like to acknowledge the generous help of david smith , my advisor , under whom i completed this project .references barabsi , a.l . and albert , r , revphys . * 74 * , 47 , ( 2002 ) .dorogovtsev , s.n , and mendes , j.f.f . , adv . in phys . ,51 , 1079 , ( 2002 ) .milgram , s. psychol . today * 2 * 60 , ( 1967 ) .newman , m.e.j ., strogatz , s.h . , and watts , d.j . , arxiv : cond - mat/0007235 , ( 2000 ) .amaral , l.a.n . ,scala , a. , barthlmy , m. , and stanley , h.e .usa * 97 * , 11149 , ( 2000 ) .newman , m.e.j .e. , * 64 * , 016131 .newman , m.e.j .e. , * 64 * , 016132 .liljeros , f. , edling , c.r . ,amaral , l.a.n . ,stanley , h.e . , and aberg , y. , nature , * 411 * , 907 , ( 2001 ) .< aol time warner inc .instant messaging interoperability , http://www.fcc.gov/transaction/aol-tw/instantmessaging.html .< jabber documentation , http://www.jabber.org / docs/. kleinfeld , j. , psychol . today , * 34 * , ( 2002 ) .computerworld , april 10 , 2002 .pastor - satorras , r. and vespignani , a. , physical review letters * 86 * , 3200 ( 2001 ) .pastor - satorras , r. and vespignani , a. physical review e * 65 * 036104 ( 2002 ) .lloyd a.l . 
andmay , r.m .science , * 292 * , 1316 ( 2001 ) .caida analysis of code - red worm , http://www.caida.org/analysis/security/code-red/ , ( 2001 ) .albert , r. , jeong , h. , and barabsi , a.l. , nature , * 406 * , 378 , 2000 ; erratum , nature * 409 * , 542 ( 2001 ) .cohen , r. , erez , k. , ben - avraham , d. , and havlin , s. , phys .* 86 * , 16 , 3682 , ( 2001 ) .cohen , r. , erez , k. , ben - avraham , d. , and havlin , s. , phys .* 85 * , 21 , 4626 , ( 2000 ) .dezs , z. and barabsi , a.l . , cond - mat/0107420 .of the utmost importance was protecting the privacy of the users of the nioki.com instant messaging network that was researched . here all privacy precautions takenare outlined and explained in detail .the data received by the reseacher of this paper was in the most raw form possible .the data from nioki.com was prepared so that all users and their contacts were anonymized as numbers .users and their contacts were then matched up by matching a user number with a contact number .the researcher did not receive any user names , emails , ip addresses , geographical locations , personal information or activity information , or any other data that could allow him to either determine the identity of any given user or extrapolate anything about a user s activity on nioki.com or the internet at large .it would have been impossible for the researcher to determine any direct personal information or identity about anyone from this raw data .the data was only in the hands of the researcher at all times and was not given to any collaborators , published publicly or distributed in raw form , or sold for profit .the data was statistically analyzed in aggregate and therefore no information about any specific users could be extrapolated .a rough metaphor of this experiment would be analyzing census data for a town .aggregate patterns will emerge but no specific information about individual inhabitants can be gleamed from the data . in order to further protect privacy , this researcher can not distribute the raw data to any others interested , even for research purposes .such requests must be made directly to nioki.com .
|
the topology of an instant messaging system is described . statistical measures of the network are given and compared with the statistics of a comparable random graph . the scale - free character of the network is examined and implications are given for the structure of social networks and instant messenger security .
|
electric power grids are experiencing a widespread of communication networks within their infrastructure realm , allowing for different applications for the end users .this , however , increases their already complex characteristics .hence proper models aiming at capturing new features that might emerge still need to be built . in a recent paper , bale et al .pointed out some limitations of the standard economic - optimizer models for energy systems while indicating the importance of complexity science models to better characterize them .we follow here this line of thought in order to provide a different approach for demand - side management in power grids .our focus is to use an agent - based simulator , recently proposed by the authors in , to model the interrelations of multi - layer systems so as to understand how the dynamics of one layer affect , and are affected by , each other . in this way , we expect to provide a different perspective on demand - side control policies beyond utility maximization by considering three different scenarios : no signaling , global signaling and pricing scheme .specifically , our main contribution is to assess , in both micro and macro levels , how the agent decision dynamics are for different communication network topologies , link error probabilities and demand - side management policies .to do so , we employ a multi - layer model built as follows .the physical layer is a circuit composed by a power source and resistors in parallel .individual agents can add , remove or keep the resistors they have .their decisions aiming at maximizing their own delivered power , which is a non - linear function dependent on the others behavior , and they are based on ( i ) their internal state , ( ii ) their global state perception , ( iii ) the information received from their neighbors in the communication network , and ( iv ) a randomized selfishness related to their willingness of answering to a demand - side control request . by individually modifying the communication network topology , the link error probability and the demand - side management policies keeping fixed the other parameters , we expect to show how these factors affect the power utilization and fairness in the system for different number of agents .when no demand - side control is considered , we show that : ( i ) different communication network topologies ( ring , watt - strogatz - graph and barabasi - albert - graph ) lead to different levels of power utilization and fairness at the physical layer and ( ii ) a certain level of error induces more cooperative behavior at the regulatory layer . on the other hand, if demand - side control or pricing schemes are considered , the system behaves in a more predictable manner and is less dependent on its size . in this case , due to the global knowledge about the state of the system, it also enables much higher utilization and fairness .the rest of this paper is divided as follows .section [ sec : model ] describes the multi - layer model employed here .section [ sec : results ] presents the numerical results used to analyze the proposed scenario . 
in section [ sec : final ] , we discuss the lessons learned from our model , indicating potential future works .the basis of this article is a multi - layer model of the power grid and a communication system , which we will be briefly described in the following section .our discrete - time , agent - based model assumes these three layers as constitutive parts of the system composed by an electric circuit as the physical infrastructure , a communication network where agents exchange local information and a set of regulations that define the agents behavior , as exemplified in fig . [fig : system ] . as statedbefore the model assumes discrete time steps , denoted by .therefore the interactions between the agents might be viewed as a round - based game . at each step in time , every agent aims to maximize its own power . to achieve this aim ,the agent has three options : add a resistor ( defecting ) , remove it ( cooperating ) , or do nothing ( ignoring ) .the decision is based on the gain from the previous strategy ] in the following manner .if the gain ] . otherwise , if <\lambda_{\mathrm{min}} ] , indicating that the system is in a bad condition , the agent will also switch to cooperate , leading to = -1 ] ) or adds another load in the circuit ( i.e. = + 1 ] of the agents which are in his neighborhood .we assume that agent always transmits its actual state ] is the state information send from to and ] for all , and the network is a bidirectional graph so that an error at does not imply an error at , and vice - versa .if an error happens , the received information ] with and is given by : =p_{\mathrm{typ}}\frac{a_i[t]\mu}{(a_{\mathrm{avg}}[t]+\mu)^{2 } } , \ ] ] where , , ] is the number of active resistors in the system excluding the source resistor and the ones controlled by agent , and =(a_i[t]+r_i[t])/n ] becomes more complicated . to make the analysis clearer , we choose to apply the following approximation : \approx \frac{\mathrm{d } p_{i}}{p_{i}[t]}\approx\frac{\delta a_i[t]}{a_i[t]}-\frac{\;2\;}{n } \ ; \frac{1}{a_{\mathrm{avg}}[t]+\mu}\left(\delta r_i[t]+\delta a_i[t]\right ) , \label{eq : gain}\ ] ] such that the gain ] and in the number of resistors controlled by other agents = r_i[t ] - r_i[t-1] ] and the system parameters and .in this section , we present our main results .first we introduce the initial results introduced in where neither demand - side management signaling nor more complex communication topologies are considered .then , we present our new results indicating the effects of such factors on the system in comparison with our basic model . in fig .[ fig : transition - ws ] one can see the inherent dynamics of the system when no demand - side management is done and the communication network topology is a ring .we identify two phenomena that are important for our analysis , as explained next .first , we see that the average number of cooperators varies with the size of the system , showing a very high number of cooperation for larger systems . however , as we will see later in fig .[ fig : transition - ws ] , this fact is not enough to tell the whole story . 
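to make the decision rule concrete , a minimal python sketch of the delivered - power and gain expressions given above is shown below ; the numerical values of p_typ , mu , n and the example arguments are placeholders chosen only for illustration :

import numpy as np

p_typ, mu, n = 1.0, 10.0, 50     # illustrative system parameters

def delivered_power(a_i, r_i):
    """power received by agent i: p_typ * a_i * mu / (a_avg + mu)**2,
    with a_avg = (a_i + r_i) / n and r_i the resistors of all other agents."""
    a_avg = (a_i + r_i) / n
    return p_typ * a_i * mu / (a_avg + mu) ** 2

def relative_gain(a_i, r_i, da_i, dr_i):
    """first-order approximation of dP_i / P_i used in the decision rule."""
    a_avg = (a_i + r_i) / n
    return da_i / a_i - (2.0 / n) * (dr_i + da_i) / (a_avg + mu)

# one agent evaluating its last move (it added a resistor, the rest added four)
print(delivered_power(a_i=3, r_i=120))
print(relative_gain(a_i=3, r_i=120, da_i=1, dr_i=4))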
as we will see later , this high level of cooperation is not necessarily a universally good outcome .the second phenomena we see is a change in behavior of the system for different sizes .while for small systems we see an almost checkerboard like distribution of white ( cooperation ) and black ( defection ) dots , mid - sized systems show a strikingly different , which shows that only very small systems of less then 10 agents are able to stay very close to the optimum , while systems of more then 10 agents start to deviate from it .these curves also reflect the wave - like behavior for mid - sized systems , where the number resistors drops due to almost global cooperation .this brings the system close to the optimal point , when the number of resistors rises once again . for large systems ,the system does not get close to the optimum .however , it is more stable and predictable than the other cases. the background of how this global behavior emerges from the local interactions shall not be part of this paper .a detailed analysis of those rather complex dynamical behaviors is found elsewhere . in this paper, we will view this intrinsic unstable performance as a undesirable outcome . in the next subsection, we will explore ways to prevent this behaviors by using demand - side signaling with different communication topologies . as we will see later ,our goal is to provide a global indication via either direct signaling or pricing so the agents can use this information to make their individual decisions . in fig .[ fig : utili ] , we can see the results of a global signal that sent to all agents when the system is beyond the optimal point .this signal might be interpreted as a way to implement a demand response mechanism where the distribution operator experience a peak in consumption . from the agents view, this signal provides a global information about the system so they do not have to rely only on their neighbors .this global information is assumed to be reliable and is treated in the same way as the neighbor information : if the signal is present and the agent experiences a small gain , it starts cooperating . as we can see in fig .[ fig : utili ] , the signal enables the system to reach the optimum point independent from its size . for comparisonthe original system is also shown where it is , on average not possible for any size to reach such high states of power utilization .we can also see that there is no big difference in the behavior for different network topologies .another option to stabilize the system would be to rely on pricing .for this purpose , we have to adapt our model . instead of just maximizing its own power demand, the agents now have to maximize their utility taking the price into account .the price function needs to reflect the state of the system as a signal of overuse ( when necessary ) . inorder to achieve build such a function , we first need to build an utility function that reflects how the power demand is valued .we adopted the following utility from : ,\omega_i ) = \begin{cases } \omega p_i - \frac{\alpha}{2}p_i^2 & \text{if } 0 \leq p_i < \frac{\omega}{\alpha } \\\frac{\omega^2}{2\alpha } & \text{if } p_i \geq \frac{\omega}{\alpha } \end{cases},\ ] ] where being the power consumption while and determine how consumption is valued .power utilization for different communication strategies and topologies depending on the system .the green line shows the utilization for the basic model , where the utilization changes dramatically due to intrinsic dynamics . 
for the simulations with a global demand response signal ( red , brown ) a much more stable behavior can be archived . ]the function is chosen so that there exists a linear marginal benefit up to a certain point , at which the benefit does not increase anymore with higher consumption . in our case, we keep fixed at while is uniformly distributed between , representing different values of consumers about the importance of power consumption . the price function, meanwhile , is modeled as a simple step function : = \begin{cases } p_1 & \text{if } n[t ] \leq n^{\mathrm{opt } } \\p_2 & \text{if } n[t ] > n^{\mathrm{opt } } \end{cases},\ ] ] assuming that and .power utilization for step pricing and agents with utility function described in this section .the green line shows the utilization for the basic model . for the simulations with a pricing ( red , brown )a much more stable behavior can be archived .however a dependence on the perceived neighborhood and system size remains . ] the cost for each user is then calculated by multiplying the consumption with the current price : = p[t ] p_i[t].\ ] ] the agent decision process itself is similar , but now , instead of optimizing the received power , the agents are now optimizing their benefit ( the utility minus the cost ) : = u_{\alpha}(p_i[t],\omega_i ) - c_i[t].\ ] ] the results using the proposed scheme can be viewed in fig .[ fig : utilii ] .once again we can see overall the results are much better than the one presented in section [ subsec : baseline ] .however , if we compare the case of a simple ring graph ( brown curve ) describing the neighborhood versus a more meshed topology from watts - strogatz graph ( green ) , we see that the communication network ( where local information about the neighbors are transmitted ) has still a big influence on the final outcome .this effect seems to be more pronounced for small networks .however , mid - sized systems , which are the most critical size when no signaling is considered , still experience stable outcomes .in this paper , we have demonstrated that it is possible to counter act the very unpredictable behavior that arises in the model proposed in when no global signaling is used .we analyzed here two strategies different strategies : one is the direct signal when the system is overusing state and the other is price .the first strategy is a rather simple way of providing each agent with global information about the system .this strategy enables a more stable outcome , which is much less influenced by other parameters of the simulation , like the communication network topology .the second strategy , in its turn , is to transform the original power optimization in to a cost - benefit optimization , where each agents aims to archive his personal optimal power usage considering price and utility value of the consumed power .while similar outputs could be demonstrated with this approach , it is necessary to point out that the choice of parameters to archive a stable outcome in the scenario is much more complicated . in the none trivial case , where the sum of the demanded power of all agents exceeds the maximum available power ,the outcome is highly dependent on the choice of not only the price levels but also the valuation of the power given by and .this fact suggests that real time pricing in combination with demand response , which is often proposed as way to utilize changes in available power , needs to be finely tuned in order to archive a stable outcome . 
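the ingredients of the pricing scenario can be written compactly ; in the sketch below the price levels , the valuation parameters and the example numbers are placeholders ( only the functional forms follow the text ) , and the benefit is evaluated as utility minus cost :

def utility(p, omega, alpha=1.0):
    """quadratic utility with saturation at p = omega / alpha."""
    if p < omega / alpha:
        return omega * p - 0.5 * alpha * p ** 2
    return omega ** 2 / (2.0 * alpha)

def price(n_active, n_opt, p1=0.1, p2=0.4):
    """step price: cheap at or below the optimal load n_opt, expensive above it."""
    return p1 if n_active <= n_opt else p2

def benefit(p, omega, n_active, n_opt):
    return utility(p, omega) - price(n_active, n_opt) * p

# an agent valuing power at omega = 0.8 while the system is above the optimum
print(benefit(p=0.5, omega=0.8, n_active=130, n_opt=100))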
in other words , pricing does not inherently guarantee optimal usage and stability . rather , the outcome of the first strategy indicates that a direct response to capacity constraints , without the intermediate pricing layer , is much easier to implement in a stable manner . from these results , we plan to extend the present work by looking at the effects of the price parameters on the system dynamics . we also expect to build a signaling strategy that indicates different levels of power consumption in the network . we would like to acknowledge the computing facilities of csc - it center for science ltd . ( finland ) that were used to run the simulation scenarios . s. bera , s. misra , and j. j. rodrigues , `` cloud computing applications for smart grid : a survey , '' _ ieee transactions on parallel and distributed systems _ , vol . 9219 , no . c , pp . 11 , 2014 . [ online ] . available : http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6809180 f. kühnlenz and p. h. j. nardelli , `` dynamics of complex systems built as coupled physical , communication and decision layers , '' _ plos one _ , vol . 11 , no . 1 , p. e0145135 , jan 2016 . [ online ] . available : http://dx.plos.org/10.1371/journal.pone.0145135 m. fahrioglu and f. l. alvarado , `` designing cost effective demand management contracts using game theory , '' in _ power engineering society 1999 winter meeting , ieee _ , vol . 1 . ieee , 1999 , pp . 427 - 432 .
|
this paper studies how the communication network affects the power utilization and fairness in a simplified power system model composed of three coupled layers : physical , communication and regulatory . using an agent - based approach , we build a scenario where individuals may cooperate ( by removing a load ) or not ( by keeping their loads or adding one more ) . each agent 's decision reflects its desire to maximize the delivered power , based on its internal state , its global state perception , a randomized selfishness related to its willingness to follow demand - side control requests , and the state information received from its neighbors in the communication network . our focus is to understand how the network topology and errors in the communication layer affect the agents ' behavior , reflected in the power utilization and fairness for different demand - side policies . our results show that optimal power utilization and fairness require global knowledge about the system . we show that close to optimal results can be achieved with either a demand control signal or a global pricing for energy . communication networks , multi - agent systems , network theory , smart grids
|
financial markets exibit a dynamic behaviour in the form of fluctuations , trends , and volatility .market regulations , globalization , changes in the interest rates , war conflicts , new technologies , social movements , news and housing are only a small sample of factors affecting the chaotic and complex structure of the financial markets . to assess all these interacting elements within a coherent theoretical economic framework to create a prediction model is , at least for now , a nearly impossible task .the behaviour of an economic system integrates a collection of emerging properties of chaotic and complex systems . from a deteministic approach ,the effort requiered to model and characterize a such system might be monumental .complex correlations in the fluctuations of financial and economic indices , the self - organization phenomena in market crashes , the sudden high growth or sharp fall in the stock market during periods of apparent stability , are only a few examples of situations that can be considered difficult to understand or represent mathematically .on one hand , from the stochastic - deterministic point of view of the physical statistics , the dynamic variation of prices in a financial market can be considered as a result of an enormous amount of interacting elements .for instance , stock prices are the result of multiple increments and decrements that result from a feedback response of every action defining the composition of an index , which result , at the same time , from decisions and flows of information that change from one moment to the next . for this reason , a detailed description of each trajectory , within the structure of a system ,would be almost imposible and futile : the series of events that gave birth to a specific trajectory might not repeat , and a detailed description would not have a predictive utility . on the other hand , by considering axiomatic that we are unable to reach a deterministic understanding of a system as whole , a statistical approach might be useful tu describe the uncertainty involved . at least we would be able to gain some insight about the expected behaviour , size of fluctuations , or the corresponding probabilities of rare events .this , with the final intention of doing forecasts on the process future behaviour .the statistical description may even predict in an essentially deterministic way , such as the diffusion equation which describe the density of particles , each one performing a random walk in a microscopic scale .according to web portal of mexican stock exchange , `` the prices and quotations index ( mexbol ) is the mexican market s main indicator , it expresses the stock market return according to the prices variations of a balanced , weighted and representative constituent list of the equities listed in the mexican stock exchange , in accordance to best practices internationally applied '' .the daily values of the mexbol or ipc index form a set of variables : the opening price , the closing price , the high an low values , the adjusted price and the transaction volume . 
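the rescaled range ( r / s ) analysis mentioned above admits a compact implementation ; although the calculations in this work were carried out in gnu - r , the following python sketch of a basic r / s estimator of the hurst exponent conveys the idea ( the window sizes and the synthetic test series are illustrative choices ) :

import numpy as np

def rescaled_range(x):
    """r/s statistic of a one-dimensional sample x."""
    y = np.cumsum(x - np.mean(x))
    r = y.max() - y.min()
    return r / np.std(x)

def hurst_rs(series, window_sizes):
    """fit log(r/s) against log(window size); the slope estimates h."""
    log_n, log_rs = [], []
    for n in window_sizes:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        rs = np.mean([rescaled_range(c) for c in chunks if np.std(c) > 0])
        log_n.append(np.log(n))
        log_rs.append(np.log(rs))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

noise = np.random.normal(size=4096)      # h close to 0.5 expected for white noise
print(hurst_rs(noise, window_sizes=[16, 32, 64, 128, 256, 512]))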
the fig .1 show the variation in time of the closing price , .to perform this research , 23 years of observations from mexbol were used , from november the 8th , 1991 , to september the 5th , 2014 .this information is publicly available in several websites , see for instance the historical prices of ipc(^mxx ) or ipc index - mexico in .this paper presents an assessment of the dynamic behaviour of closing and opening princes of the ipc index from three different perspectives : a ) analysis of the stochastic properties of the random behaviour and fluctuations of returns of closing price .b ) estimation of the degree of fractality and long term memory through the rescaled range or analysis and the hurst exponent .c ) empirical autocorrelation function analysis . all the analysis andthe numeric and visual calculations of the ipc index properties were done using the gnu - r free software environment within an ubuntu - linux 14.04 work enviroment .the return values is a regular transformation used in economic data in order to standardize and remove the trending .thus , a simple and fast way to detrending the closing price serie , fig . [ ipc ] , is given by the transformation the mean value of this new series is , i.e. , the daily closing price has a rate of return average of 7.29 per 1000 units .the corresponding standard deviation is .a comparison between normalized logarithms of returns ( mean zero and variance 1 ) with the profile created by a gaussian white noise , also with mean zero and variance 1 , is shown in fig .2 . according to figs .[ returns ] and [ gauss ] , unlike the normalized gaussian white noise , the distribution of returns or logarithmic returns is more narrower with a higher concentration of values in from the mean but with a tail somewhat heavier . from a sample or time serie , an empirical probability density distribution is given by the superposition of normalizated kernels through the kernel density estimator or kde approximation where is the bandwidth or scale parameter and is a normalizated kernel function . using the library _ kedd _ in _ gnu - r _ , an empirical probability density distributions kde - based with an optimal bandwidth for both , normalized gaussian white noise and normalized logarithmic returns , are obtained . in the fig .4 the sharpe fall of respect the normalized gaussian white noise becomes evident .{fig4.pdf } & \end{array} ] ._ a comment _: assuming that is the density probability function of the closing prices , the transformation , , define a density probability for the logarithmic returns showing an asymptotic fall with a slightly higher heavy - tail such that . to concentrate on analyzing the _ financial noise _, another way to detrending a time series is to obtain the first differences or _ jumps _ to _ nearest neighbors_. so some examples of this transformation are : the first differences within the closing prices serie , or the first differences delayed one day between the opening and closing price series , both with . a graph with the differences between the opening prices of -th day and the -th closing price of the day beforecan be observed in fig . 5 in top . 
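the detrending by returns and the kernel density estimate discussed above can be reproduced with a few lines ; the sketch below uses python ( the study itself used the gnu - r package kedd ) , a synthetic price series stands in for the real ipc closes , and the simple - return definition is an assumption about the exact transformation used :

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
closing = 1500.0 * np.cumprod(1.0 + rng.normal(0.0007, 0.014, size=5700))
# stand-in for the ~23 years of daily closes; replace with the real series

returns = np.diff(closing) / closing[:-1]      # simple daily returns
normalized = (returns - returns.mean()) / returns.std()

kde = gaussian_kde(normalized)                 # gaussian kernel, scott's-rule bandwidth
grid = np.linspace(-6.0, 6.0, 400)
print(returns.mean(), returns.std(), kde(grid).max())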
here ,much more than the logarithmic returns series the appearance of _ extreme events _ is more evident .the mean and standar deviation values for this serie are and , respectively .a special case of a transformation by differences are the first differences day by day between the closing and opening prices serie .particularly , this serie shows a symmetrical pattern around the mean value with a standard deviation of .also , although without apparent trend shows a high volatility increasing in time , see fig . 5 in bottom .{fig5.pdf } & \end{array} ] the progressive thinning of the autocorrelation function ( an biased estimator ) for a gaussian white noise shown in fig .8 is a _ border effect _ due to the scaling factor in ( [ autocorrelation ] ) .in fact for a sample gaussian white noise the distribution of the autocorrelation function for a maximum value is gaussian approximately with mean and standar deviation decaying as .nevertheless , unlike the gaussian white noise , the fluctuations of the autocorrelation function for the returns displays a strong correlation to first neighbors .moreover the maximum value in the autocorrelation function for the returns of any order occurs for a lag of . in other words , the transition of the returns to a symmetrical non gaussian autocorrelation function for , fig . 8 in bottom , suggest as a first approximation the following simple markovian random walk for the returns of closing prices where the noise is an random noise whose distribution has a stability parameter ( non gaussian ) , see table 1 . additionally the behavior of the autcorrelation function ( [ autocorrelation ] ) for the first differences of and series were estimated .the landscape of the autocorrelation function for the first differences show correlations more complex at different scales , fig . 7 on the left side . on the other hand ,the profile of the autocorrelation function for the differences are correlated beyond the first neighbors , fig . 7 on the right side .{fig7a.pdf } & \includegraphics[width = 2.3in]{fig7b.pdf } \end{array} ] a summary of some observed characteristics in ipc closing prices time series are shown in fig .10 . in this graphit is possible identified at least three `` bubble zones '' corresponding each one of them with a specific financial crash , `` the effect tequila '' of 1994 , the asian financial crisis from 1997 to 1998 and the recent global financial crisis of 2008 .the big fluctuations , argues , is an indicative of significant correlations effects of long - term , corroborated by the form of autocorrelation function between the opening price in the -th day and the closing price in the -th day , the differences .heavy tails in the series of the returns of opening or closing prices and in the delayed first differences between opening and closing prices , clearly indicates a non diffusive fluctuations dynamics or a non gaussian behavior for mexbol ipc index . at the same time , this allows the presence of long term correlations and memory effects not compatible with the concept of an efficent market . in other words, return values does not represent a simple random walk where jumps are independent with a finite variance . despite returns and the logarithmic returns of closing or opening prices , even the differences , exhibit a symmetric , stationary and homoscedastic behavior ( figs . 2 and 5 in top ) , the series of differences ( fig . 
5 in bottom )show a growing volatility over time .this sistematic increment of fluctuations size , and the average difference between closing and opening prices create a clear large scale positive trend in the serie , see the fig .1 . as before , even though the average margin of closing prices is _ small _ in a daily scale , with a value slightly bigger than 1 , , it is _ big _ enough to create a global growth of closing prices .although the hurst exponent for the return of closing prices in the mexbol index given by the are very similar and close to ( an expected value for gaussian white noise ) , the strong difference between the time series of logarithmic returns and an empirical gaussian white noise , figs . 2 , 3 , and 4 , results from the presence at different time scales of recurrent large fluctuations in returns or logarithmic returns , a behavior that captures the underlying fractal nature of many financial time series .the ipc index for closing and opening prices analysis shows an incresing growth for long time period , it means that the mexican stock market and the economic stability is a good option to investment . in spite of the recurrent crisis in past such as the tequila effect , asian crisis and the recent global crisis , the results in this research show that the mexican economy shown robustness along the past 23 years .the authors would like to thanks conacyt , promep , paicyt and the universidad autnoma de nuevo len for support this research .dicle , mehmet and levendis , john ._ the impact of technological improvements on developing financial markets : the case of the johannesburg stock exchange _ , review of development finance , * 3 * , 204 - 213 , 2013 .pi , matija and antulov - fantulin , nino and novak , petra kralj and mozeti , igor and gr , miha and vodenska , irena and , tomislav ._ cohesiveness in financial news and its relation to market volatility _ , scientific reports , * 4 * , 1 - 8 , 2014 .holyst , j. a. and , m. and urbanowicz , k. _ observations of deterministic chaos in financial time series by recurrence plots , can one control chaotic economy ?_ , the european physical journal b , * 20 * , 531 - 535 , 2001 . preis , tobias and kenett , dror y. and stanley , h. eugene and helbing , dirk and ben - jacob , eshel _ quantifying the behavior of stock correlations under market stress _ , scientific reports , * 2 * , 1 - 5 , 2012 .wang , duan and podobnik , boris and horvati , davor and stanley , h. eugene . _ quantifying and modeling long - range cross correlations in multiple time series with applications to world stock indices _ , phys .e , * 83 * , 1 - 5 , 2011 .preis , tobias and reith , daniel and stanley , h. eugene _ complex dynamics of our economic life on different scales : insights from search engine query data _ , philosophical transactions of the royal society a , * 368 * , 5707 - 5719 , 2010 . martn - del - bro , bonifacio and serrano - cinca , carlos . _self - organizing neural networks for the analysis and representation of data : some financial cases _ , neuro computing & applications , * 1 * , 193 - 206 , 1993 .sornette , didier . _ predictability of catastrophic events : material rupture , earthquakes , turbulence , financial crashes , and human birth _ , proceedings of the national academy of sciences of the united states of america , * 99 * , 2522 - 2529 , 1999 .plerou , vasiliki and gopikrishnan , parameswaran and rosenow , bernd and amaral , luis a. n. and stanley , h. 
eugene ._ econophysics : financial time series from a statistical physics point of view _ , physica a , * 279 * , 443 - 456 , 2000 .sokolov , i. m. and klafter , j. _ from diffusion to anomalous diffusion : a century after einstein s brownian motion _ , chaos : an interdisciplinary journal of nonlinear science , * 15 * , 026103 - 1 - 026103 - 7 , 2005 .diethelm wuertz and martin maechler and rmetrics core team members ._ stabledist : stable distribution functions . _ , r package version 0.6 - 6 , url = cran.r-project.org/web/packages/stabledist/index.html , 2014 .gopikrishnan , parameswaran and plerou , vasiliki . and nunes amaral , lus a. and meyer , martin and stanley , h. eugene ._ scaling of the distribution of fluctuations of financial market indices _ , physical review e , * 60 * , 5305 - 5316 , 1999 .
|
the total value of domestic market capitalization of the mexican stock exchange was calculated at 520 billion dollars by the end of november 2013 . to manage this system and make optimum capital investments , its dynamics needs to be predicted . however , randomness within the stock indexes makes forecasting a difficult task . to address this issue , in this work trends and fractality were studied over the opening and closing price series of the past 23 years . returns , kernel density estimation , the empirical autocorrelation function , rescaled range ( r / s ) analysis and the hurst exponent were used in this research . as a result , it was found that the kernel density estimate and the autocorrelation function show the presence of long - range memory effects . in a first approximation , the returns of closing prices seem to behave according to a markovian random walk with a step size given by an alpha - stable random process . for extreme values , returns decay asymptotically as a power law with a characteristic exponent approximately equal to 2.5 . * keywords * : _ financial time series , kernel density estimation , empirical autocorrelation function , r / s analysis , hurst exponent , power law _ ; * pacs * : _ gnu - r , stabledist , fbasics , pracma _ ; * msc * : 91b84 , 62m10
|
some interpretational difficulties with the standard version led david bohm to develop in the 1950 s an alternative formulation of quantum mechanics . despite initial criticisms ,this theory has recently received much attention , having experimented in the past few years an important revitalization , supported by a new computationally oriented point of view . in this way , many interesting practical applications , including the analysis of the tunnelling mechanism , scattering processes , or the classical quantum correspondence , just to name a few , have been revisited using this novel point of view . also , the chaotic properties of these trajectories , or more fundamental issues , such as the extension to quantum field theory , or the dynamical origin of born s probability rule ( one of the most fundamental cornerstones of the quantum theory ) have been addressed within this framework .most interesting in bohmian mechanics is the fact that this theory is based on quantum trajectories , `` piloted '' by the de broglie s wave which creates a ( quantum ) potential term additional to the physical one derived from the actual forces existing in the system .this term brings into the theory interpretative capabilities in terms of intuitive concepts and ideas , which are naturally deduced due to fact that quantum trajectories provide causal connections between physical events well defined in configuration and time .once this ideas have been established as the basis of many numerical studies , it becomes , in our opinion , of great importance to provide firm dynamical grounds that can support the arguments based on quantum trajectories .for example , it has been recently discussed that the chaotic properties of quantum trajectories are critical for a deep understanding of born s probability quantum postulate , considering it as an emergent property .unfortunately very little progress , i.e. rigorous formally proved mathematical results , has taken place along this line due to the lack of a solid theory that can foster this possibility .moreover , there are cases in the literature clearly demonstrating the dangers of not proceeding in this way .one example can be found in ref . , where a chaotic character was ascribed to quantum trajectories for the quartic potential , supporting the argument solely on the fact that numerically computed neighboring pairs separate exponentially .this analysis was clearly done in a way in which the relative importance of the quantum effects could not be gauged .something even worse happened with the results reported in , that were subsequently proved to be wrong in a careful analysis of the trajectories .recently , some of the authors have made in refs . what we consider a relevant advance along the line proposed in this paper , by considering the relationship between the eventual chaotic nature of quantum trajectories and the vortices existing in the associated velocity field which is given by the quantum potential , a possibility that had been pointed out previously by frisk .vortices has always attracted the interest of scientists from many different fields .they are associated to singularities at which certain mathematical properties become infinity or abruptly change , and play a central role to explain many interesting phenomena both in classical and quantum physics . in these papersit was shown that quantum trajectories are , in general , intrinsically chaotic , being the motion of the velocity field vortices a sufficient mechanism to induce this complexity . 
in this way , the presence of a single moving vortex , in an otherwise classically integrable system , is enough to make quantum trajectories chaotic . when two or a few vortices exist , their interaction may end up in their annihilation or creation in pairs with opposite vorticities . this phenomenon makes the size of the regular regions in phase space grow as vortices disappear . finally , it has been shown that when a great number of vortices are present the previous conclusions also hold , and they statistically combine in such a way that they can be related to a suitably defined lyapunov exponent , as a global numerical indicator of chaos in the quantum trajectories . summarizing , this makes chaos the general dynamical scenario for quantum trajectories , and this is due to the existence and motion of the vortices of the associated velocity field . in this paper , we extend and rigorously justify the numerical results in concerning the behavior of quantum trajectories and their structure by presenting the general analysis of a particular problem of general interest , namely a two dimensional harmonic oscillator , where chaos does not arise for classical reasons . in this way , we provide a systematic classification of all possible dynamical behaviors of the existing quantum trajectories , based on the application of dynamical systems theory . this classification provides a complete `` road map '' which makes possible a deep understanding , put on firm grounds , of the dynamical structure of the problem being addressed . the bohmian mechanics formalism of quantum trajectories starts from the suggestion made by madelung of writing the wave function in polar form where and are two real functions of position and time . for convenience , we set throughout the paper , and consider a particle of unit mass . substitution of this expression into the time - dependent schrödinger equation allows one to recast the quantum theory into a `` hydrodynamical '' form , which is governed by , which are the continuity and the `` quantum '' hamilton - jacobi equations , respectively . the qualifying term in the last expression is customarily included since this equation contains an extra non - local contribution ( determined by the quantum state ) , , called the `` quantum '' potential . together with , this additional term determines the total forces acting on the system , and it is responsible for the so - called quantum effects in the dynamics of the system . similarly to what happens in the standard hamilton - jacobi theory , eqs .
and allow to define , for spinless particles , quantum trajectories by integration of the differential equations system : .alternatively , one can consider the velocity vector field notice that , in general , this bohmian vector field is not hamiltonian , but it may nevertheless have some interesting properties .in particular , for the example considered in this paper it will be shown that it is time - reversible , this symmetry allowing the study of its dynamics in a systematic way .let us recall that a system , , is time - reversible if there exists an involution , , that is a change of variables satisfying and , such that the new system results in .one of the dynamical consequences of reversibility is that if is a solution , then so it is .this fact introduces symmetries in the system giving rise to relevant dynamical constraints .for example , let us assume that is a time - reversible symmetry ( see fig .[ fig : reversible ] ) .then any solution defines another solution given by .let us remark that this fact constraints the system dynamics since if , for example , crosses the symmetry axis ( is invariant under ) then the two solutions must coincide .we conclude this section by stressing that time - reversible systems generated a lot of interest during the 80 s due to the fact that they exhibit most of the properties of hamiltonian systems ( see ) .in particular , this type of systems can have quasi - periodic tori which are invariant under both the flow and the involution .that is , kam theory fully applies in this context .furthermore , some interesting results concerning the splitting of separatrices have been developed successfully for time - reversible systems , providing powerful tools for the study of homoclinic and heteroclinic chaos .the system that we choose to study is the two dimensional isotropic harmonic oscillator . without loss of generality ,the corresponding hamiltonian operator for can be written in the form in this paper , we consider the particular combination of eigenstates : with energy , and , with energy .it can be immediately checked that the time evolution of the resulting wave function is given by where may look arbitrary at this point , but it makes simpler the notation for the canonical form introduced in the next section . ] , and , subject to the usual normalization condition .in addition , we further assume the condition in order to ensure the existence of a unique vortex in the velocity field at any time .accordingly , the quantum trajectories associated to are solutions of the system of differential equations : where to integrate this equation a 7/8th order runge - kutta - fehlberg method has been used . moreover , since the vector field is periodic , the dynamics can be well monitored by using stroboscopic sections .in particular , we plot the solution at times for and for several initial conditions . 
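the integration and sampling procedure just described can be sketched numerically . the following python fragment is only illustrative : the explicit superposition coefficients and the closed - form velocity field are not reproduced in the text , so a generic superposition of two dimensional harmonic oscillator eigenstates with placeholder coefficients a , b , c is used , the bohmian velocity is obtained as the imaginary part of the gradient of psi divided by psi ( with hbar and the mass set to one , as assumed above ) , and an off - the - shelf high - order adaptive runge - kutta routine stands in for the 7/8th order runge - kutta - fehlberg integrator ; the field is taken to be 2*pi - periodic , consistent with a unit energy spacing between the two states .

```python
import numpy as np
from scipy.integrate import solve_ivp

# 2-d harmonic-oscillator eigenstates (hbar = m = omega = 1); normalization
# constants are omitted because they cancel in grad(psi)/psi.
def phi00(x, y): return np.exp(-(x**2 + y**2) / 2)
def phi10(x, y): return x * np.exp(-(x**2 + y**2) / 2)
def phi01(x, y): return y * np.exp(-(x**2 + y**2) / 2)

# illustrative superposition; a, b, c are placeholders for the (elided) values
# used in the paper.  energies: E00 = 1, E10 = E01 = 2.
a, b, c = 0.8, 0.4, 0.3j

def psi(x, y, t):
    return (a * phi00(x, y) * np.exp(-1j * t)
            + (b * phi10(x, y) + c * phi01(x, y)) * np.exp(-2j * t))

def bohm_velocity(t, z, h=1e-6):
    """bohmian velocity v = Im(grad(psi)/psi), here evaluated by finite differences."""
    x, y = z
    p = psi(x, y, t)
    dpx = (psi(x + h, y, t) - psi(x - h, y, t)) / (2 * h)
    dpy = (psi(x, y + h, t) - psi(x, y - h, t)) / (2 * h)
    return [np.imag(dpx / p), np.imag(dpy / p)]

# stroboscopic section: sample each trajectory once per period of the field.
T, n_periods = 2 * np.pi, 100
t_eval = T * np.arange(n_periods + 1)
sections = []
for x0, y0 in [(0.5, 0.0), (1.0, 0.5), (-0.8, 0.3)]:      # a few initial conditions
    sol = solve_ivp(bohm_velocity, (0, T * n_periods), [x0, y0],
                    t_eval=t_eval, rtol=1e-10, atol=1e-12, method="DOP853")
    sections.append(sol.y)        # columns are the stroboscopic points to be plotted
```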
in fig .[ fig : phase : noca ] we show the results of two such stroboscopic sections .as can be seen the left plot corresponds to completely integrable motions , whereas in the right one sizeable chaotic zones coexisting with stability islands , this strongly suggesting the applicability of the kam scenario .however , our vector field is neither hamiltonian nor time - reversible , and then the kam theory does not directly apply to this case .however , we will show how a suitable change of variables can be performed that unveils a time - reversible symmetry existing in our vector field .for this purpose , we first recall that the structure of gradient vector fields is preserved under orthogonal transformations . in this way , if we consider the transformation , with , applied to , we have that , being . in other words , any orthogonal transformation can be performed on the wave function instead of on the vector field .[ lem : cano ] if eq .satisfies the non - degeneracy condition , then there exist an orthogonal transformation and a time shift , such that the wave function takes the form where , , satisfy .we will refer to the wave function as the canonical form of , and the rest of the paper is devoted to the study of this case . for this reason , the hat in the coefficients will be omitted , since it is understood that . in table [ tab : prop ] we give the actual values the canonical coefficients after the transformation corresponding to the results in fig .[ fig : phase : noca ] . for convenience , we consider the complexified phase space , so that the wave function results in where , , and . then , it is easy to check that the vortex , i.e. the set of points where the wave function vanishes , has the following position with respect to time where , and .notice that the vortex is well defined thanks to the non - degeneracy assumption , and its trajectory follows an ellipse .this ellipse does not appear in the usual canonical form , but this can be made so by performing the rotation : and the time shift : . in this way where it is clear that by choosing and , the desired result is obtained .then , the corresponding wave function in these new coordinates is that can be further simplified since the factor plays no role in the bohmian equations for the quantum trajectories . finally , by recovering the coefficients in cartesian coordinates, one obtains , and , which renders eq . .throughout the rest of the paper we consider the wave function with , and .let us remark that by changing the time , if necessary , we can further restrict the study to the case .the corresponding quantum trajectories are then obtained from the vector field where . in these coordinates ,the only vortex of the system follows the trajectory given by which corresponds to an ellipse of semi - axes and , respectively . in fig .[ fig : phase1 ] we show some stroboscopic sections corresponding to this ( canonical ) velocity field for different values of the parameters and .as can be seen , a wide variety of dynamical behaviors , characteristics of a system with mixed dynamics , is found . in the left - top panel , which corresponds to the case in which ( vortex moving in a circle ), we have sections corresponding to a totally integrable case . as we move from left to right and top to bottom some of these toriare broken , and these areas of stochasticity coexist with others in which the motion is regular , this including different chains of islands. 
moreover , the size of the chaotic regions grows as the value of separates from that of .this variety of results can be well understood and rationalize by using some standard techniques of the field of dynamical systems , in the following way . although the vector field is not hamiltonian , it is time - reversible with respect to the involution .this result is very important for the purpose of the present paper , since it implies that the kam theory applies to our system if we are able to write down our vector field in the form , , being the dynamics corresponding to integrable and time - reversible .more specifically , let us assume that does not depend on and be -periodic with respect to .moreover , let us assume that for there exists a family of periodic orbits whose frequency varies along the family ( non - degeneracy condition ) .then , our result guarantees that when the effect of the perturbation is considered , most of the previous periodic orbits give rise to invariant tori of frequencies , where is the frequency of the unperturbed periodic orbit .of course , the persistence of these objects is conditioned to the fact that the vector satisfies certain arithmetic conditions ( see for details ) . since these arithmetic conditions are fulfilled for a big ( in the sense of the lebesgue measure ) set of the initial orbits , the important hypothesis that we have to check in order to ascertainthe applicability of the kam theory is the non - degeneracy of the frequency map . in our problem ,two such integrable cases exists .first , if the vortex is still at the origin and the time periodic part in the vector field disappears . as a consequence ,all the quantum orbits of the system appear as ellipses centered at the origin in the -plane .it will be shown in the next section that the corresponding frequency varies monotonically along the orbits .this case has not been explicitly included in fig .[ fig : phase1 ] due to its simplicity .second , and as will be analyzed in sect .[ ssec : nonautonomous ] , if , or equivalently , that is the vortex moves in a circle , the vector field is also integrable for any value of .the corresponding stroboscopic sections are shown in the top - left panel of fig .[ fig : phase1 ] ) . here , the structure of the phase space changes noticeably , since two new periodic orbits , one stable and the other unstable , appear .moreover , the obtained integrable vector field depends on .we will show that this time dependence can be eliminated by means of a suitable change of coordinates , showing that our problem remains in the context described in the previous paragraph .the rest of the panels in fig .[ fig : phase1 ] can be understood as the evolution of this structure as the perturbation , here represented by the difference between and , as dictated by the kam theorem . to conclude the paper ,let us now discuss in detail the two integrable cases in the next two sections .for , and it is easily seen that the vector field is integrable . actually , the orbits of the quantum trajectories in the -plane are ellipses around the origin ( position at which the vortex is fixed ) . 
also , the frequency of the corresponding trajectories approaches infinity as they get closer to the vortex position .let us now compute the frequency of these solutions .first , we introduce a new time variable , satisfying , and then solve the resulting system , thus obtaining where is the distance from the vortex .next , we recover the original time , , by solving the differential equation defining the previous change of variables whose solution is given by notice that this equation is invertible since , and then , being a -periodic function . finally , introducing this expression into, one can conclude that the solution has a frequency given by that varies monotonically with respect to the distance to the vortex .then , for , the existence of invariant tori around the vortex is guaranteed .let us consider now the case of non - vanishing values of for any . in this case, we have , and system can be written as where .this vector field corresponds to the following hamiltonian that it is actually integrable .[ lem : integra ] let us consider ( energy ) , the symplectic variable conjugate to , and define the autonomous hamiltonian , then we have that is a first integral of , in involution and functionally independent . as a consequence , if the system is completely integrable .it is straightforward to see that the poisson bracket with respect to the canonical form satisfies .moreover , does not depend on , so that it is an independent first integral .taking these results into account , one can completely understand the picture presented in the top - left plot of fig .[ fig : phase1 ] .since the system is integrable , it is foliated by invariant tori , despite the two periodic orbits that are created by a resonance introduced when parameter changes from to .next , we characterize these two periodic orbits : [ lem : periodic ] if , the system has two periodic orbits given by where the coefficients and are given by . moreover , the orbit is hyperbolic with characteristic exponents , and is elliptic with characteristic exponents . if the same result holds just switching the roles of and .it is known that if the sets are bounded differentiable submanifolds , their connected components carry quasi - periodic dynamics .moreover , the critical points of determine the periodic orbits of the system .therefore , these periodic orbits are given by expressions : , and , which can also be written as from which we obtain our two periodic orbits : , and , with .in addition , it is easy to check that and , respectively .finally , the stability of these orbits can be obtained by considering the following associated variational equations , solutions for this equation can be easily obtained by using the complex variable , and solving .we have the following set of fundamental solutions for the hyperbolic case , and for the elliptic one .finally , the corresponding characteristic exponents can be obtained by a straightforward computation of the monodromy matrix .notice that the chaotic sea observed in fig .[ fig : phase1 ] is associated to the intersection of the invariant manifolds of the hyperbolic periodic orbit that we have computed .now , and in order to apply kam theorem , we compute locally the frequency map of this unperturbed system around the vortex and the elliptic periodic orbit . to this end, we perform a symplectic change of coordinates in a neighborhood of these objects in order to obtain action - angle variables up to third order in the action . 
in general ,let be a hamiltonian that is -periodic with respect to and has a first integral , .let us consider the generating function , , determining a symplectic change of variables defined implicitly by where is also -periodic .this transformation is introduced in such a way that the new hamiltonian depends only on since a first integral of the system is known , we can define the corresponding action as from eq ., we obtain locally the equation , so that we have . introducing this expression into, we obtain the following equation for : and can conclude that , since must be -periodic with respect to , then has to satisfy where denotes average with respect to . finally , we notice that since is a first integral , we can define so that becomes -periodic .computations are simplified observing that the left hand side in eq .does not depend on ( we use the fact that is a first integral ) , so we can set . according to this , we have to solve , where and then we have to compute the average , where .first , let us consider a neighborhood of the vortex for . to this end, we introduce the new variables and , so that the hamiltonian and the first integral are [ prop : vor ] there exist a symplectic change of variables , with , setting the vortex at , such that the new hamiltonian becomes according to the above discussion , we have .then , by introducing this expression in ones obtains finally , we only have to compute the first terms in the expansion of obtaining and use that and .on the other hand , a neighborhood of the elliptic periodic orbit for can be studied by means of the variables and .one thus obtains that the hamiltonian and the first integral are given by where .[ prop : op ] there exist a symplectic change of variables , with , setting the periodic orbit at , such that the new hamiltonian becomes where we have introduced the notation and also as before , we consider a solution for the equation . for convenience ,we introduce the notation in order to set the periodic orbit at .then , it turns out that the expression approximates the previous equation for and that the following expansion in terms of holds , where hence , we have to compute the average of that follows from the fact that , and .these averages are computed easily by using the method of residues .in this paper we present an scheme to study in a systematic way the intrinsic stochasticity and general complexity of the quantum trajectories that are the basis of quantum mechanics in the formalism developed by bohm in the 1950 s . in our opinionthis approach , which based on the ideas and results of the dynamical systems theory , can seriously contribute to establish firm grounds that foster the importance of the conclusions of future studies relying on such trajectories , thus avoiding errors and ambiguities that has happened in the past .as an illustration we have considered the simplest , non trivial combination of eigenstates of the two dimensional isotropic harmonic oscillator .the corresponding velocity field is put in a so called canonical form , and the characteristics of the corresponding quantum trajectories studied in detail .it is proved that only one vortex and two periodic orbits , one elliptic and the other hyperbolic , organize the full dynamics of the system . 
in it , there exist invariant tori associated with the vortex and the elliptic periodic orbit . moreover , there is a chaotic sea associated with the hyperbolic periodic orbit . the kam theory has been applied to this scenario by resorting to a suitable time - reversible symmetry , which is directly observed in the canonical form of the velocity field determining the quantum trajectories of the system . it should be remarked that the results reported here concerning the hyperbolic periodic orbit constitute a generalization of those previously reported in , in the sense that here a more concise and constructive approach to the associated dynamics is presented . we summarize the dynamical characteristics of the different possibilities arising from the canonical velocity field in table [ table : roadmap ] , which represents a true road map to navigate across the dynamical system , i.e. the quantum trajectories , defined based on the pilot effect of the wave function . also , note that the generic model , i.e. when or do not vanish , does not satisfy any of the properties considered in the table . finally , the method presented here is , in principle , generalizable to other more complicated situations in which more vortices and effective dimensions exist . some methods have been described in the literature that can be applied to these situations . this will be the subject of future work . this work has been supported by the ministerio de educación y ciencia ( spain ) under projects fpu ap2005 - 2950 , mtm2006 - 00478 , mtm200615533 and consolider 200632 ( i math ) , and the comunidad de madrid under project s0505/esp0158 ( simumat ) . the authors gratefully acknowledge useful discussions with carles simó , and thank gemma huguet for her encouragement . a.l . also thanks the departamento de química at uam for its hospitality during different stays along the development of this work .
|
vortices are known to play a key role in the dynamics of the quantum trajectories defined within the framework of the de broglie - bohm formalism of quantum mechanics . it has been rigorously proved that the motion of a vortex in the associated velocity field can induce chaos in these trajectories , and numerical studies have explored the rich variety of behaviors that can be observed due to their influence . in this paper , we go one step further and show how the theory of dynamical systems can be used to construct a general and systematic classification of such dynamical behaviors . this should contribute to establishing firm grounds on which studies on the intrinsic stochasticity of bohm s quantum trajectories can be based . an application to the two dimensional isotropic harmonic oscillator is presented as an illustration .
|
for non - equilibrium phenomena , many relations have been proposed about the current fluctuations .the first relation being proposed is fluctuation theorem which was demonstrated experimentally and is shown to be consistent with green - kubo relations .fluctuations is closely related to probability distribution .the gaussian distribution has been observed in simulation and experiment .in fact , the gaussian distribution can be obtained by statistical analysis if it arises from the binomial distribution .when a weak external force is applied , the current fluctuations will be the same except that the mean value will shift by an amount proportional to the external force .one example is ionic conduction , see fig .[ ions ] . herethe external force is electric field . for a given electric field , the larger the electric current ,the larger the entropy production rate will be . thus the electric field always tends to increase the electric current in order to maximize the entropy production .this tendency must be counterbalanced .a competition has been identified between the entropy production rate and the current fluctuations .specifically , the competition is between environment entropy and system entropy which is to describe the intensity of the current fluctuations . in this paper, we study the ionic conduction with both statistical analysis and entropy competition to produce a fluctuation relation .a time parameter is introduced into the relation for the convenience of simulation and experiment .now let us derive the ionic electrical conductivity .we will do this in a new way different from the ways in work .[ ions ] shows a solid ionic conductor in the equilibrium state .the conductor has interstitial ions , which during a time interval make jumps , where is the ion number density , is the conductor volume and is the mean jump time .of all the jumps , of them are in the up ( + ) direction and are in the down ( - ) direction . here follows a binomial distribution , which , when , will become a normal distribution , for a given , there is a corresponding electric current , where is the ion charge and is the lattice constant .therefore , eq . ( [ pk ] ) can be recast as by introducing a variance , it can be written as if the total number of the microscopic states is ( for our example , ) , the number of microscopic states for a given is the corresponding entropy is where is a constant .we call this the system entropy . 
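the jump model just described is straightforward to simulate . the sketch below is only illustrative : all parameter values are hypothetical , the expression for the current in terms of the number of up - jumps k is written out explicitly as j = q a ( 2k - n ) / ( v t ) since the corresponding formula is elided above , and the final conductivity estimate anticipates the fluctuation relation derived further below , assuming the particular combination sigma = var(j) v t / ( k_b t_temp ) , which is chosen here because it reproduces the textbook result sigma = n q^2 a^2 / ( k_b t_temp tau ) ; the exact constants in the paper s relation are not reproduced in this excerpt .

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical parameters (si-like units); all values are placeholders.
n_dens = 1e26     # interstitial-ion number density [1/m^3]
V      = 1e-18    # conductor volume [m^3]
tau    = 1e-9     # mean jump time [s]
q      = 1.6e-19  # ion charge [C]
a      = 3e-10    # lattice constant [m]
kB     = 1.38e-23 # boltzmann constant [J/K]
T      = 300.0    # temperature [K]
t      = 1e-6     # observation window [s]

N = int(n_dens * V * t / tau)        # number of jumps during the window

# equilibrium simulation: each jump is up (+) or down (-) with probability 1/2,
# so the number of up-jumps k is binomial(N, 1/2); assumed current expression:
# J = q * a * (2k - N) / (V * t).
samples = 20000
k = rng.binomial(N, 0.5, size=samples)
J = q * a * (2 * k - N) / (V * t)
var_J = J.var()

# assumed fluctuation relation (illustrative constants, see lead-in above):
sigma_est = var_J * V * t / (kB * T)
sigma_ref = n_dens * q**2 * a**2 / (kB * T * tau)   # textbook ionic conductivity
print(f"sigma from fluctuations: {sigma_est:.3e} S/m, reference: {sigma_ref:.3e} S/m")
```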
the system entropy is to describe the system in the equilibrium state .the system for this example consists of only those interstitial ions that are responsible for the electric current .when there exists an external electric field , an environment entropy also has to be introduced , where is the temperature and is a constant .the environment entropy is introduced by considering where is heat generated by the electric current in the form .the environment here includes everything except the system .it includes the surroundings and the lattice of the conductor .consequently , the overall entropy is therefore the probability distribution is which indicates that the most probable current is for a macroscopic system in linear regime ( ) , the most probable current is equal to the average current which is associated with electrical conductivity .therefore , the electrical conductivity is extracted as by eq .( [ lastone ] ) , it becomes .this is the ionic electrical conductivity that has been obtained in studies .but it has been derived here again in a new way , especially a parameter has been introduced .although the relation ( [ sigma_e ] ) has been derived theoretically from statistical analysis , it can be extended to be suitable for computer simulations .for that , we write which can be observed in simulations . consequently the electrical conductivity is in the form this is a novel relation for ionic conduction between electrical conductivity and electric current fluctuations .note that and are about the equilibrium state .also note that there is no requirement for .but it is safe to choose .for ionic conductivity , we have presented a relation between electrical conductivity and electric current fluctuations .electric current fluctuations is described by gaussian distribution , from which the system entropy is obtained .but when an electric field exists , an additional entropy environment entropy has to be introduced .the two entropies add .the total entropy gives the current probability distribution .a relation about the electrical conductivity is finally obtained .but note that the relation is obtained from the example of ionic conduction which has a unique property : an ion jumps from site to site , which is very different from the way that an ion move through an open space or a liquid .this means that the relation can not apply directly to ionic conduction in gas or liquid .99 u. m. b. marconi , a. puglisi , l. rondoni , a. vulpiani , phys .* 461 * ( 2008 ) 111 .sevick , r. prabhakar , stephen r. williams , and debra j. searles , annu .phys . chem . *59 * ( 2008 ) 603 .a. alemany , m. ribezzi and f. ritort , aip conf .1332 , 96 ( 2011 ) .denis j. evans , e. g. d. cohen , and g. p. morriss , phys . rev . lett . * 71 * ( 1993 ) 2401 .g. m. wang , e. m. sevick , emil mittag , debra j. searles , and denis j. evans , phys .lett . * 89 * ( 2002 ) 050601 debra j. searles and denis j. evans , j. chem. phys . * 112 * , ( 2000 ) 9727 .m. s. green , j. chem .phys * 22 * ( 1954 ) 398 .r. kubo , j. phys .* 12 * ( 1957 ) 570 .e. seitaridou , mi . inamdau , r. phillips , k. ghosh , k. dill .b. * 111 * ( 2007 ) 2288 .h. ziegler , c. wehrli , j. non - equilib .* 12 * ( 3 ) ( 1987 ) 229 .g. w. paltridge , quart .j. royal meteorol . soc .* 104 * ( 1978 ) 927. l. m. martyushev , v .d. seleznev , phys . rep . * 426 * ( 2006 ) 1 .g. p. beretta , int .* 5 * ( 2007 ) 249 .r. c. dewar , j. phys . a : math* 38 * ( 2005 ) l371 . y. j. zhang , physica a * 391 * ( 2012 ) 4470 .y. j. 
zhang , physica a * 396 * ( 2014 ) 88 .
|
this paper reports a relation between ionic conductivity and electric current fluctuations . the relation is derived using statistical analysis and an entropy approach , and it can be used to calculate the ionic conductivity from equilibrium current fluctuations .
|
signal processing problems such as coding , restoration , classification , regression or synthesis greatly depend on an appropriate description of the underlying probability density function ( pdf ) . however , density estimation is a challenging problem when dealing with high - dimensional signals because direct sampling of the input space is not an easy task due to the curse of dimensionality . as a result, specific problem - oriented pdf models are typically developed to be used in the bayesian framework .the conventional approach is to transform data into a domain where _ interesting _ features can be easily ( i.e. marginally ) characterized . in that case , one can apply well - known marginal techniques to each feature independently and then obtain a description of the multidimensional pdf .the most popular approaches rely on linear models and statistical independence . however , they are usually too restrictive to describe general data distributions . for instance ,principal component analysis ( pca ) , that reduces to dct in many natural signals such as speech , images and video , assumes a gaussian source .more recently , linear ica , that reduces to wavelets in natural signals , assumes that observations come from the linear combination of independent non - gaussian sources . in general, these assumptions may not be completely correct , and residual dependencies still remain after the linear transform that looks for independence . as a result , a number of problem - oriented approaches have been developed in the last decade to either describe or remove the relations remaining in these linear domains .for example , parametric models based on joint statistics of wavelet coefficients have been successfully proposed for texture analysis and synthesis , image coding or image denoising .non - linear methods using non - explicit statistical models have been also proposed to this end in the denoising context and in the coding context . in function approximation and classification problems, a common approach is to first linearly transform the data , e.g. with the most relevant eigenvectors from pca , and then applying nonlinear methods such as artificial neural networks or support vector machines in the reduced dimensionality space . identifying the _ meaningful _ transform for an easier pdf description in the transformed domainstrongly depends on the problem at hand . in this workwe circumvent this constraint by looking for a transform such that the transformed pdf is known . even in the casethat this transform is qualitatively _ meaningless _ , being differentiable , allows us to estimate the pdf in the original domain .accordingly , in the proposed context , the role ( _ meaningfulness _ ) of the transform is not that relevant .actually , as we will see , an infinite family of transforms may be suitable to this end , so one has the freedom to choose the most convenient one . in this work ,we propose to use a unit covariance gaussian as target pdf in the transformed domain and iterative transforms based on arbitrary rotations .we do so because the match between spherical symmetry and rotations makes it possible to define a cost function ( negentropy ) with nice theoretical properties .the properties of negentropy allow us to show that one gaussianization transform is always found no matter the selected class of rotations .the remainder of the paper is organized as follows . in section [ motivation ]we present the underlying idea that motivates the proposed approach to gaussianization . 
in section [ ourapproach ] ,we give the formal definition of the rotation - based iterative gaussianization ( rbig ) , and show that the scheme is invertible , differentiable and it converges for a wide class of orthonormal transforms , even including random rotations .section [ relationspp ] discusses the similarities and differences of the proposed method and projection pursuit ( pp ) .links to other techniques ( such as single - step gaussianization transforms , one - class support vector domain descriptions ( svdd ) , and deep neural network architectures ) are also explored .section [ applications ] shows the experimental results .first , we experimentally show that the proposed scheme converges to an appropriate gaussianization transform for a wide class of rotations .then , we illustrate the usefulness of the method in a number of high - dimensional problems involving pdf estimation : image synthesis , classification , denoising and multi - information estimation . in all cases ,rbig is compared to related methods in each particular application .finally , section [ conclusions ] draws the conclusions of the work .this section considers a solution to the pdf estimation problem by using a differentiable transform to a domain with known pdf . in this setting , different approaches can be adopted which will motivate the proposed method .let be a -dimensional random variable with ( unknown ) pdf , .given some bijective , differentiable transform of into , , such that , the pdfs in the original and the transformed domains are related by : where is the determinant of the jacobian matrix .therefore , the unknown pdf in the original domain can be estimated from a transform of known jacobian leading to an appropriate ( known or straightforward to compute ) target pdf , .one could certainly try to figure out direct ( or even closed form ) procedures to transform particular pdf classes into a target pdf .however , in order to deal with any possible pdf , iterative methods seem to be a more reasonable approach . in this case, the initial data distribution should be iteratively transformed in such a way that the target pdf is progressively approached in each iteration . the appropriate transform in each iteration would be the one that maximizes a similarity measure between pdfs . a sensible cost function here is the kullback - leibler divergence ( kld ) between pdfs . in order to apply well - known properties of this measure , it is convenient to choose a unit covariance gaussian as target pdf , . with this choice , the cost function describing the divergence between the current data , , andthe unit covariance gaussian is the hereafter called negentropy and a multivariate gaussian of the same mean and covariance .however , note that this difference has no consequence assuming the appropriate input data standardization ( zero mean and unit covariance ) , which can be done without loss of generality . ] , = .negentropy can be decomposed as the sum of two non - negative quantities , the multi - information and the marginal negentropy : this can be readily derived from eq .( 5 ) in , by considering as contrast pdf .the multi - information is : multi - information measures statistical dependence , and it is zero if and only if the different components of are independent . the marginal negentropy is defined as : given a data distribution from the unknown pdf , in general both and will be non - zero . the decomposition in suggests two alternative approaches to reduce : 1 . 
_ reducing _ : this implies looking for interesting ( independent ) components . if one is able to obtain , then , and this reduces to solving a marginal problem . marginal negentropy can be set to zero with the appropriate set of dimension - wise gaussianization transforms , . this is easy , as will be shown in the next section . however , this is an ambitious approach , since looking for independent components is a non - trivial ( intrinsically multivariate and nonlinear ) problem . according to this , linear ica techniques will not succeed in completely removing the multi - information , and thus a nonlinear post - processing is required . 2 . _ reducing _ : as stated above , this univariate problem is easy to solve by using the appropriate . note that will remain constant , since it is invariant under dimension - wise transforms . in this way , one ensures that the cost function is reduced by . then , a further processing step has to be taken in order to come back to a situation in which one may have the opportunity to remove again . this additional transform may consist of applying a rotation to the data , as will be shown in the next section . the relevant difference between the approaches is that , in the first one , the important part is looking for the interesting representation ( multivariate problem ) , while in the second approach the important part is the univariate gaussianization . in this second case , the class of rotations has no special qualitative relevance : in fact , marginal gaussianization is the only part reducing the cost function . in this work , we proposed an alternative solution to the pdf estimation problem by using a family of rotation - based iterative gaussianization ( rbig ) transforms . the proposed procedure looks for differentiable transforms to a gaussian so that the unknown pdf can be computed at any point of the original domain using the jacobian of the transform . the rbig transform consists of the iterative application of a univariate marginal gaussianization followed by a rotation . we show that a wide class of orthonormal transforms ( including trivial random rotations ) is well suited to gaussianization purposes . the freedom to choose the most convenient rotation is the difference from formally similar techniques , such as projection pursuit , which focus on looking for interesting projections ( an intrinsically more difficult problem ) . in this way , here we propose to shift the focus from ica to a wider class of rotations , since interesting projections as found by ica are not critical for solving the pdf estimation problem in the original domain . the suitability of multiple rotations to solve the pdf estimation problem may help to revive the interest in classical iterative gaussianization in practical applications . as an illustration , we showed promising results in a number of multidimensional problems such as image synthesis , classification , denoising , and multi - information estimation . particular issues in each of the possible applications , such as establishing a convenient family of rotations for a good jacobian or convenient criteria to ensure the generalization ability , are a matter for future research . the authors would like to thank matthias bethge for his constructive criticism of the work , and eero simoncelli for the stimulating discussion on ` meaningful - vs - meaningless transforms ' . athinodoros s. georghiades , peter n. belhumeur , and david j.
kriegman , `` from few to many : illumination cone models for face recognition under variable lighting and pose , '' , vol .23 , pp . 643660 , 2001 .l. gmez - chova , d. fernndez - prieto , j. calpe , e. soria , j. vila - francs , and g. camps - valls , `` urban monitoring using multitemporal sar and multispectral data , '' , vol .4 , pp . 234243 , mar 2006 ,
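before moving on , a minimal sketch of the marginal - gaussianization - plus - rotation iteration discussed in this paper may be useful . it is only illustrative : the marginal gaussianization is implemented through the empirical cdf followed by the inverse gaussian cdf , the rotations are random orthonormal matrices ( one admissible choice according to the text , obtained here by qr - factorizing a gaussian matrix ) , and the bookkeeping of the jacobian needed to evaluate the estimated pdf in the original domain is omitted for brevity .

```python
import numpy as np
from scipy.stats import norm

def marginal_gaussianization(x):
    """map each column to a standard gaussian through its empirical cdf."""
    n, d = x.shape
    out = np.empty_like(x, dtype=float)
    for j in range(d):
        ranks = np.argsort(np.argsort(x[:, j]))      # ranks 0 .. n-1
        out[:, j] = norm.ppf((ranks + 0.5) / n)      # inverse gaussian cdf
    return out

def rbig(x, n_iter=100, seed=0):
    """rotation-based iterative gaussianization: marginal gaussianization
    followed by a random orthonormal rotation, repeated n_iter times."""
    rng = np.random.default_rng(seed)
    y = np.asarray(x, dtype=float).copy()
    for _ in range(n_iter):
        y = marginal_gaussianization(y)
        q, _ = np.linalg.qr(rng.standard_normal((y.shape[1],) * 2))  # random rotation
        y = y @ q
    return y

# toy example: a noisy ring, strongly non-gaussian in two dimensions
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 5000)
x = np.c_[np.cos(theta), np.sin(theta)] + 0.1 * rng.standard_normal((5000, 2))
y = rbig(x)
print(np.round(np.cov(y, rowvar=False), 2))   # close to the identity after enough iterations
```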
|
most signal processing problems involve the challenging task of multidimensional probability density function ( pdf ) estimation . in this work , we propose a solution to this problem by using a family of rotation - based iterative gaussianization ( rbig ) transforms . the general framework consists of the sequential application of a univariate marginal gaussianization transform followed by an orthonormal transform . the proposed procedure looks for differentiable transforms to a known pdf so that the unknown pdf can be estimated at any point of the original domain . in particular , we aim at a zero mean unit covariance gaussian for convenience . rbig is formally similar to classical iterative projection pursuit ( pp ) algorithms . however , we show that , unlike in pp methods , the particular class of rotations used has no special qualitative relevance in this context , since looking for _ interestingness _ is not a critical issue for pdf estimation . the key difference is that our approach focuses on the univariate part ( marginal gaussianization ) of the problem rather than on the multivariate part ( rotation ) . this difference implies that one may select the most convenient rotation suited to each practical application . the differentiability , invertibility and convergence of rbig are theoretically and experimentally analyzed . relation to other methods , such as radial gaussianization ( rg ) , one - class support vector domain description ( svdd ) , and deep neural networks ( dnn ) is also pointed out . the practical performance of rbig is successfully illustrated in a number of multidimensional problems such as image synthesis , classification , denoising , and multi - information estimation . gaussianization , independent component analysis ( ica ) , principal component analysis ( pca ) , negentropy , multi - information , probability density estimation , projection pursuit .
|
in many multi - product inventory management systems several factors must be considered when designing ordering policies .first and foremost , when the demand is stochastic , the system is subject to shortages in any time interval between two successive replenishments , whereas high stocking levels may incur unnecessary inventory costs .the problem is intensified when there is also uncertainty in the replenishment times .when products are stored and sold from a common facility with limited capacity they compete for storage space , which in turn introduces dependencies in the inventory process even though the individual demand processes may be independent . on the other hand, there is often some degree of demand substitution between different products . the substitution possibility must generally be taken into account when designing effective inventory replenishment policies , as it could attenuate the constraints imposed by limited storage capacity .in this paper we study the interactions between demand and replenishment time uncertainty , constrained storage capacity and demand substitution , in a two - product joint replenishment system with lost sales and base - stock replenishment .the base - stock framework is appropriate when there are small or no fixed ordering costs .this is a reasonable assumption when replenishment epochs are exogenously planned and not determined by inventory levels , for example in perishable product settings , or when a supplier makes periodic deliveries according to her own schedule and retailers can purchase product to replenish their stock only at those times .we assume that the two products are purchased at fixed unit costs and sold at fixed retail prices , thus the shortage cost is manifested as foregone profit due to lost sales . on the other hand , to model inventory costs ,we impose a , generally product dependent , inventory holding cost for each unsold unit of product at the end of a replenishment cycle .the essential feature of our model is the two - way partial demand substitution .we assume that when a customer does not find the product he prefers in stock while the other product is available , he will buy it with a given probability . by indirectly introducing a degree of demand pooling between different products ,substitution helps deal with demand uncertainty , especially in situations where storage capacity is tight and wrong stocking decisions may lead to severe shortages in some products and at the same time unsold quantities in others . on the other hand ,accounting for substitution in designing inventory policies introduces substantial complications in the analysis .the main reason is that in general whether a customer will buy a substitute product depends on whether his original choice is available at the time of his arrival to the system .therefore , in general , sales , shortage and unsold quantities depend not only on the total demand of each product during a replenishment cycle , but also on the timing of arrivals of individual demands of each product . 
for this reason , a usual approximation in analyzing inventory problems under substitutionis to assume that the quantities demanded as substitutes are a fixed deterministic proportion of the demand of one product which is added to the total demand of the other product .the main contribution of our model is that it endogenizes the substitution process by modeling the demand arrivals for the two products as independent poisson processes .this allows for a more realistic assessment of the costs of a replenishment policy and , thus , of the resulting optimal policies . on the other hand , analytic expressions for the profit functionare no longer feasible .we instead prove structural properties which in turn enable a more efficient search for optimal policies .to model capacity or other restrictions on the replenishment policy , we impose a linear constraint on the order quantities . finally , we consider two cases for the replenishment epochs .first we assume that the replenishment cycles have fixed constant length .we then consider the case where replenishment epochs occur according to a poisson process , independent of the inventory levels .for both cases we develop a markov process model for the joint inventory levels , derive pertinent expressions for the profit function , and prove that it is submodular in the order quantities .the submodularity property implies a monotonicity structure for the optimal policy , which in turn allows for more efficient computations .in addition to the structural results , we conduct several numerical experiments in order to gain insights on the interactions between the various aspects of the model , and mainly between the limitations in storage capacity , the substitution effects and the uncertainty , or lack thereof , in the replenishment process .some of the insights obtained are the following .the benefit of demand substitutability is not significant when the available capacity is very low , and thus substantial shortages can not be avoided , as well as when the capacity is very high , and thus it does not impose severe restrictions on the order quantities .however it may become substantial in intermediate capacity levels , depending on the relative cost parameters of the two products . in terms of the effect on the optimal ordering policy ,this also depends on the similarity of the products in terms of their profit / cost parameters . in extreme cases where the two products are significantly different, the existence of substitution effects may result in complete abandonment of the less favorable product .finally , regarding the uncertainty in the replenishment cycle , its effect on the profit is generally detrimental .the effect on the optimal policy also depends on the economic parameters of the two products .specifically , when the shortage costs are significant compared to the holding costs , the effect of the replenishment time uncertainty is to increase the order quantities , in order to mitigate the increased risk of stockouts , while the opposite effect appears when the holding costs are more significant .many models of ordering and inventory management with various levels of substitution have been proposed and analyzed in the literature . in a significant number of themit is approximately assumed that the substituted demand of a product is a fixed fraction of the unsatisfied demand of that product . 
under no salvage value and lost sale penalties , develop an expression of the profit function , show that it is concave under appropriate conditions and derive optimality conditions for the ordering policy .assuming positive salvage values and shortage costs , also derive the profit function and establish upper and lower bounds for the optimal ordering quantities . and consider extensions of the ordering problem of substitutable products for stock - dependent demand . in the joint effect of demand stimulation and product substitution on inventory decisions by considering a single - period , stochastic demand setting . in demand of each product is a deterministic function of both inventory levels and two - way substitution is assumed . also assume deterministic substitution proportions and in addition to the centralized model they also analyze a competitive situation where the substitutable products are stored by different retailers .several approaches have also been developed to model the substitution process in more detail .one such approach employs two - stage stochastic programming , where production / stocking decisions are made in the first stage before demand is realized , where as in the second stage demand allocation to direct and substitute sales is performed as a recourse action . consider a single period , multi - product model with fixed production costs and one - way substitution .they develop a mixed integer two - stage stochastic programming model , exploit structural properties and develop several heuristics . also consider a multi - product setting with partial substitution possible between any two products .they propose linear and mixed - integer stochastic programming formulations and obtain several managerial insights from solving test problems .a different line of works more related to ours addresses the product choice of arriving customer in a dynamic fashion . in newsvendor setting is adopted , where heterogeneous customers dynamically substitute among available products , based on maximizing a utility function .a sample path gradient algorithm is applied to compute inventory levels and poisson customer arrivals are simplified by normal approximations . also allow dynamic customer choice , but they focus on the problem of maximizing profit through dynamic pricing in a newsvendor framework .they develop a dynamic programming model for the pricing problem and propose heuristic methods for the initial inventory decisions . and consider dynamic strategies of offering substitution to arriving customers . in demand classesare downward substitutable and customers can be upgraded by at most one level .the optimal substitution policy is shown to have a protection limit structure . consider a nonstationary poisson process and random batch sizes .they develop a stochastic dynamic programming model and prove threshold properties for the optimal substitution policy . 
recently , study the two - product joint replenishment model with substitution in an economic order quantity framework .the eoq assumptions allow to explicitly model the stock evolution prior to and after the depletion of one of the products .it is shown that including the substitution effect in the ordering problem may result in substantial improvements in the total cost compared to the ordinary joint replenishment model .finally , develop a discrete - time markov chain model for a two - product periodic review system with centrally controlled one way downward substitution .the cost benefit of the substitution policy compared to no - pooling or full - pooling approaches is assessed numerically and it is shown that substitution can outperform both .our paper contributes to the literature in three ways .first we develop markov chain models for a two - product system with partial two - way substitution , under fixed and random replenishment times , and develop analytic expressions for the cost as a function of the order quantities .second we establish the submodularity of the cost function which allows for a more efficient computation of the optimal ordering policy , and third , we obtain several managerial insights on the interaction of substitution with the inventory capacity constraint . the paper is structured as follows . in section[ model ] we present the two approaches according to the replenishment types . in section 3we derive the transient distribution of the number of remaining products after a deterministic replenishment time and we prove the submodularity of the profit function . in section 4we derive the stationary distribution of the number of remaining products when the replenishment time is exponentially distributed and we show that the profit function is also submodular . in section 5we describe an algorithm for the computation of the optimal ordering quantities . in section 6we present computational results and managerial insights and in section 7 conclusions and extensions .we consider a retailer who orders and sells two partially substitutable products .orders for both products are placed at replenishment epochs .the time between two successive replenishments is referred to as a period .let denote the unit retail price , wholesale price and inventory holding cost for quantities in inventory at the end of each period , respectively , for product .order sizes are denoted by .we assume that due to limited storage capacity or other constraints the order quantities are restricted by a linear inequality , where nonnegative constants .for example , for , the inequality may reflect a storage capacity constraint , and for , a purchasing budget constraint . between the two productsthere is two - way partial substitution .specifically , we assume that a customer who originally intends to buy product will switch to product with probability , if the original product is not available .the substitution probabilities are exogenous parameters .we next consider the demand process . because of the substitution possibility , the actual sales , unsold quantities and shortages of both products , and as a result the profits / costs depend not only on the demand of each , but also on the timing of individual customer arrivals .in fact this is one of the complicating factors in evaluating and optimizing ordering policies , and as such it has been approximated by various assumptions in the literature . 
in this paper we model the demand process explicitly , by assuming that the customers who originally intend to buy product , arrive according to a poisson process with rate , .when a demand of one unit of product comes and there is no product available , the customer accepts to buy one unit of product , , with probability , .the advantage of this modeling assumption is that the effect of the timing of arrivals on the substitution process is endogenized and thus represented more accurately . regarding the replenishment period , we consider two approaches , which lead to separate stochastic models . in the first approachwe assume that the duration of the period is deterministic and equal to .this corresponds to the more common situation where the replenishment occurs at fixed regular intervals , such as on a daily or weekly basis . in the second approach we consider the case where the replenishment time is exponentially distributed with parameter , independent of the demand process .the exponential distribution assumption on the one hand allows us to model the inventory process in a computationally tractable markovian setting . on the other handit captures situations where the replenishment process is not completely reliable and can occasionally be very late .our aim is to determine the quantities and that maximize the expected profit per unit time .to this end , we develop a stochastic model of the inventory process for each of the two approaches about the replenishment time , derive an analytic expression for the profit function for each model and show that it is submodular in the order quantities .the submodularity property leads to a faster algorithm for computing the optimal values of and .in the following we will refer to the interval between two successive replenishment epochs as a cycle or period .since both products are replenishable , the replenishment instants constitute regeneration epochs for the inventory process .therefore , the profits during successive cycles are independent and identically distributed , thus the expected profit per unit time can be expressed as the ratio of expected cycle profit over expected cycle length. specifically , let denote the inventory levels of the two products at the end of a cycle , right before the next replenishment occurs .then the quantities sold during the cycle are equal to and the cycle profit can be expressed as this expression is valid under both assumptions on cycle duration . however the probability distribution of is different between the two approaches , and as a result the expected profit per period has a different analytic expression . in order to distinguish the two cases and avoid confusion in the subsequent analysis , we will denote the inventory levels under fixed replenishment time as , and under random replenishment times as .using this notation , the expected profit per period under fixed replenishment time is equal to , \label{profdef1}\end{aligned}\ ] ] where denotes expectation under replenishment policy . for the case of random replenishment time, the expected cycle length is equal to , therefore the corresponding expression for the profit rate is . \label{profdef2}\end{aligned}\ ] ] under both approaches , it is desired to identify the optimal stocking levels that maximize the expected profit per unit time , subject to the stocking constraint , i.e. , for . 
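before turning to the analysis , the demand and substitution mechanics described above can be illustrated with a small monte carlo sketch for the fixed replenishment time case . the parameter names and values below ( poisson rates , substitution probabilities , prices , wholesale costs , holding costs , cycle length ) are placeholders , since the paper s symbols are not reproduced in this excerpt ; the cycle profit is formed exactly as described in the text , i.e. retail revenue on the quantities sold minus the purchasing cost of the order and the holding cost on the units left at the end of the cycle , divided by the cycle length .

```python
import numpy as np

rng = np.random.default_rng(42)

# illustrative parameter values; all of them are placeholders.
lam   = np.array([4.0, 3.0])    # poisson demand rates of products 1 and 2
alpha = np.array([0.5, 0.6])    # prob. that a product-i customer accepts the substitute
r     = np.array([10.0, 8.0])   # unit retail prices
c     = np.array([6.0, 5.0])    # unit wholesale prices
h     = np.array([1.0, 1.0])    # holding cost per unit left at the end of the cycle
tau   = 1.0                     # fixed cycle length

def profit_rate(q, n_rep=5000):
    """monte carlo estimate of the expected profit per unit time for order sizes q
    under a fixed replenishment time; for the random replenishment case one would
    simply draw an exponential cycle length in every replication."""
    q = np.asarray(q)
    total = 0.0
    for _ in range(n_rep):
        stock = q.copy()
        counts = rng.poisson(lam * tau)                       # demands per product
        times = np.concatenate([rng.uniform(0, tau, counts[0]),
                                rng.uniform(0, tau, counts[1])])
        prods = np.repeat([0, 1], counts)
        for i in prods[np.argsort(times)]:                    # customers in arrival order
            j = 1 - i
            if stock[i] > 0:
                stock[i] -= 1                                 # direct sale
            elif stock[j] > 0 and rng.random() < alpha[i]:
                stock[j] -= 1                                 # substitute sale
        sold = q - stock
        total += np.sum(r * sold - c * q - h * stock)         # cycle profit
    return total / (n_rep * tau)

print(profit_rate((5, 4)))
```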
in the remainder of this sectionwe develop the stochastic model that describes the inventory process and derive analytic expressions for the profit functions .first consider the case of fixed replenishment time .let denote the inventory levels at time .the stochastic process is a continuous - time markov process , however it is not stationary because the time parameter affects the time since the last replenishment . on the other hand , since the process regenerates at replenishment epochs and we are interested in the probabilistic behavior during a single cycle , it is sufficient to consider the probabilistic behavior of the system during the interval ] which represents the inventory levels during a cycle of fixed length . this allows to derive an analytic expression for the expected profit per period as a function of and .although this expression is complicated , it lends itself to numerical computations .furthermore , we prove that the profit function is submodular , which allows for a simplification in the computation of the optimal order quantities .we first consider the transient distribution . for notational simplicitywe suppress the dependence on and define , \nonumber\ ] ] as the transient probability function of the inventory process , given initial state . to compute the transient distribution we apply uniformization and define an equivalent markov chain where all transition times are exponentially distributed with a common rate and fictitious transitions back to the same state are allowed .the definition of the uniformized process as well as the detailed derivations are included in the appendix .let denote the total sales rate when both products , only product 1 , or only product 2 are available , respectively .the explicit form of the transient probability distribution is derived in theorem [ prop : transient_distr ] .the proof is in the appendix .[ prop : transient_distr ] the transient probability function , is equal to : setting in the above expressions , we obtain the joint distribution of the inventory levels at the end of a replenishment cycle .therefore the expected profit per unit time can now be computed from ( [ profdef1 ] ) with and although the resulting expression for the profit function is complicated and not immediately amenable to showing analytical properties , it can be used to determine the optimal stocking policy numerically .several numerical studies which lead to interesting managerial insights are performed in section [ sec : numerical - analysis ] . in the next theoremwe establish that the profit function is submodular .this property is useful both theoretically and computationally , since it implies that the the optimal order quantity for either product while keeping the order quantity of the other product fixed is nonincreasing in the order quantity of the other product .this is intuitive , since by increasing the stocking level of one product , the shortage risk of the other one is also alleviated , because of the substitution effect , thus making high stocking levels less necessary .[ thm : submodularity_fixed ] the expected profit is submodular in . we must show the inequality since , , it follows from ( [ profdef1 ] ) that it suffices to show the opposite inequality for the expected unsold quantities , i.e. , we will prove a sample - path - wise version of ( [ 3.21 ] ) , using coupling arguments we consider four versions of the process \} ] . 
*( ii ) * now assume that at time system a is in state and a demand of product 2 occurs which ( for systems b and d ) substitutes product 2 with product 1 .similarly it follows that at systems b , c and d are in states , and , respectively . at this time instant the four systemswill directly reach states , , and and continue from that point on as described above .it follows that ( [ 3.21a ] ) holds for during ] .equation ( [ 3.21 ] ) follows from equation ( [ 3.21a ] ) and the proof is complete .in this section we derive the stationary distribution of the markov chain which represents the inventory levels when the duration of a cycle is exponentially distributed . in analogy with the fixed replenishment time case , we also obtain an explicit expression for the expected profit per period and prove that it is a submodular function of .we first consider the stationary distribution in theorem [ stat - distr - random ] below . the proof is based on deriving appropriate recursive forms of the steady state equations and is included in the appendix .[ stat - distr - random ] the stationary distribution of the continuous time markov chain is given by the formulas : where and based on the stationary distribution , the expected profit per unit time can now be computed from ( [ profdef2 ] ) with and finally , we again prove the submodularity of the expected profit per period for the exponentially case in the following theorem . is submodular . in order to prove that is submodular we have to prove the inequality since , , in order to prove it suffices to show that and we will prove again with sample - path arguments the following equation we follow the same approach as in theorem [ thm : submodularity_fixed ] , i.e. , we consider four systems a , b , c , d with initial states , , and , respectively .these are coupled in demand arrivals , substitution requests and replenishment arrivals .let be the common first replenishment time .using exactly the same sample path arguments as in theorem 2 , it can be shown that ( [ 4.27 ] ) holds for and .thus , ( [ 4.26 ] ) holds and the proof is complete .in this section we construct an algorithm in order to determine the optimal quantities and that will be ordered at the beginning of each period . for the fixed replenishment case we must solve the following problem for fixed , denote by the largest value of at which the occurs .thus , therefore , can be written as and the problem takes the following form respectively , for exponentially distributed replenishment times , we must solve the problem for both cases , from the submodularity of , we immediately obtain the following lemma . the optimal order quantity is decreasing in . using the monotonicity of , we can construct an improved search algorithm to determine the optimal quantities and .the algorithm has the same form for both profit functions . 
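As a numerical companion to the stationary distribution derived in this section, the sketch below assembles the rate matrix of the two-dimensional inventory chain — demand and substitution transitions plus replenishment jumps at rate `mu` back to the fully stocked state — and solves the balance equations by brute force, which can serve as a cross-check of the closed-form expressions. The symbols `lam1`, `lam2`, `a12`, `a21` and `mu` are our own labels, not the paper's notation; the improved search procedure itself is sketched after the algorithm statement below.

```python
import numpy as np

def stationary_distribution(q1, q2, lam1, lam2, a12, a21, mu):
    """Stationary law of the two inventory levels when the replenishment
    time is exponential(mu); every replenishment resets the state to (q1, q2)."""
    states = [(i, j) for i in range(q1 + 1) for j in range(q2 + 1)]
    idx = {s: k for k, s in enumerate(states)}
    m = len(states)
    Q = np.zeros((m, m))
    full = idx[(q1, q2)]
    for (i, j), k in idx.items():
        if i > 0:
            Q[k, idx[(i - 1, j)]] += lam1            # demand for product 1
        elif j > 0:
            Q[k, idx[(i, j - 1)]] += lam1 * a12      # substitution 1 -> 2
        if j > 0:
            Q[k, idx[(i, j - 1)]] += lam2            # demand for product 2
        elif i > 0:
            Q[k, idx[(i - 1, j)]] += lam2 * a21      # substitution 2 -> 1
        if k != full:
            Q[k, full] += mu                         # replenishment jump
        Q[k, k] = -Q[k].sum()
    # solve pi Q = 0 together with the normalization sum(pi) = 1
    A = np.vstack([Q.T, np.ones(m)])
    b = np.zeros(m + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(states, pi))

pi = stationary_distribution(q1=4, q2=3, lam1=5.0, lam2=4.0,
                             a12=0.5, a21=0.5, mu=1.0)
print({s: round(float(p), 4) for s, p in pi.items()})
```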
*find , * for find , * find , such that this section we employ the algorithms developed previously in order to obtain insights on the effect of substitution on the ordering policies and the profit .we are specifically interested in two general questions .first , how are the ordering policies and the expected profit per period affected when there is substitution between the two products compared to the case when no substitution is possible , and second how does the randomness of the replenishment time affect the ordering policies and the profit .we first analyze the effect of substitution , under constant replenishment time .we consider two products for which the poisson demand rates are equal to units per unit time and normalize the length of the replenishment time to .we also assume , which reduces the ordering constraint to , thus represents the common storage capacity . regarding the economic parameters , we distinguish three representative cases corresponding to different product types .specifically , if the storage capacity is practically infinite and there is no substitution , then the ordering problem is decomposed and for each product the newsvendor rule applies : the optimal order quantity of product is the first so that the service level is at least equal to the critical ratio .based on this observation we consider the following scenarios : ( i ) both products have a relatively high critical ratio , ( ii ) one product has high and the other a low critical ratio and ( iii ) both critical ratios are low . for each of the three scenarios we compute the optimal ordering policy and the optimal expected profit per period for varying capacity levels under two options : first when no substitution between the two products is possible , and second when there is two - way substitution with .the results are presented in figures [ fig : tscen1 ] , [ fig : tscen2 ] and [ fig : tscen3 ] , respectively .several insights can be obtained from these results .first , the profit curves show that , as expected , the newsvendor is better off when there is substitutability between the two products .second , this is generally achieved by changing the mix of products ordered , in favor of the most profitable of the two .specifically , under scenario 1 where both products have a relatively high order quantity , the benefit of substitution is small to nonexistent when the capacity is either very low or very high . in the first case ,the order quantities are much smaller than those needed , and as a result both products run out together and fast , without giving substitution a chance to occur . in this casethe order quantities are also the same between the two possibilities . on the other handwhen the capacity is very high , substitution also offers a rather small benefit .it is possible to order a lower total quantity of both products without affecting the service level .this is so because substitution essentially allows to partially take advantage of the pooling effect between the two independent demands , thus decreasing the downside risk of unsold products .this beneficial behavior is even more pronounced when the capacity is low enough to be a restrictive factor for the profit , but high enough to allow substitution to take effect . in this intermediate rangethe relative profit increase due to substitution is highest . under scenario 2 ,where both products have a relatively low order quantity , we obtain qualitatively similar insights , as under scenario 1 . 
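A compact sketch of the search permitted by the monotonicity result: for each candidate value of the first order quantity, the second quantity only needs to be scanned up to the maximizer found for the previous value (and within the capacity constraint), so the full grid collapses to a staircase. The profit evaluator is left abstract — any of the expressions derived above, or a simulation estimate, can be plugged in — and the capacity constraint is simplified to a common storage capacity as in the numerical experiments. The toy profit surface and the newsvendor benchmark are included only to exercise the code; none of the names or values are the paper's.

```python
from scipy.stats import poisson

def newsvendor_quantity(mean_demand, critical_ratio):
    """Unconstrained, no-substitution benchmark: smallest q with
    P(Poisson(mean_demand) <= q) >= critical_ratio."""
    return int(poisson.ppf(critical_ratio, mean_demand))

def optimal_order(profit, capacity):
    """Maximize profit(q1, q2) over q1 + q2 <= capacity, exploiting the fact
    that the maximizing q2 is nonincreasing in q1 (submodularity)."""
    best_pair, best_profit = (0, 0), float("-inf")
    q2_cap = capacity                        # bound inherited from the previous q1
    for q1 in range(capacity + 1):
        q2_star, val_star = 0, float("-inf")
        for q2 in range(min(q2_cap, capacity - q1) + 1):
            v = profit(q1, q2)
            if v > val_star:
                q2_star, val_star = q2, v
        q2_cap = q2_star                     # shrink the scan range for the next q1
        if val_star > best_profit:
            best_pair, best_profit = (q1, q2_star), val_star
    return best_pair, best_profit

# toy submodular profit surface, used only to exercise the search
toy_profit = lambda q1, q2: 10*q1 + 9*q2 - 0.8*q1**2 - 0.9*q2**2 - 0.5*q1*q2
print(optimal_order(toy_profit, capacity=12))
print(newsvendor_quantity(mean_demand=10.0, critical_ratio=0.8))
```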
on the other hand ,when the two products differ significantly in terms of their critical ratios , which is the case under scenario 3 , then the effect of substitution can be quite pronounced to the extent that the product with the lower critical ratio may be completely dropped and its demand satisfied partially through substitution of the other product . in this case, the benefit of substitution in terms of profitability is persistent even when the storage capacity is nonbinding . ] ] ] in the second set of our numerical experiments , we aim to assess the impact of the variability in the replenishment time on the ordering policy and the profits under substitution . to this endwe first performed a set of experiments analogous to scenarios 1 - 3 above , but now with the replenishment time exponentially distributed with mean 1 .the behavior of the ordering policy and the effect of substitution were similar to those under fixed replenishment time , although the expected profit was uniformly lower , indicating that the variability in replenishment time has a detrimental effect . to further analyze the effect, we show in figures [ fig : repscen1 ] , [ fig : repscen2 ] and [ fig : repscen3 ] the comparison of the ordering policies and profits under substitution between the cases of fixed and exponentially distributed replenishment times . from the results it is verified that when the replenishment time is random there is a significant decrease in the profit . regarding the effect on the ordering policy , this depends on the product types .specifically , under scenario 1 , the upside risk of shortages is more costly than that of unsold products .therefore , in the random case the more detrimental event is that the replenishment time will be large and as a result there will be high shortages . to mitigate this possibilitythe order quantities are significantly higher than those in the fixed case . on the other hand , under scenario 2 , where the two critical ratios are low, the downside risk is more important and as a result the ordering policy gravitates towards lower quantities , to mitigate the possibility of short replenishment times and large quantities of unsold products . under scenario 3 ,the results are similar , since product 1 with the high critical ratio is ordered at higher quantities under the random case , while the second product with the low critical ratio is not ordered at all in both cases , as already discussed in the first experiment . ] ] ]in this paper we developed a stochastic model of joint replenishment in an inventory system with two products under poisson demand arrivals , limited storage capacity and partial two - way substitution .we have considered two cases of replenishment , periodic replenishment at fixed time intervals and random replenishment times . in both caseswe defined a two - dimensional markov chain model for the inventory evolution and derived analytic expressions for the cost function .we established that the expected cost is submodular in the order quantities and employed this property to derive an efficient algorithm for computing the optimal policy . 
in computational experiments we explored the interaction between the substitution and limited capacity , and assessed the combined impact on the cost .this research can be extended in several directions .first , in addition to single echelon systems , substitution also affects inventory management in a supply chain framework , both in terms of centralized optimal policies , as well as in the interactions between agents and the formation of contacting mechanisms ( cf . , who study the interaction between vmi policies and substitution in a single period model with stochastic demand ) .in particular , it would be interesting to study the formation of ordering policies under quantity discounts and product substitution in the context of the poisson demand model developed in this paper ( cf . for analysis of ordering policies under poisson demand and multiple quantity discounts ) . in the present paper we modeled limitations in the replenishment mechanism by a finite storage capacity .however in many cases inventory replenishment is limited by a finite production rate .when two or more products are produced by the same equipment with finite rate and possibly with significant production switching costs , immediate joint replenishment is not generally possible . in such situationsthe presence of substitution between products may mitigate the shortage effects and must be taken into account when designing ordering and production policies .a two - product system with product substitution could be defined as an extension of both our paper and , who analyze the single product case under finite replenishment rate and poisson demand arrivals .finally , in our model we have assumed that the demand process is completely known both in terms of the arrival rates and the substitution probabilities . if this is not the case ,there is a need to estimate the corresponding parameters , thus giving rise to the extension into adaptive estimation and ordering policies .the problem of parameter estimation in the present context is not straightforward . indeed ,if , for example , only sales and not actual demand is observed and at the end of a period one of the two products is out of stock , this may be due to a high demand rate of this product or alternatively of a high demand rate and a high substitution probability of the other product or both .the design of effective estimation methods combined with adaptive ordering policies is currently work in progress .* acknowledgment * this research has been co - financed by the european union ( european social fund - esf ) and greek national funds through the operational program education and lifelong learning of the national strategic reference framework ( nsrf)-research funding program : thales - athens university of economics and business - new methods in the analysis of market competition : oligopoly , networks and regulation .16 [ 1]#1 [ 1]`#1 ` urlstyle [ 1]doi : # 1 y. deflem and i. van nieuwenhuyse . a discrete time markov chain model for a periodic inventory system with one - way substitution ._ available at ssrn 1868085 _ , 2011 .l. dong , p. kouvelis , and z. tian .dynamic pricing and inventory control of substitute products ._ manufacturing & service operations management _ , 110 ( 2):0 317339 , 2009 .m. n. katehakis and l. c. smit . on computing optimal ( q , r ) replenishment policies under quantity discounts ._ annals of operations research _ , 2000 ( 1):0 279298 , 2012 .m. khouja , a. mehrez , and g. 
rabinowitz .a two - item newsboy problem with substitutability . _ international journal of production economics _ , 44 ( 3):0 267275 , 1996 .s. kraiselburd , v. narayanan , and a. raman .contracting in a supply chain with stochastic demand and substitute products . _ production and operations management _ , 130 ( 1):0 4662 , 2004 .i. krommyda , k. skouri , and i. konstantaras .optimal ordering quantities for substitutable products with stock - dependent demand ._ applied mathematical modelling _, 39:0 147164 , 2015 .s. mahajan and g. van ryzin .stocking retail assortments under dynamic consumer substitution ._ operations research _ , 490 ( 3):0 334351 , 2001 .s. netessine and n. rudi .centralized and competitive inventory models with demand substitution ._ operations research _ , 510 ( 2):0 329335 , 2003 . m. parlar and s. goyal .optimal ordering decisions for two substitutable products with stochastic demands ._ opsearch _ , 21:0 115 , 1984 .u. s. rao , j. m. swaminathan , and j. zhang .multi - product inventory planning with downward substitution , stochastic demand and setup costs ._ iie transactions _ , 360 ( 1):0 5971 , 2004 .m. salameh , a. yassine , b. maddah , and l. ghaddar .joint replenishment model with substitution . _ applied mathematical modelling _, 38:0 36623671 , 2014 .j. shi , m. n. katehakis , b. melamed , and y. xia .production - inventory systems with lost sales and compound poisson demands ._ operations research _ , 620 ( 5):0 10481063 , 2014 . r. a. shumsky and f. zhang .dynamic capacity management with substitution ._ operations research _ , 570 ( 3):0 671684 , 2009 . e. stavrulaki .inventory decisions for substitutable products with stock - depended demand ._ international journal of production economics _ , 129:0 6578 , 2011 .h. vaagen , s. w. wallace , and m. kaut .modelling consumer - directed substitution . _international journal of production economics _ ,1340 ( 2):0 388397 , 2011 .h. xu , d. d. yao , and s. zheng .optimal control of replenishment and substitution in an inventory system with nonstationary batch demand ._ production and operations management _ , 200 ( 5):0 727736 , 2011 .as we mentioned before , in this case the duration of a period is . fix a replenishment policy .the state of the vendor at time is represented by a pair , where denotes the inventory of product at time , , ] is a continuous time markov chain and infinitesimal generator matrix matrix is a stochastic matrix and can be considered as the one - step transition probability matrix of a discrete time markov chain with state space and transition as shown on the following diagram .as we mentioned before , we only need the elements of the last row of .so using , we only need the elements of the last row of . thus , transition probabilities , , , must be computed .we have the following lemma .we want to compute the probability that the discrete time markov chain makes a transition from state to state , , , in steps , .we can do this if we determine in how many ways this transition can be made and the probability of each way . in order to describe the transitions we will refer to a step from state to state , , as type-1 step and to a step from state to state , , as a type-2 step .now , if a transition from to can happen in exactly steps .more concretely , we must have type-1 steps and type-2 steps .that can be made in ways and each way has probability .thus , we obtain . 
in order to find the probability of a transition from to , , in steps we must condition on the number of steps until the first passage in the set of states . assuming that the number of these steps is , , we have that after steps the system is at state and the transition from to can happen in ( since the last step is of type-2 ) and each way has probability .then , the transition from to in steps can occur in ways , because we must have type-1 steps , and each way has probability .so , we obtain . for the random replenishment time case , our approach is to derive the rate matrix and consider directly the balance equations for the stationary distribution .fix a replenishment policy .
|
we consider a two - product inventory system with independent poisson demands , limited joint storage capacity and partial demand substitution . replenishment is performed simultaneously for both products and the replenishment time may be fixed or exponentially distributed . for both cases we develop a continuous time markov chain model for the inventory levels and derive expressions for the expected profit per unit time . we prove that the profit function is submodular in the order quantities , which allows for a more efficient algorithm to determine the optimal ordering policy . using computational experiments we assess the effect of substitution and replenishment time uncertainty on the order quantities and the profit as a function of the storage capacity .
|
an outstanding challenge in socio - economic systems is to disentangle the internal dynamics from the exogenous influence .it is obvious that any non - trivial system is both subject to external shocks as well as to internal organizational forces and feedback loops . in absence of external influences, many natural and social systems would regress or die , however the internal mechanisms are of no less importance and can either stabilize or destabilize the system .these systems are continuously subjected to external shocks , forces , noises and stimulations ; they propagate and process these inputs in a self - reflexive way .the stability ( or criticality ) of these dynamics is characterized by the relative strength of self - reinforcing mechanisms .for instance , the brain development and performance is given by both external stimuli and endogenous collective and interactive wiring between neurons .the normal regime of brain dynamics corresponds to asynchronous firing of neurons with relatively low coupling between individual neurons .however as the coupling strength increases , the internal feedback loops starts playing an increasingly important role in the dynamics , and the system moves towards the tipping point at which abnormal synchronous `` neuronal avalanches '' result in an epileptic seizure . as another example, financial systems are known to be driven by exogenous idiosyncratic news that are digested by investors and complemented with quasi - rational ( sometimes self - referential ) behavior . correlated over - expectations ( herding ) of investorscorrespond to the bubble phase that pushes the system towards criticality , where the crash may result as a bifurcation towards a distressed regime . in physical systems at thermodynamic equilibrium , the so - called fluctuation - dissipation theorem relates quantitatively the response of the system to an exogenous ( and instantaneous ) shock to the correlation structure of the spontaneous endogenous fluctuations . in out - of - equilibrium systems ,the existence of such relation is still an open question . in a given observation set, it seems in general hopeless to separate the contributions resulting from external perturbations and internal fluctuations and responses .however , one would like to understand the interplay between endogeneity and exogeneity ( the ` endo - exo ' problem , for short ) in order to characterize the reaction of a given system to external influences , to quantify its resilience , and explain its dynamics . using the class of self - exciting conditional poisson ( hawkes ) processes ,some progress has recently been made in this direction . in the modeling of complex point processes in natural and socio - economic systems ,the hawkes process has become the gold standard due to its simple construction and flexibility .nowadays , it is being successfully used for modeling sequences of triggered earthquakes ; genomic events along dna ; brain seizures ; spread of violence and crime across some regions ; extreme events in financial series and probabilities of credit defaults . 
in financial applications ,the hawkes processes are most actively used for modeling high frequency fluctuations of financial prices ( see for instance ) , however applications to lower frequency data , such as daily , are also possible ( see [ apx : financial ] ) .being closely related to branching processes , the hawkes model combines , in a natural and parsimonious way , exogenous influences with self - excited dynamics .it accounts simultaneously for the co - existence and interplay between the exogenous impact on the system and the endogenous mechanism where past events contribute to the probability of occurrence of future events .moreover , using the mapping of the hawkes process onto a branching structure , it is possible to construct a representation of the sequence of events according to a branching structure , with each event leading to a whole tree of offspring .the linear construction of the hawkes model allows one to separate exogenous events and develop a single parameter , the so - called `` branching ratio '' that directly measures the level of endogeneity in the system .the branching ratio can be interpreted as the fraction of endogenous events within the whole population of events .the branching ratio provides a simple and illuminating characterization of the system , in particular with respect to its fragility and susceptibility to shocks . for , on average , the proportion of events arrive to the system externally , while the proportion of events can be traced back to the influence of past dynamics .as approaches from below , the system becomes `` critical '' , in the sense that its activity is mostly endogenous or self - fulfilling .more precisely , its activity becomes hyperbolically sensitive to external influences .the regime corresponds to the occurrence of an unbounded explosion of activity nucleated by just a few external events ( e.g. , news ) with non - zero probability . in any realistic case ,when present , this explosion will be observable in finite time .not only does the hawkes model provide this valuable parameter , but it also amenable to an easy and transparent estimation by maximum likelihood without requiring stochastic declustering , which is essential in the branching processes framework but has several limitations .however , the hawkes model is not the only model that describes self - excitation in point processes . in particular , the autoregressive conditional duration ( acd ) model and the autoregressive conditional intensity ( aci ) model been introduced and successfully used in econometric applications .a similar concept was used in the so - called autoregressive conditional hazard ( ach ) model .these processes were designed to mimic properties of the famous autoregressive conditional heteroskedasticity ( arch ) model and generalized autoregressive conditional heteroskedasticity ( garch ) model that successfully account for volatility clustering and self - excitation in price time series .some other modifications of acd models such as fractionally integrated acd ( fiacd ) or augmented acd ( aacd ) were introduced to account for additional effects ( such as long memory ) or to increase the flexibility of the model ( for a more detailed review , see and references therein ) . 
in general , all approaches to modeling self - excited point processes can be separated into the classes of duration - based ( represented by the acd model and its derivations ) and intensity - based approaches ( hawkes , ach , aci , and so on ) , which define a stochastic expression for inter - event durations and intensity respectively .of all the models , as discussed above , the hawkes process dominates by far in the class of intensity - based model , and the acd model a direct offspring of the garch - family is the most used duration - based model . despite belonging to different classes ,both models describe the same phenomena and exhibit similar mathematical properties . in this article, we aim to establish a link between the acd and hawkes models .we show that , despite the fact that the acd model can not be directly mapped onto a branching structure , and thus the branching ratio for this model can not be derived , it is possible to introduce a parameter ] in the framework of the hawkes model .namely , both and characterize stationarity properties of the models , and provide an effective transformation of the exogenous excitation of the system onto its total activity . by numerical simulations , we show that there exists a monotonous relationship between the parameter of the acd model and the branching ratio of the corresponding hawkes model . in particular , the purely exogenous case ( ) and the critical state ( ) are exactly mapped to the corresponding values and .we validate our results by goodness - of - fit tests and show that our findings are robust with respect to the specification of the memory kernel of the hawkes model .the article is structured as follows . in section [ sec : models ] , we introduce the hawkes and acd models and briefly discuss their properties .section [ sec : branching ] introduces the branching ratio and relates it to the measure of endogeneity within the framework of the hawkes model . in section [ sec : endo_acd ] , we discuss similarities between the hawkes and acd models , and identify a parameter in the acd model that can be treated as an effective degree of endogeneity .we support our thesis with extensive numerical simulations and goodness - of - fit tests . 
in section[ sec : conclusion ] , we conclude .let us define a _ univariate point process _ of event times ( for ) with the _ counting process _ , and the _ duration process _ of inter - event times .properties of the point process are usually described with the _ ( unconditional ) intensity process _ ] , which is adapted to the natural _ filtration _ representing the history of the process .the well - known _ poisson point process _ is defined as the point process whose conditional intensity does not depend on the history of the process and is constant : the _ non - homogenous poisson process _ extends expression to account for time - dependence of both conditional and unconditional intensity functions : .both homogeneous and non - homogeneous poisson processes are completely memoryless , which means that the durations are independent from each other and are completely determined by the exogenous parameter ( function ) .the _ self - excited hawkes process _ and _ autoregressive conditional durations ( acd ) _ model , which are described in this article , extend the concept of the poisson point processes by adding path dependence and non - trivial correlation structures .these models represent two different approaches in modelling point processes with memory : the so called _ intensity - based _ and _ duration - based _ approaches .as follows from their names , the first approach focuses on models for the conditional intensity function and the second considers models of the durations . for example , in the context of the intensity - based approach , the poisson process is defined by equation . in the context of the duration - based approach, the poisson process is defined as the point process whose durations are independent and identically distributed ( iid ) random variables with exponential probability distribution function .the linear hawkes process , which belongs to the class of intensity - based models , has its conditional intensity being a stochastic process of the following general form : where is the _ background intensity _ , which is a deterministic function of time that accounts for the intensity of arrival of _ exogenous _ events ( not dependent on history ) .a deterministic _ kernel function _ , which should satisfy causality ( for ) , models the _ endogenous _ feedback mechanism ( memory of the process ) . given that each event arrives instantaneously , the differential of the counting process can be represented in the form of a sum of delta - functions , allowing to be rewritten in the following form : it can be shown ( and we will discuss this point in the following section ) that the stationarity of the process requires that the shape of the kernel function defines the correlation properties of the process . in particular , the geophysical applications of the hawkes model , or more precisely of its spatio - temporal extension called the _ epidemic - type aftershock sequence ( etas ) _ , assume in general a power law time - dependence of the kernel : that describes the modified omori - utsu law of aftershock rates .financial applications traditionally use an exponential kernel which has been originally suggested by and ensures markovian properties of the model . in both cases ,a heaviside function ensures the validity of the causality principle .the stationarity condition requires for the power law kernel and for the exponential kernel . in the present work ,we focus on the hawkes model with an exponential kernel and background intensity that does not depend on time : . 
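As an illustration of this specification, a univariate Hawkes process with constant background intensity and exponential kernel can be simulated by Ogata-style thinning, exploiting the fact that the intensity decays monotonically between events. The sketch below is generic, not the code used in the paper; `mu` denotes the background rate, `n` the integral of the kernel, and `tau` its time scale, all labels of our own choosing.

```python
import numpy as np

def simulate_hawkes_exp(mu, n, tau, t_max, seed=0):
    """Ogata thinning for lambda(t) = mu + (n/tau) * sum_{t_i < t} exp(-(t - t_i)/tau).
    Requires n < 1 for stationarity."""
    rng = np.random.default_rng(seed)
    events, t, excitation = [], 0.0, 0.0    # excitation = (n/tau)*sum exp(-(t-t_i)/tau)
    while t < t_max:
        lam_bar = mu + excitation           # valid upper bound until the next event
        w = rng.exponential(1.0 / lam_bar)
        t_new = t + w
        excitation *= np.exp(-w / tau)      # decay the endogenous part to t_new
        lam_new = mu + excitation
        if rng.random() <= lam_new / lam_bar and t_new < t_max:
            events.append(t_new)
            excitation += n / tau           # jump caused by the accepted event
        t = t_new
    return np.array(events)

ts = simulate_hawkes_exp(mu=0.5, n=0.7, tau=1.0, t_max=5000)
# in the stationary regime the mean event rate should be close to mu / (1 - n)
print(len(ts) / 5000, 0.5 / (1 - 0.7))
```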
we introduce a new dimensionless parameter , , which will be discussed in detail later , which allows us to write the final expression for the conditional intensity as follows : then , the stationarity condition reads . in order to check the robustness of the results presented below , in particular with respect to the choice of the memory kernel , we have also considered a power law kernel with time - independent background intensity .similarly to the exponential kernel , the integral from to of the memory kernel defines the dimensionless parameter , which allows us to rewrite the hawkes model with power law kernel as : again , the stationarity condition reads . the class of _ autoregressive conditional durations _ ( acd )models has been introduced by in the field of econometrics to model financial data at the transaction level .the acd model applies the ideas of the autoregressive conditional heteroskedasticity ( arch ) model , which separates the dynamics of a stationary random process into a multiplicative random error term and a dynamical variance that regresses the past values of the process . in the spirit of the arch , the acd modelis represented by the duration process in the form where defines an iid random non - negative variable with unit mean =1 ] . here, represents the set of parameters of the model . from expression, one can simply derive the conditional intensity of the process : where represents the intensity function of the noise term , .assuming to be iid exponentially distributed , one can call this model an _ exponential acd _ model .the conditional expected duration of the acd(, ) model , where denotes the order of the model , is defined as an autoregressive function of the past observed durations and the conditional durations themselves : where , and are parameters of the model that constitute the set .the stationarity condition for the acd model has the form : in the simple acd(1,1 ) case that is considered in the present article , equation is reduced to : similarly , the conditional intensity of the exponential acd(1,1 ) has the form : and the stationarity condition reduces to .the linear structure of the hawkes process with identical functional form of summands , that depend only on arrival time of a single event , allows one to consider it as a cluster process in which the random process of cluster centers is the poisson process with rate .all clusters associated with centers are mutually independent by construction and can be considered as a _ generalized branching process _ , illustrated in figure [ fig : branching ] . [ insert figure [ fig : branching ] here ] in this context , each event can be either an _ immigrant _ or a _descendant_. 
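A companion sketch for the duration-based side: an exponential ACD(1,1) sample path follows directly from its defining recursion. The parameter names `omega`, `alpha` and `beta` follow common usage and are placeholders for the coefficients discussed above.

```python
import numpy as np

def simulate_eacd11(omega, alpha, beta, n_events, seed=0):
    """Exponential ACD(1,1): psi_i = omega + alpha*x_{i-1} + beta*psi_{i-1},
    x_i = psi_i * eps_i with eps_i ~ Exp(1).  Requires alpha + beta < 1."""
    rng = np.random.default_rng(seed)
    psi = omega / (1.0 - alpha - beta)          # start at the unconditional mean
    x_prev = psi
    durations = np.empty(n_events)
    for i in range(n_events):
        psi = omega + alpha * x_prev + beta * psi
        x_prev = psi * rng.exponential(1.0)
        durations[i] = x_prev
    return durations, np.cumsum(durations)      # durations and event times

durs, times = simulate_eacd11(omega=0.3, alpha=0.2, beta=0.5, n_events=3000)
print(durs.mean(), 0.3 / (1 - 0.2 - 0.5))       # sample vs theoretical mean duration
```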
the rate of immigration is determined by the background intensity and results in an exogenous random process .once an immigrant event occurs , it generates a whole cluster of events .namely , a zeroth - order event ( which we will call the _ mother event _ ) can trigger one or more first - order events ( _ daughter events _ ) .each of these daughters , in turn , may trigger several second - order events ( the grand - daughters of the initial mother ) , and so on .all first- , second- and higher - order events form a cluster and are called descendants ( or _ aftershocks _ ) and represent endogenously driven events that appear due to internal feedback mechanisms in the system .it should be noted that this mapping of the hawkes process onto the branching structure ( figure [ fig : branching ] ) is possible due to the linearity of the model , and is not valid for nonlinear self - excited point processes , such as the class of nonlinear mutually excited point processes , of which the multifractal stress activation model is a particular implementation .the crucial parameter of the branching process is the _ branching ratio _ ( ) , which is defined as the average number of daughter events per mother event .depending on the branching ratio , there are three regimes : ( i ) _ sub - critical _ ( ) , ( ii ) _ critical _ ( ) and ( iii ) _ super - critical _ or explosive ( ) .starting from a single mother event ( or immigrant ) at time , the process dies out with probability in the sub - critical and critical regimes and has a finite probability to explode to an infinite number of events in the super - critical regime .the critical regime for separates the two main regimes and is characterized by power law statistics of the number of events and in the number of generations before extinction . for ,the process is stationary in the presence of a poissonian or more generally stationary flux of immigrants .being the parameter that describes the clustering structure of the branching process , the branching ratio defines the relative proportion of exogenous events ( immigrants ) and endogenous events ( descendants or aftershocks ) .moreover , in the sub - critical regime , in the case of a constant background intensity ( ) , the branching ratio is exactly equal to the fraction of the average number of descendants in the whole population . in other words ,the branching ratio is equal to the proportion of the average number of endogenously generated events among all events and can be considered as an effective measure of endogeneity of the system . to see this ,let us count separately the rates of exogenous and endogenous events .the rate of exogenous immigrants ( zeroth - order events ) is equal to the background activity rate : .each immigrant independently gives birth , on average , to daughters and thus the rate of first - order events is equal to . in turn , each first - order event produces , on average , second - order events , whose rate is equal to .continuing this process ad infinitum and summing over all generations , we obtain the rate of all endogenous descendants : which is finite for .the global rate is the sum of the rates of immigrants and descendants and equal to and the proportion of descendants ( endogenously driven events ) in the whole system is equal to the branching ratio : calibrating on the data therefore provides a direct quantitative estimate of the degree of endogeneity . 
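With a constant immigration rate, written below as \mu, and a branching ratio n < 1 (our symbols, chosen only to make the geometric summation explicit), the bookkeeping of the previous paragraph reads:

```latex
\begin{aligned}
R_{\mathrm{exo}} &= \mu ,\qquad
R_{\mathrm{endo}} = \mu \sum_{k \ge 1} n^{k} = \frac{\mu\, n}{1-n},\\
\Lambda &= R_{\mathrm{exo}} + R_{\mathrm{endo}} = \frac{\mu}{1-n},\qquad
\frac{R_{\mathrm{endo}}}{\Lambda} = n .
\end{aligned}
```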
in the framework of the hawkes model with ,the branching ratio is easily defined via the kernel : for the exponential parametrization , the branching ratio , , is equal to a dimensionless parameter previously introduced .the hawkes framework provides a convenient way of estimating the branching ratio , , from the observations , using the maximum likelihood method , which benefits from the fact that the log - likelihood function is known for hawkes processes .the calibration of the model and estimation of the branching ratio can then be performed with the numerical maximization of log - likelihood function in the parameter space for the exponential kernel and for the power law model . despite being a relatively straightforward calibration procedure ,special care should be taken with respect to data processing , choice of the kernel , robustness of numerical methods and stationarity tests as discussed in details in .note that the acd(, ) and hawkes models operate on different variables with inverse dimensions : duration for the acd(, ) model and conditional intensity for the hawkes model , which is of the order of the inverse of the duration . as a consequence , equations and apply to different statistics ( average durations ] ) .moreover , the acd model can not be exactly mapped onto a branching structure whereas the hawkes process can .indeed , the branching structure requires that the conditional probability for an event to occur within the infinitely small interval ( which is the conditional intensity ) should be decomposed into a sum of ( 1 ) a ( deterministic or stochastic ) function of time that represents the immigration intensity and ( 2 ) the contributions from each past event that satisfy the following conditions : ( i ) these contributions should depend only on and be independent from all other events ; ( ii ) these contributions should exhibit identical structure for all events ; and ( iii ) they should satisfy the causality principle .thus , in its general form , a conditional poisson process that can be mapped on ( multiple ) branching structures if it is described by the following conditional intensity : where is some deterministic function , and is a unit step ( heaviside function ) . in the context of autoregressive models ( such as acd ) , the expected waiting time at a given time is defined as a regressive sum of past durations , which means that the contribution of each event to the intensity at time depends on all the events that happened after it ( ) .this violates the first principle for a branching processes of the independence of distinct branches . for the acd model ,the analysis is also complicated by the structure of the conditional intensity function , where past history is influencing the intensity both in a multiplicative way and with a shift in the baseline intensity .one should note that autoregressive intensity models ( such as aci ) in general also do not have a branching structure representation due to the problem discussed above . 
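For the exponential kernel, the log-likelihood of a realization on a finite observation window has a standard recursive form that makes maximum-likelihood calibration fast. The sketch below is a generic implementation of that textbook formula in our notation (background rate `mu`, branching ratio `n`, kernel time scale `tau`); it is not the authors' code, and the optimizer shown is only one possible choice.

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_exp_loglik(params, t, T):
    """Log-likelihood of sorted event times t_1 < ... < t_N in [0, T] under
    lambda(s) = mu + (n/tau) * sum_{t_i < s} exp(-(s - t_i)/tau)."""
    mu, n, tau = params
    if mu <= 0 or not (0 < n < 1) or tau <= 0:
        return -np.inf
    dt = np.diff(t)
    A = np.zeros(len(t))                   # A_i = sum_{j < i} exp(-(t_i - t_j)/tau)
    for i in range(1, len(t)):
        A[i] = np.exp(-dt[i - 1] / tau) * (1.0 + A[i - 1])
    log_intensity = np.log(mu + (n / tau) * A).sum()
    compensator = mu * T + n * np.sum(1.0 - np.exp(-(T - t) / tau))
    return log_intensity - compensator

def fit_hawkes(t, T, start=(0.5, 0.5, 1.0)):
    """Numerical maximization of the log-likelihood (one possible choice)."""
    res = minimize(lambda p: -hawkes_exp_loglik(p, t, T),
                   x0=np.array(start), method="Nelder-Mead")
    return res.x        # estimated (mu, n, tau)
```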
despite the differences in their definition and the impossibility of developing an exact mapping onto a branching structure , the acd model shares many similarities with the hawkes model and their point processes exhibit similar degrees of clustering .in particular , for the acd defined by expression , the combined parameter , plays a similar role to the parameter of the hawkes process with an exponential kernel .the similarities start with the stationarity conditions and , which require for the hawkes model and for the acd , but go much deeper than the simple idea of `` effective distance '' to a non - stationary regime .as we have seen in the previous section , defines the effective degree of endogeneity that translates the exogenous rate into the total rate .similarly , let us study the role of endogenous feedback in the acd model .for , the acd(0,0 ) model , reduces to a simple poisson process with durations having an average value of =\omega ] and=\mathrm{e}[\psi_{i}] ] at 40 equidistant points .for each of the 40 values of , we have generated 100 realizations of the corresponding exponential acd(1,1 ) process .each realization of 3500 events was generated by a recursive algorithm using eq . .in order to minimize the impact of edge effects that can bias the estimation of the branching ratio , the first 500 points of each realization were discarded .then , the hawkes model was calibrated on these synthetic datasets . for each calibration, we have performed a goodness - of - fit test based on residual analysis , which consists of studying the so - called residual process defined as the nonparametric transformation of the initial time - series into where is the conditional intensity of the hawkes process estimated with the maximum likelihood method . under the null hypothesis that the data has been generated by the hawkes process ,the residual process should be poisson with unit intensity .visual analysis involves studying the cusum plot or q - q plot and may be complemented with rigorous statistical tests . under the null hypothesis( poisson statistics of the residual process ) , the inter - event times in the residual process , , should be exponentially distributed with cdf .thus , the random variables should be uniformly distributed in ] , fixing other parameters to and .we generated 100 realizations of the hawkes process of size 3500 each .to reduce the edge effects of the thinning algorithm , we discarded the first 500 points of each realization and afterwards calibrated the parameters of the hawkes model on these realizations of length 3000 . figure [ fig : bias ] illustrates the bias and efficiency of the maximum likelihood estimator in our framework .the definition of the hawkes model ( [ eq : hawkes_discr ] ) requires the kernel to be always positive .this implies , so the estimation of is expected to have positive bias for small values , as seen in figure [ fig : bias ] . on the other hand ,when approaches the critical value of 1 from below , the memory of the system increases dramatically and , for critical state of , the memory becomes infinite .thus , for a realization of limited length , the finite size will play a very important role and will result in a systematic negative bias for .this reasoning is supported by the evidence presented in figure [ fig : bias ] , where one observes large systematic bias for . 
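A sketch of the residual-based goodness-of-fit test described above: the fitted compensator is evaluated at the event times, the rescaled inter-event gaps are mapped through 1 - exp(-gap), and a Kolmogorov-Smirnov test against the standard uniform distribution is applied. The closed-form compensator below is specific to the exponential kernel, and the notation is ours.

```python
import numpy as np
from scipy.stats import kstest

def residual_process(t, mu, n, tau):
    """Compensator Lambda(t_k) of an exponential-kernel Hawkes process,
    evaluated at the event times t_1 < ... < t_N."""
    Lam = np.empty(len(t))
    for k, tk in enumerate(t):
        Lam[k] = mu * tk + n * np.sum(1.0 - np.exp(-(tk - t[:k]) / tau))
    return Lam

def rescaling_test(t, mu, n, tau):
    """KS p-value for uniformity of u_k = 1 - exp(-(Lambda(t_k) - Lambda(t_{k-1})))."""
    gaps = np.diff(residual_process(t, mu, n, tau))
    u = 1.0 - np.exp(-gaps)
    return kstest(u, "uniform").pvalue

# e.g. with event times `ts` from the thinning sketch and parameters from `fit_hawkes`:
# print(rescaling_test(ts, *fit_hawkes(ts, T=5000)))
```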
for values of the branching ratio not too close to or , the bias is very small for almost all reasonable realization lengths ( longer than 200 to 400 points ) . we also find that the bias for close to strongly depends on the realization length . finally , figure [ fig : bias ] illustrates the high efficiency of the maximum likelihood estimator : for values of , the estimation error measured with the 90% quantile ranges does not exceed .

guttorp , p. , & thorarinsdottir , t. l. ( 2012 ) . in e. porcu , j. m. montero , & m. schlather ( eds . ) _ advances and challenges in space - time modelling of natural events _ , pp . 79 - 102 . berlin , heidelberg : springer .

[ figure : realizations of the acd process ( left column ) and of the hawkes process ( right column ) simulated with parameters obtained by calibrating to the acd realization , for four parameter cases ( a)-(d ) . ]

[ figure : parameters of the hawkes model calibrated to acd realizations : ( a ) background intensity , ( b ) branching ratio , ( c ) characteristic time of the kernel , and ( d ) p - value of the goodness - of - fit test ( dashed line : 10% level ) for cases ( i)-(v ) ; black line : mean p - value for case ( i ) , shaded area : 95% quantile range for case ( i ) , dotted lines : mean p - values for cases ( ii)-(v ) . ]

[ figure : branching ratio of the hawkes model calibrated to acd realizations over a grid of acd parameter values , corrected for the finite - sample estimation bias determined in [ apx : bias ] ; the dashed line delineates the region where the goodness - of - fit test rejects the null hypothesis ( see text ) . ]

[ figure : differences between the estimated hawkes parameters ( background intensity , branching ratio , characteristic time of the kernel ) and the true values used for the generation of the time series ; panel ( d ) shows the p - value of the kolmogorov - smirnov test for standard uniformity of the transformed durations of the residual process ; black lines : means , shaded areas : 90% , 50% and 10% quantile ranges . ]
|
in order to disentangle the internal dynamics from exogenous factors within the autoregressive conditional duration ( acd ) model , we present an effective measure of endogeneity . inspired from the hawkes model , this measure is defined as the average fraction of events that are triggered due to internal feedback mechanisms within the total population . we provide a direct comparison of the hawkes and acd models based on numerical simulations and show that our effective measure of endogeneity for the acd can be mapped onto the `` branching ratio '' of the hawkes model .
|
there are two basic categories of discrete - time controlled markov processes that deal with random temporal horizons .the first is the well - known _ optimal stopping problem _ , in which the random horizon arises from some dynamic optimization protocol based on the past history of the process .the random ` stopping time ' thus generated is regarded as a decision variable .this problem arises in , among other areas , stochastic analysis , mathematical statistics , mathematical finance , and financial engineering ; see the comprehensive monograph for details and further references .the second is relatively less common , and is characterized by the fact that the random horizon arises as a result of an endogenous event of the stochastic process , e.g. , the process hitting a particular subset of the state - space , variations in the process paths crossing a certain threshold .this problem arises in , among others , optimization of target - level criteria , optimal control of retirement investment funds , minimization of ruin probabilities in insurance funds , ` satisfaction of needs ' problems in economics , risk minimizing stopping problems , attainability problems under stochastic perturbations , and optimal control of markov control processes up to an exit time .the problem treated in this article falls under the second category above . in broad strokes ,we consider a discrete - time markov control process with borel state and action spaces .we assume that there is a certain target set located inside a safe region , the latter being a subset of the state - space .the problem is to maximize the probability of attaining the target set before exiting the safe set ( or equivalently , hitting the cemetery set or unsafe region ) .this ` reach a good set while avoiding a bad set ' formulation arises in , e.g. , air traffic control , where aircraft try to reach their destination while avoiding collision with other aircraft or the ground despite uncertain weather conditions .it also arises in portfolio optimization , where it is desired to reach a target level of wealth without falling below a certain baseline capital with high probability . finally , it forms the core of the computation of safe sets for hybrid systems where the ` good ' and the ` bad ' sets represent states from which a discrete transition into the unsafe set is possible .special cases of this problem have been investigated in , e.g. , in the context of air traffic applications , in the context of probabilistic safety , in the context of maximizing the probability of attaining a preassigned comfort level of retirement investment funds .it is clear from the description of our problem in the preceding paragraph that there are two random times involved , namely , the hitting times of the target and the cemetery sets . in this articlewe formulate our problem as the maximization of an expected total reward accumulated up to the minimum of these two hitting times . as such, this formulation falls under the broad framework of optimal control of markov control processes up to an exit time , which has a long and rich history .it has mostly been studied as the minimization of an expected total cost until the first time that the state enters a given target set , see e.g. , ( * ? ? ?* chapter ii ) , ( * ? ? 
?* chapter 8) , and the references therein .in particular , if a unit cost is incurred as long as the state is outside the target set , then the problem of minimizing the cost accumulated until the state enters the target is known variously as the _ pursuit problem _ , _ transient programming _ , the _ first passage problem _ , the _ stochastic shortest path problem _ , and _ control up to an exit time _ . herewe exploit certain additional structures of our problem in the dynamic programming equations that we derive leading to methods fine - tuned to the particular problem at hand .our main results center around the assertion that there exists a deterministic stationary policy that maximizes the probability of hitting the target set before the cemetery set .this maximal probability as a function of the initial state is the optimal value function for our problem .we obtain a bellman equation for our problem which is solved by the optimal value function .furthermore , we provide martingale - theoretic conditions characterizing ` thrifty ' , ` equalizing ' , and optimal policies via methods derived from ; see also and the references therein for martingale characterization of average optimality .the principal techniques employed in this article are similar to the ones in , where the authors studied optimal control of a markov control process up its first entry time to a safe set . in we developed a recovery strategy to enter a given target set from its exterior while minimizing a discounted cost .the problem was posed as one of minimizing the sum of a discounted cost - per - stage function up to the first entry time to a target set , namely , minimize ] is a discount factor .here we extend this approach to problems with two sets , a target and a cemetery , and the case of .this article unfolds as follows .the main results are stated in [ s : results ] . in [ s : prelims ] we define the general setting of the problem , namely , markov control processes on polish spaces , their transition kernels , and the admissible control strategies . in [ s : mainres ] we present our main theorem [ t : exist ] which guarantees the existence of a deterministic stationary policy that leads to the maximal probability of hitting the target set while avoiding the specified dangerous set , and also provides a bellman equation that the value function must satisfy . in [ s : martchar ] we look at a martingale characterization of the optimal control problem ; thrifty and equalizing policies are defined in the context of our problem , and we establish necessary and sufficient conditions for optimality in terms of thrifty and equalizing policies in theorem [ t : martcharpolicy ] .we discuss related reward - per - stage functions and their relationships to our problem and treat several examples in [ s : disc ] .proofs of the main results appear in [ s : proof ] .the article concludes in [ s : concl ] with a discussion of future work .our main results are stated in this section after some preliminary definitions and conventions .we employ the following standard notations .let denote the natural numbers and denote the nonnegative integers .let be the usual indicator function of a set , i.e. , and otherwise . for real numbers and let .a function restricted to is depicted as .given a nonempty borel set ( i.e. , a borel subset of a polish space ) , its borel -algebra is denoted by . by convention , when referring to sets or functions , `` measurable '' means `` borel - measurable . 
''if and are nonempty borel spaces , a _ stochastic kernel _ on given is a function such that is a probability measure on for each fixed , and is a measurable function on for each fixed .we briefly recall some standard definitions below , see , e.g. , for further details .a _ markov control model _ is a five - tuple consisting of a nonempty borel space called the _ state - space _ , a nonempty borel space called the _ control _ or _ action set _ , a family of nonempty measurable subsets of , where denotes the set of _ feasible controls _ or _actions _ when the system is in state and with the property that the set of feasible state - action pairs is a measurable subset of , a stochastic kernel on given called the _ transition law _ , and a measurable function called the _ reward - per - stage function_. [ a : basic ] the set of feasible state - action pairs contains the graph of a measurable function from to .consider the markov model , and for each define the space of _ admissible histories _ up to time as and .a generic element of , which is called an admissible -history , or simply -history , is a vector of the form , with for , and .hereafter we let the -algebra generated by the history be denoted by , .recall that a _ policy _ is a sequence of stochastic kernels on the control set given satisfying the constraint .the set of all policies is denoted by .let be the measurable space consisting of the ( canonical ) sample space and let be the corresponding product -algebra .the elements of are sequences of the form with and for all ; the projections and from to the sets and are called _ state _ and _ control _ ( or _ action _ ) variables , respectively .let be an arbitrary control policy , and let be an arbitrary probability measure on , referred to as the initial distribution . by a theorem of ionescu - tulcea (* chapter 3 , 4 , theorem 5 ) , there exists a unique probability measure on supported on , such that for all , , , , we have , [ e : probmeasure ] [ d : mcp ] the stochastic process is called a discrete - time _ markov controlprocess_. we note that the markov control process in definition [ d : mcp ] is not necessarily markovian in the usual sense due to the dependence on the entire history in ; however , it is well - known ( * ? ? ?* proposition 2.3.5 ) that if is restricted to a suitable subclass of policies , then is a markov process .let denote the set of stochastic kernels on given such that for all , and let denote the set of all measurable functions satisfying for all . the functions in are called _ measurable selectors _ of the set - valued mapping .recall that a policy is said to be _randomized markov _ if there exists a sequence of stochastic kernels such that ; _ deterministic markov _ if there exists a sequence of functions such that ; _ deterministic stationary _ if there exists a function such that . as usuallet , , , and denote the set of all randomized history - dependent , randomized markov , deterministic markov , and deterministic stationary policies , respectively .the transition kernel in under a policy is given by , which is defined as the transition kernel .occasionally we suppress the dependence of on and write in place of , and .we simply write for a policy .let and be two nonempty measurable subsets of with .let be the first hitting times of the above sets .. 
] these random times are stopping times with respect to the filtration .suppose that the objective is to maximize the probability that the state hits the set before exiting the set ; in symbols the objective is to attain where the is taken over a class of admissible policies .[ pgr : policies ] _ admissible policies ._ it is clear at once that the class of admissible policies for the problem is different from the classes considered in [ s : prelims ] .indeed , since the process is killed at the stopping time , it follows that the class of admissible policies should also be truncated at the stage . for a given stage define the -th policy element only on the set .note that with this definition becomes a -measurable randomized control .it is also immediate from the definitions of and that if the initial condition , then the set of admissible policies is empty in the sense that there is nothing to do by definition .indeed , in this case and no control is needed .we are thus interested only in , for otherwise the problem is trivial .in other words , the domain of is contained in the ` spatial ' region .equivalently , in view of the definitions of the ` temporal ' elements and , is well - defined on the set .we re - define , and also let to be the set of measurable selectors of the set - valued map ._ throughout this subsection we shall denote by the class of markov policies such that if , then is defined on for each . _[ pgr : recalldef ] recall that a transition kernel on a measurable space given another measurable space is said to be _ strongly feller _ if the mapping is continuous and bounded for every measurable and bounded function .a function is _ upper semicontinuous _( u.s.c . )if for every sequence converging to , we have ; or , equivalently , if for every , the set is closed in . a set - valued map between topological spaces is _ upper hemicontinuous at a point _ if for every neighborhood of there exists a neighborhood of such that implies that ; is _ upper hemicontinuous _ if it is upper hemicontinuous at every in its domain . if is equipped with a -algebra , then the set - valued map is called _ weakly measurable _ if for every open , where is the lower inverse of , defined by .see , e.g. , ( * ? ? ?* chapters 17 - 18 ) for further details on set - valued maps . whenever is a nonempty measurable set and we are concerned with any set - valued map , we let be equipped with the trace of on .let denote the convex cone of nonnegative , bounded , and measurable real - valued functions on , and we define .[ a : key ] in addition to assumption [ a : basic ] , we stipulate that 1 .the set - valued map is compact - valued , upper hemicontinuous , and weakly measurable ; 2 .the transition kernel on given is strongly feller , i.e. , the mapping is continuous and bounded for all bounded and measurable functions .the following theorem gives basic existence results for the problem ; a proof is presented in [ s : mainproof ] .[ t : exist ] suppose that assumption holds , and that is finite for every policy in .then : 1 .the value function is a pointwise bounded and measurable solution to the _ bellman equation _ in : moreover , is minimal in .2 . 
there exists a measurable selector such that attains the maximum in for each , which satisfies[co : exist:2 ] where is as defined in .moreover , the deterministic stationary policy is optimal .conversely , if is optimal , then it satisfies .[ pgr : altrep ] as a matter of notation we shall henceforth represent the functional equation with the less formal version : note that the measure is not well - defined for for in view of the definition in paragraph [ pgr : policies ] .as such the integral is undefined for . however , to preserve the form of and simplify notation , we shall stick to the representation by defining any object that is written as an integral of a bounded measurable function with respect to the measure to be whenever and ._ we now return to the more general class of all possible policies ( not just markovian ) , denoted by ._ fix an initial state and a policy . for each define the random variable .let us consider the process defined by we follow the basic framework of .a policy is called _ thrifty at _ if , and _ equalizing at _ if .the action , defined on , is said to _conserve at _ if .connections between equalizing , thrifty , and optimal policies for our problem are established by the following [ t : martcharpolicy ] a policy is * equalizing at if and only if = 0;\ ] ] * optimal at if and only if is both thrifty and equalizing . a connection between thrifty policies , the process defined in , and actions conserving the optimal value function is established by the following [ t : thriftychar ] for a given policy and an initial state the following are equivalent : 1 . is trifty at ; 2 . is a -martingale under ; 3 .-almost everywhere on the action conserves .it is possible to make a connection , relying purely on martingale - theoretic arguments , between the process and the value function corresponding to an optimal policy .this is the content of the following theorem , which may be viewed as a partial converse to theorem [ t : thriftychar ] .[ t : vprimechar ] suppose that either one of the stopping times and defined in is finite for every policy in .let be a nonnegative measurable function such that , , and bounded above by elsewhere .for a policy define the process as where is as in .if for some policy the process is a -martingale under , then .proofs of the above results are presented in [ s : martproofs ] .let us look at the stopped process .it is clear that in this case whenever and whenever for all policies in ; otherwise for we have since the -th term on the right - hand side is ] over a class of policies .this corresponds to maximization of the reward until exit from the set .the value - iteration functions corresponding to this problem can be written down readily : for and let .\end{aligned}\ ] ] our problem corresponds to the case of .modulo the additional technical complications involving integrability of the value - iteration functions at each stage and the total reward corresponding to initial conditions being well - defined real numbers , the analysis of this more general problem can be carried out in exactly the same way as we do below for the problem . while the above more general problem treats both the target set and the cemetery state equally , the bias towards the target set is provided in our problem by the special structure of the reward . 
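when the state and action spaces are finite , the value - iteration recursion above can be run directly . the following python sketch is purely illustrative : the five - state chain , the two actions , the target set and the cemetery set are placeholders invented for the example ( the paper works on general polish spaces ) . the iteration starts from the indicator of the target set , increases monotonically to a fixed point of the bellman equation , and the maximizing action at each state outside the target yields a deterministic stationary policy of the kind guaranteed by the existence theorem .

import numpy as np

# illustrative finite model: states {0,...,4}, two actions.
# O = {4} is the target, K = {1,2,3,4}; state 0 plays the role of the
# cemetery X \ K.  all numbers are placeholders, not taken from the paper.
n_states, n_actions = 5, 2
O = np.array([4])
K = np.array([1, 2, 3, 4])
rng = np.random.default_rng(0)
Q = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))   # Q[a, x, y]

one_O = np.isin(np.arange(n_states), O).astype(float)
one_KmO = (np.isin(np.arange(n_states), K) & ~np.isin(np.arange(n_states), O)).astype(float)

# value iteration for the bellman equation
#   v(x) = 1_O(x) + 1_{K\O}(x) * max_a  sum_{y in K} Q(y|x,a) v(y)
v = one_O.copy()
for _ in range(500):
    cont = np.einsum('axy,y->ax', Q[:, :, K], v[K])    # sum over y in K only
    v_new = one_O + one_KmO * cont.max(axis=0)
    if np.max(np.abs(v_new - v)) < 1e-12:
        break
    v = v_new

policy = cont.argmax(axis=0)    # only meaningful on K \ O
print("v* =", np.round(v, 4))
print("greedy action per state (relevant on K\\O only):", policy)

the printed vector approximates the maximal probability of reaching the target before the cemetery from each state .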
from the general framework it is not difficult to arrive at reward - per - stage functions that are meaningful in the context of reachability , avoidance , and safety .for the sake of simplicity , till the end of this subsubsection we suppose that for all initial conditions and admissible policies the stopping times and are finite -almost surely . with this assumption in place ,let us look at some examples : * consider a discounted version of our problem , namely , let ,\ ] ] where , 1[ ] .we can write = ( 1-\alpha)^{-1}{\ensuremath{\mathsf{e}}}^{\pi , \tilde\tau}_x\bigl[{\ensuremath{\boldsymbol{1}_{o}}}(x_{\tilde\tau}){\ensuremath{\boldsymbol{1}_{\{\tilde\tau { \ensuremath{\leqslant}}\tau{\ensuremath{\wedge}}\tau'\}}}}\bigr].\ ] ] in view of the definitions of and we get ] .indeed , we have = { \ensuremath{\mathsf{e}}}^{\pi , \tilde\tau}_x\biggl[\sum_{t=0}^{\tilde\tau}{\ensuremath{\boldsymbol{1}_{o}}}(x_t){\ensuremath{\boldsymbol{1}_{\{t{\ensuremath{\leqslant}}\tau{\ensuremath{\wedge}}\tau'\}}}}\biggr]\\ & = { \ensuremath{\mathsf{e}}}^\pi_x\biggl[\sum_{n=0}^\infty \alpha^n(1-\alpha)\sum_{t=0}^n { \ensuremath{\boldsymbol{1}_{o}}}(x_t){\ensuremath{\boldsymbol{1}_{\{t{\ensuremath{\leqslant}}\tau{\ensuremath{\wedge}}\tau'\}}}}\biggr]\\ & = { \ensuremath{\mathsf{e}}}^\pi_x\biggl[\sum_{n=0}^\infty \sum_{t=0}^n \alpha^n{\ensuremath{\boldsymbol{1}_{o}}}(x_t){\ensuremath{\boldsymbol{1}_{\{t{\ensuremath{\leqslant}}\tau{\ensuremath{\wedge}}\tau'\ } } } } - \sum_{n=0}^\infty\sum_{t=0}^n\alpha^{n+1}{\ensuremath{\boldsymbol{1}_{o}}}(x_t){\ensuremath{\boldsymbol{1}_{\{t{\ensuremath{\leqslant}}\tau{\ensuremath{\wedge}}\tau'\}}}}\biggr]\\ & = { \ensuremath{\mathsf{e}}}^\pi_x\biggl[\sum_{t=0}^\infty \sum_{n = t}^\infty \alpha^n{\ensuremath{\boldsymbol{1}_{o}}}(x_t){\ensuremath{\boldsymbol{1}_{\{t{\ensuremath{\leqslant}}\tau{\ensuremath{\wedge}}\tau'\ } } } } - \sum_{t=0}^\infty\sum_{n = t}^\infty\alpha^{n+1}{\ensuremath{\boldsymbol{1}_{o}}}(x_t){\ensuremath{\boldsymbol{1}_{\{t{\ensuremath{\leqslant}}\tau{\ensuremath{\wedge}}\tau'\}}}}\biggr]\\ & = { \ensuremath{\mathsf{e}}}^\pi_x\biggl[\sum_{t=0}^\infty \frac{\alpha^t}{1-\alpha}{\ensuremath{\boldsymbol{1}_{o}}}(x_t){\ensuremath{\boldsymbol{1}_{\{t{\ensuremath{\leqslant}}\tau{\ensuremath{\wedge}}\tau'\ } } } } - \sum_{t=0}^\infty\frac{\alpha^{t+1}}{1-\alpha}{\ensuremath{\boldsymbol{1}_{o}}}(x_t){\ensuremath{\boldsymbol{1}_{\{t{\ensuremath{\leqslant}}\tau{\ensuremath{\wedge}}\tau'\}}}}\biggr]\\ & = { \ensuremath{\mathsf{e}}}^\pi_x\biggl[\sum_{t=0}^\infty\alpha^t{\ensuremath{\boldsymbol{1}_{o}}}(x_t){\ensuremath{\boldsymbol{1}_{\{t{\ensuremath{\leqslant}}\tau{\ensuremath{\wedge}}\tau'\}}}}\biggr ] = { \ensuremath{\mathsf{e}}}^\pi_x\biggl[\sum_{t=0}^{\tau{\ensuremath{\wedge}}\tau'}\alpha^t{\ensuremath{\boldsymbol{1}_{o}}}(x_t)\biggr ] = v^{(1)}(\pi , x ) . \end{aligned}\ ] ] in this setting we do not have the factor outside the expectation as in the second version of above , and it demonstrates that maximizing over admissible policies leads to maximizing the probability of the event , where controls the values of as before . 
*consider the reward - per - stage function .under integrability assumption on under all admissible policies , we have \\ & = { \ensuremath{\mathsf{e}}}^\pi_x\biggl[\sum_{t=0}^{\tau{\ensuremath{\wedge}}\tau'}\bigl({\ensuremath{\boldsymbol{1}_{o}}}(x_t ) - { \ensuremath{\boldsymbol{1}_{k{\ensuremath{\smallsetminus}}o}}}(x_t ) - {\ensuremath{\boldsymbol{1}_{x{\ensuremath{\smallsetminus}}k}}}(x_t)\bigr)\biggr]\\ & = { \ensuremath{\mathsf{p}}}^\pi_x(\tau < \tau ' ) - { \ensuremath{\mathsf{p}}}^\pi_x(\tau ' < \tau ) - { \ensuremath{\mathsf{e}}}^\pi_x[\tau{\ensuremath{\wedge}}\tau ' ] . \end{aligned}\ ] ] clearly , maximization of over admissible policies leads to both the maximal enlargement of the set and minimization of the hitting time on this set . *consider .this leads to the expected total reward until escape from as[page : v3 ] = { \ensuremath{\mathsf{p}}}^\pi_x(\tau < \tau ' ) - { \ensuremath{\mathsf{p}}}^\pi_x(\tau ' < \tau).\ ] ] since , maximization of over admissible policies maximizes the probability of the event .thus , maximizing over is a different formulation of the objective of our problem .the above analysis also shows that the same objective results if we take the reward - per - stage function to be for any . *suppose that is integrable for all admissible policies and consider the reward - per - stage .let .\ ] ] maximization of over admissible policies leads to large values of on an average .this is a form of safety problem , the state stays inside for as long as possible on an average .* suppose that is integrable for all admissible policies and consider for .consider ,\ ] ] we see that ] yields a policy that as we have seen before ( this is above ) . on the other hand , maximizing ] yields a policy that maximizes .however , maximizing ] , where is the action or control variable .let , be subsets of with . since is finite , assumption [ a : key ] is satisfied .consider the problem in the context of this markov chain initialized at some . by theorem [ t : exist ]the optimal value function must satisfy the equation for all .if the control actions are finite in number , searching for a maximizer over an enumerated list all control actions corresponding to each of the states may be possible if the state and action spaces are not too large .however , the memory requirement for storing such enumerated lists clearly increases exponentially with the dimension of the state and action spaces if the markov chain is extracted by a discretization procedure based on a grid on the state - space of a discrete - time markov process evolving , for example , on a subset of euclidean space . as an alternative ,it is possible to search for a maximizer from a parametrized family of functions ( vectors ) by applying well - known suboptimal control strategies ( * ? ? ?* chapter 6 ) , .note that in the case of an uncontrolled markov chain the equation above reduces to , and can be solved as a linear equation on for the vector .thus , solving for yields a method of calculating the probability of hitting before hitting in uncontrolled markov chains , and can serve as a verification tool . in certain cases of uncountable state - space markov chainsthe policies and value functions corresponding to maximization of can be explicitly calculated for small values of . 
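returning briefly to the finite uncontrolled case mentioned above , the verification tool can be sketched in a few lines : on the states of k outside the target the bellman equation becomes a linear system , whose solution can be compared against a direct monte carlo estimate of the probability of hitting the target before the cemetery . the chain below is an arbitrary toy example ( python with numpy assumed ) , not the system studied in the paper .

import numpy as np

rng = np.random.default_rng(1)
n = 6
P = rng.dirichlet(np.ones(n), size=n)        # uncontrolled transition matrix
O = [5]                                      # target set
cemetery = [0]                               # X \ K
R = [s for s in range(n) if s not in O + cemetery]   # K \ O

# on K \ O the bellman equation reduces to the linear system
#   (I - P_RR) v_R = P_RO 1 ,   with v = 1 on O and v = 0 on the cemetery.
A = np.eye(len(R)) - P[np.ix_(R, R)]
b = P[np.ix_(R, O)].sum(axis=1)
v_R = np.linalg.solve(A, b)

# cross-check by direct simulation of the hitting probability
def hit_before(x0, n_runs=20000):
    hits = 0
    for _ in range(n_runs):
        x = x0
        while x not in O and x not in cemetery:
            x = rng.choice(n, p=P[x])
        hits += x in O
    return hits / n_runs

for x, val in zip(R, v_R):
    print(f"state {x}: linear solve {val:.3f}, monte carlo {hit_before(x):.3f}")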
as an illustration ,consider a scalar linear controlled system here is the state of the system at time , is the action or control at time taking values in ] , safe set is ] , where is the cumulative distribution function of the standard normal random variable .the function can be expressed in terms of the complementary error function , where is the standard error function . ] as , and }g(x , a) ] , we have the constrained maximizer as , where is the standard saturation function .equals if , if and otherwise . ] in other words , we get a bang - bang controller since on the interior of .it is easy to discern the maximizer from the accompanying figure .the corresponding maximal probability is found by substituting the above optimizer back into the dynamic programming equation , and this yields .for it turns out that we can no longer compute the optimizer corresponding to the first stage in closed form ; the optimizer for the second stage is , of course , calculated above .it is also evident from the accompanying figure that even in this simple example there will arise nontrivial issues with nonconvexity for .so far in our discussion we have not addressed the issue of uniqueness of the optimal policy in our problem .( theorem [ t : exist ] shows that an optimal policy exists , so the uniqueness question is meaningful . ) it becomes clear from considerations of the geometry of the sets and in simple examples that the optimal controller in theorem [ t : exist]_[co : exist:2 ] _ is nonunique in general .for instance , consider the linear system considered in above with initial condition , and let - 2 , -1[\;\cup\;]1 , 2[ ] . since the noise is symmetric about the origin , from symmetry considerations it immediately follows that the optimal controller is nonunique at the origin .note that is , of course , defined on .let us digress a little and consider the following probabilistic safety problem : maximize the probability that the state remains inside a safe set for stages , beginning from an initial condition .this , as mentioned earlier , is the probabilistic safety problem addressed in .of course the probability of staying inside for the first stages is given by ] . therefore , in this particular problem there is no difference between the maximal values of ] . however , the policies arising from the two different maximizations are quite unlike each other . indeed , whereas the former yields a deterministic markov policy whose every element is defined on all of , the stopping time version yields a deterministic markov policy whose -th element is defined on the set , just as discussed in paragraph [ pgr : policies ] . on the one hand note that the reward in the former case is not affected by further application of the control actions once the state has exited the safe set ; the policy resulting from this formulation , however , dictates that the control actions are carried out until ( and including ) the -th stage nonetheless . on the other hand , the reward in the latter stopping time version saturates at the stage the state leaves and future control actions are not defined .it is interesting to note that the bellman equation developed for probabilistic safety and reachability in may be obtained as a special case of in theorem [ t : exist ] above .this comes as no surprise .the problem of maximizing the probability of staying inside a ( measurable ) safe set for steps is given by the maximization of ] such that pointwise on . 
by definition of have { \ensuremath{\leqslant}}\sup_{\pi\in\pi_m}{\ensuremath{\mathsf{e}}}^\pi_x\biggl[\sum_{t=0}^{(n-1){\ensuremath{\wedge}}\tau{\ensuremath{\wedge}}\tau'}{\ensuremath{\boldsymbol{1}_{o}}}(x_t)\biggr ] = v_n(x),\ ] ] and the monotone convergence theorem shows that = { \ensuremath{\mathsf{e}}}^\pi_x\biggl[\sum_{t=0}^{\tau{\ensuremath{\wedge}}\tau'}{\ensuremath{\boldsymbol{1}_{o}}}(x_t)\biggr ] = v(\pi , x).\ ] ] taking the supremum over on the right - hand side shows that pointwise on .note that and for all ; therefore and .let us define the maps ,\\ \mathbb k\ni ( x , a){\ensuremath{\longmapsto}}t'v^\star(x , a ) & { \mathrel{\mathop:\!\!=}}\int_{k}q({\ensuremath{\mathrm{d}}}y|x , a ) v^\star(y)\in[0 , 1 ] .\end{aligned}\ ] ] we note that the transition kernel is strongly feller by assumption [ a : key ] , and therefore and are continuous functions on .moreover , for all we define since pointwise on , it follows from the definitions above and the monotone convergence theorem that for all and fix .since and are continuous functions on , for each both and are attained on . from the definition of inwe have for all .also , is a nondecreasing sequence of numbers bounded above by , and therefore it attains a limit .if this limit is strictly less than , standard easy arguments may be invoked to show that the sequence of continuous functions can not converge pointwise to on , which contradicts .it follows that whenever , together with this shows that satisfies the bellman equation pointwise on , i.e. , .we have already seen above that pointwise on . since , the reverse inequality follows from lemma [ l : dominate ] . therefore, we conclude that identically on .[ l : dsstrategy ] let be a deterministic stationary policy. then we have for the assertions are trivial .fix . 
from the definition of we have \nonumber\\ & = { \ensuremath{\mathsf{e}}}^{f^\infty}\biggl[{\ensuremath{\boldsymbol{1}_{o}}}(x_0){\ensuremath{\boldsymbol{1}_{\{\tau{\ensuremath{\wedge}}\tau ' = 0\ } }} } + { \ensuremath{\boldsymbol{1}_{\{\tau{\ensuremath{\wedge}}\tau ' > 0\}}}}\sum_{t=1}^{\tau{\ensuremath{\wedge}}\tau'}{\ensuremath{\boldsymbol{1}_{o}}}(x_t)\,\bigg|\,x_0 = x\biggr]\nonumber\\ & = { \ensuremath{\boldsymbol{1}_{o}}}(x ) + { \ensuremath{\mathsf{e}}}^{f^\infty}\biggl[{\ensuremath{\boldsymbol{1}_{\{\tau{\ensuremath{\wedge}}\tau ' > 0\}}}}\sum_{t=1}^{\tau{\ensuremath{\wedge}}\tau'}{\ensuremath{\boldsymbol{1}_{o}}}(x_t)\,\bigg|\ , x_0 = x\biggr ] .\end{aligned}\ ] ] since and this event is -measurable , = { \ensuremath{\boldsymbol{1}_{k{\ensuremath{\smallsetminus}}o}}}(x){\ensuremath{\mathsf{e}}}^{f^\infty}\biggl[\sum_{t=1}^{\tau{\ensuremath{\wedge}}\tau'}{\ensuremath{\boldsymbol{1}_{o}}}(x_t)\,\bigg|\,x_0 = x\biggr].\ ] ] therefore , \\ & = { \ensuremath{\boldsymbol{1}_{o}}}(x ) + { \ensuremath{\boldsymbol{1}_{k{\ensuremath{\smallsetminus}}o}}}(x){\ensuremath{\mathsf{e}}}^{f^\infty}\biggl[\sum_{t=1}^{\tau}{\ensuremath{\boldsymbol{1}_{o}}}(x_{t{\ensuremath{\wedge}}\tau{\ensuremath{\wedge}}\tau'})\,\bigg|\,x_0 = x\biggr ] .\end{aligned}\ ] ] considering the fact that for by definition , the markov property shows that the second term on the right - hand side above equals \,\bigg|\,x_0 = x\biggr]\\ & = { \ensuremath{\boldsymbol{1}_{k{\ensuremath{\smallsetminus}}o}}}(x ) \int_k q({\ensuremath{\mathrm{d}}}y|x , f){\ensuremath{\mathsf{e}}}^{f^\infty}\biggl[\sum_{t=1}^{\tau}{\ensuremath{\boldsymbol{1}_{o}}}(x_{t{\ensuremath{\wedge}}\tau{\ensuremath{\wedge}}\tau'})\,\bigg|\,x_{1{\ensuremath{\wedge}}\tau{\ensuremath{\wedge}}\tau ' } = y\biggr]\\ & = { \ensuremath{\boldsymbol{1}_{k{\ensuremath{\smallsetminus}}o}}}(x ) \int_k q({\ensuremath{\mathrm{d}}}y|x , f ) v(f^\infty , y ) .\end{aligned}\ ] ] collecting the above equations we obtain , and this completes the proof .we are now ready for the proof of the first main result .\(i ) note that by definition is nonnegative .the fact that satisfies the bellman equation follows from lemma [ l : vstarsatisfy ] . in view of the definition of in theorem [ t : exist ] and lemma [ l : vstarsatisfy ] we conclude that is minimal in because pointwise on implies that pointwise on for any .\(ii ) lemma [ l : l0tol0 ] guarantees the existence of a selector such that holds .iterating the equality ( or ) it follows as in the proof of lemma [ l : dominate ] that for , + { \ensuremath{\mathsf{e}}}^{f_\star^\infty}_x\bigl[{\ensuremath{\boldsymbol{1}_{k{\ensuremath{\smallsetminus}}o}}}(x_{(n-1){\ensuremath{\wedge}}\tau{\ensuremath{\wedge}}\tau'})({\ensuremath{\boldsymbol{1}_{k } } } v^\star)(x_{n{\ensuremath{\wedge}}\tau{\ensuremath{\wedge}}\tau'})\bigr].\ ] ] taking limits as on the right , the monotone and dominated convergence theorems give .since is arbitrary , on and that is an optimal policy .conversely , by lemma [ l : dsstrategy ] it follows that under the stationary deterministic strategy we have with in place of , which is identical to . 
for the purposes of this subsection we let denote the set of admissible policies such that is defined on whenever .[ l : bothsupmart ] for every policy and initial state the processes and are both nonnegative - supermartingales under .it is clear that both processes are nonnegative and -adapted .fix , an initial state , a policy , and on the event fix a history .let on .then since , we have since , it follows that therefore , keeping in mind the definition of above , & = w_n(\pi , x ) + { \ensuremath{\boldsymbol{1}_{k{\ensuremath{\smallsetminus}}o}}}(x_{(n-1){\ensuremath{\wedge}}\tau{\ensuremath{\wedge}}\tau ' } ) t'v^\star(x_{n{\ensuremath{\wedge}}\tau{\ensuremath{\wedge}}\tau ' } , a_n)\nonumber\\ & { \ensuremath{\leqslant}}w_n(\pi , x ) + { \ensuremath{\boldsymbol{1}_{k{\ensuremath{\smallsetminus}}o}}}(x_{(n-1){\ensuremath{\wedge}}\tau{\ensuremath{\wedge}}\tau'})v^\star(x_{n{\ensuremath{\wedge}}\tau{\ensuremath{\wedge}}\tau'})\\ & = \zeta_n,\nonumber \end{aligned}\ ] ] where the inequality holds -almost surely .therefore , the process is a nonnegative - supermartingale , and hence also a - supermartingale . considering that the sequence is nondecreasing , from the definitions in and the fact that the process is a - supermartingale we see that the process is also a - supermartingale under .lemma [ l : bothsupmart ] confirms that both of the two adapted processes and converge almost surely and are nonincreasing in expectation , both under . let ] is nonincreasing with it follows that = { \ensuremath{\mathsf{e}}}^\pi_x[\zeta_n ] = \ldots = { \ensuremath{\mathsf{e}}}^\pi_x[\zeta_0 ] = v^\star(x) ] . as a resultwe have , and ( i ) follows .it follows readily from the definition of the stopping times and that the process defined in is a bounded process , and by assumption it is a -martingale under .doob s optional sampling theorem ( * ? ? ?* theorem 2 , p. 422 ) applied to at the stopping time gives us = { \ensuremath{\mathsf{e}}}^{\pi^\star}_x\bigl[\zeta'_0\bigr ] = v'(x),\ ] ] where the last equality follows from the definition of . fromwe get & = { \ensuremath{\mathsf{e}}}^{\pi^\star}_x\bigl[w_{\tau{\ensuremath{\wedge}}\tau'-1}(\pi^\star , x ) + { \ensuremath{\boldsymbol{1}_{k{\ensuremath{\smallsetminus}}o}}}(x_{\tau{\ensuremath{\wedge}}\tau'-1})\bigl({\ensuremath{\boldsymbol{1}_{k}}}\cdot v'\bigr)(x_{\tau{\ensuremath{\wedge}}\tau'})\bigr]\\ & = { \ensuremath{\mathsf{e}}}^{\pi^\star}_x\biggl[\sum_{t=0}^{\tau{\ensuremath{\wedge}}\tau'-1 } { \ensuremath{\boldsymbol{1}_{o}}}(x_t ) + { \ensuremath{\boldsymbol{1}_{k{\ensuremath{\smallsetminus}}o}}}(x_{\tau{\ensuremath{\wedge}}\tau'-1})\bigl({\ensuremath{\boldsymbol{1}_{k}}}\cdot v'\bigr)(x_{\tau{\ensuremath{\wedge}}\tau'})\biggr]\\ & = { \ensuremath{\mathsf{e}}}^{\pi^\star}_x\bigl[{\ensuremath{\boldsymbol{1}_{k{\ensuremath{\smallsetminus}}o}}}(x_{\tau{\ensuremath{\wedge}}\tau'-1})\bigl({\ensuremath{\boldsymbol{1}_{k}}}\cdot v'\bigr)(x_{\tau{\ensuremath{\wedge}}\tau'})\bigr ] .\end{aligned}\ ] ] by definition of and , equals on , and by our hypotheses the set is a -full - measure set . 
continuing from the last equality above we arrive at & = { \ensuremath{\mathsf{e}}}^{\pi^\star}_x\bigl[{\ensuremath{\boldsymbol{1}_{\{\tau{\ensuremath{\wedge}}\tau ' < \infty\}}}}\bigl({\ensuremath{\boldsymbol{1}_{k}}}\cdot v'\bigr)(x_{\tau{\ensuremath{\wedge}}\tau'})\bigr]\nonumber\\ & = { \ensuremath{\mathsf{e}}}^{\pi^\star}_x\bigl[{\ensuremath{\boldsymbol{1}_{\{\tau{\ensuremath{\wedge}}\tau ' < \infty\}}}}\bigl({\ensuremath{\boldsymbol{1}_{\{\tau < \tau'\}}}}{\ensuremath{\boldsymbol{1}_{k}}}(x_\tau)v'(x_{\tau } ) + { \ensuremath{\boldsymbol{1}_{\{\tau > \tau'\}}}}{\ensuremath{\boldsymbol{1}_{k}}}(x_{\tau'})v'(x_{\tau'})\bigr)\bigr]\nonumber\\ & = { \ensuremath{\mathsf{e}}}^{\pi^\star}_x\bigl[{\ensuremath{\boldsymbol{1}_{\{\tau{\ensuremath{\wedge}}\tau ' < \infty\}}}}{\ensuremath{\boldsymbol{1}_{\{\tau < \tau'\}}}}\bigr]\label{e : bydef}\\ & = { \ensuremath{\mathsf{p}}}^{\pi^\star}_x\bigl(\tau < \tau ' , \tau < \infty\bigr),\nonumber \end{aligned}\ ] ] where the equality in follows from the assumptions on and the definitions of and . collecting the equations abovewe get as asserted .it is of interest to note that the hypotheses of theorem [ t : vprimechar ] requires at least one of the stopping times or to be finite .let us examine the case of being on a set of positive probability .following the proof of theorem [ t : vprimechar ] , we see that in this case we have to agree on the value of on .if exists , then we can always let take this value on the set .however , the context of the problem offers another alternative , namely , to set on . this is because if for all , then the value of is of no consequence at all .the purpose of this article was to present a dynamic programming based solution to the problem of maximizing the probability of attaining a target set before hitting a cemetery set , and furnish an alternative martingale characterization of optimality in terms of thrifty and equalizing policies .several related problems of interest were sketched in [ s : disc : ss : gensetting ] .some of these problems do not admit an immediate solution in the dynamic programming framework we established here because of our central assumption that the cost - per - stage function is nonnegative .this issue deserves further investigation .the results in this article also provide clear indications to the possibility of developing verification tools for probabilistic computation tree logic in terms of dynamic programming operators .this matter is under investigation and will be reported in .implementation of the dynamic - programming algorithm in this article is challenging due to integration over subsets of the state - space , and suboptimal policies are needed . in this context development of a possible connection with ` greedy - time - optimal ' policies ( * ? ? ?* chapters 4 , 7 ) , originally proposed as a tractable alternative to optimal policies in demand - driven large - scale production systems , is being sought .the authors thank onsimo hernndez - lerma for helpful suggestions and pointers to references , and sean summers for posing the problem .bertsekas , d. p. , shreve , s. e. , 1978 .stochastic optimal control : the discrete - time case .139 of mathematics in science and engineering. academic press inc .[ harcourt brace jovanovich publishers ] , new york . karatzas , i. , sudderth , w. , 2009 .two characterizations of optimality in dynamic programming . 
applied mathematics & optimization ; http://www.springerlink.com/content/340m82862p817446/ . prandini , m. , hu , j. , 2006 . a stochastic approximation method for reachability computations . in : stochastic hybrid systems . vol . 337 of lecture notes in control and information sciences . springer , berlin , pp . 107 - 139 . watkins , o. , lygeros , j. , 2003 . stochastic reachability for discrete - time systems : an application to aircraft collision avoidance . in : 42nd ieee conference on decision and control . vol . 5 . pp . 5314 - 5319 . zhu , q. , guo , x. , 2006 . a semimartingale characterization of average optimal stationary policies for markov decision processes . journal of applied mathematics and stochastic analysis , art . id 81593 . http://www.hindawi.com/getarticle.aspx?doi=10.1155/jamsa/2006/81593 .
|
we present a dynamic programming - based solution to the problem of maximizing the probability of attaining a target set before hitting a cemetery set for a discrete - time markov control process . under mild hypotheses we establish that there exists a deterministic stationary policy that achieves the maximum value of this probability . we demonstrate how the maximization of this probability can be computed through the maximization of an expected total reward until the first hitting time to either the target or the cemetery set . martingale characterizations of thrifty , equalizing , and optimal policies in the context of our problem are also established .
|
neural networks consist of many nonlinear components ( neurons that beyond a threshold emit action potentials ) which are interdependent ( the output of a neuron is the input of another neuron in the network ) and form a complex system with new emergent properties that are not hold by each individual item in the system alone .the emergent property of this dynamical system is that a set of neurons will synchronize and fire impulses simultaneously . in the context of neuroscience , this emergent property is used to implement quite sophisticated and highly specialized `` logical '' functionalities such as memorization with hebbian learning , and recognition of patterns ( or memories ) .there at least two mechanisms by which such synchronous oscillations can take place : * synchrony can be a consequence of a common input produced by an oscillating neuron ( or set of neurons ) ( pacemakers ) .* synchrony can also be a consequence of an emergent population oscillation within a network of cells .there is no external coordination when this oscillation is built up , it is self - organized and its properties must be related to the structural way in which the networks are connected .each neuron can receive input from neighboring neurons of the network and from external impulses .these neural networks are also excited by electrical noise which is ubiquitous in the neuronal system and seem to be able to operate in such a noisy environment in a robust way .the main sources of noise are related to the synaptic connections and voltage - gated channels .the role of noise in the functioning of the non - linear nervous system is poorly understood but there are evidences of positive interactions of noise and nonlinearity in neuronal systems . in the present work we will investigate the relevance of noise in the synchronization of a neural network . in order to do so we will integrate numerically the fokker - planck equation associated to the stochastic system .this has the advantage of giving better accuracy in the tails of the distribution than solving the stochastic differential equations .integrating numerically this equation one can obtain a quick overview of the system dynamics and the time evolution of the probability density .we will show that noise can help the system building a synchronous oscillation .the starting point is the equation of motion of the single neuron , see section [ 1 ] .then , in section [ 2 ] , we describe the ensemble of neurons excited by noise using the fokker - planck equation , which we integrate numerically in different scenarios .we first show a noise induced transition in the distribution for a certain range of noise intensity ( see subsection [ 2 - 1 ] ) .next we introduce different situations where synchrony is attained in the presence of noise : first exciting the ensemble of neurons with an oscillating external input ( see subsection [ 2 - 2 ] ) , then introducing only a feedback term ( see subsection [ 2 - 3 ] ) and finally reinforcing the external excitation with the internal feedback ( see subsection [ 2 - 4 ] ) . 
in all these scenarios there exist a defined range of noise intensity where the synchronization is maximized .the impulse transmission in a neuron can be essentially described with the hodgkin - huxley model , .the fitzhugh - nagumo ( fhn ) model ( ) is a simplified variant of the previous model accounting for the essentials of the regenerative firing mechanism in an excitable nerve cell , namely , it has a stable rest state , and with an adequate amount of disturbance it generates a pulse with a characteristic magnitude of height and width . in the fhn modelall dynamical variables of the neuron are reduced to two quantities : the voltage of the membrane ( fast variable ) and the recovery variable which corresponds to the refractory properties of the membrane ( slow variable ) : the neuron is excited by an external input .here we have added white gaussian noise to the standard fhn equation obtaining a stochastic differential equation ( sde ) .for all the cases described below we will study this system for and .the neuron is said to `` fire '' or emit a `` spike '' when .this impulse is then transmitted along the axon and elicits the emission of neurotransmitters which in turn excite other neurons .let us first consider the deterministic system ( ) with constant excitation .a typical feature of a neuron system is that the information is encoded in the firing rate . for constant inputthere is a threshold amplitude ( in this case ) beyond which the output is a periodic oscillating system ( a train of spikes with ) , whose frequency ( and not the amplitude ) depends on the amplitude of the exciting force . in fig .[ detfhn]-left we show the projection onto the phase space of two of these attractors excited by different inputs of constant amplitude .all the orbits are attracted to this periodic attractor .such attractors will also be seen later in the extension to a noisy system .if the excitation is below the threshold then the system goes to the stable rest state ( there is no oscillation in this case ) .we study now the response of the system to a time varying force of the type , with , and various noise intensities . for a given initial condition ( )=(0,0 ), we perform a monte carlo simulation : integrate the sde for different noise realizations and average over the ensemble to obtain .we can evaluate the spectrum and the signal - to - noise - ratio ( snr ) .the system shows a maximal susceptibility to the external periodic forcing for , see fig .[ stofhn ] .this feature will also survive when we consider an ensemble of neurons and will be decisive for the behavior of the network .the enhancement of the system response due to noise is a characteristic trace of stochastic resonance ( for a comprehensive review on prominent references see ) .noise can induce hopping from the resting state to excited state and vice - versa .the mean time that it takes a neuron to move from one state to the other for a given noise intensity is called mean scape time .stochastic resonance takes place when there exists some kind of synchronization between the forcing time and the mean scape time , i.e. 
when there exists a time - scale resonance condition .for instance , for the double - well potential , this happens when the mean scape time is comparable with half the period of the periodic forcing .in the previous section we have introduced the behavior of a single neuron described by an fhn oscillator excited by constant force or by a periodic force plus noise .but what happens when we consider an ensemble of such excitable elements ? .in order to describe this set of oscillators we will use the fokker - planck ( fp ) equation , which is a partial differential equation , describing the time evolution of the probability density associated to a sde .if one needs to obtain global information from a system of independent neurons excited by noise , one can either run a monte carlo simulation for the sde for all the initial conditions ( one per neuron of the network ) and group the solution in histograms or solve the fp equation for the density in phase space with a given initial condition . integrating the fp equationwe get better accuracy in the tails , where the probability is very low , than running the monte carlo simulation . these tails will be particularly important in the fhn stochastic equation since the supra - threshold excursions in the attractor region , that are relevant for the network excitation , are of low probability .we integrate numerically the fokker - planck equation of the previously described fhn system : using an alternating semi - implicit scheme , to obtain the time evolution of the probability density for any given initial condition .this equation gives the distribution of the state of our network of neurons , the neurons can be excited or at rest depending on the input current and noise .if we monitor the activity of this network from outside , as it is done when performing an electro - encephalograph , we measure the average response of this network .we will see that this value can be at rest or oscillate with different frequencies depending on the input current , with white gaussian noise . the connectivity of the network is implicitly given in the structure of . for the rest of the work we will integrate this partial differential equation with the following values : , , absorbing boundaries at and for various noise intensities .our initial condition is a gaussian distribution centered close to the equilibrium point ( non firing condition ) , representing a network with all the neurons at rest .the gaussian initial condition has mean values and and variances .let us first consider the ensemble of independent neurons when the only input current is the noisy term . for low noise intensities, , after a transient time , the probability of these large excursions is almost non - existent .[ fhn_fp_a=0d_005]-left where the density is peaked on top of the stable fixed point . for bigger noise intensities , , after the transient time , there is a certain probability for a neuron to be excited , even in the absence of external forcing . 
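as a rough illustration of the numerical procedure just described , the sketch below advances a two - dimensional density under the fokker - planck equation of an fhn system with noise acting on the fast variable only . it deliberately replaces the alternating semi - implicit scheme used in the paper by a plain explicit central - difference update , which is easier to read but requires a small time step , and it uses generic textbook fhn parameters ; none of the numerical values below are the ones used in the paper , whose specific settings are not reproduced in the text .

import numpy as np

# generic textbook fhn parameters (assumptions, not the paper's values)
eps, a, b, I, D = 0.08, 0.7, 0.8, 0.0, 0.04

def drift(v, w):
    f = v - v**3 / 3.0 - w + I        # fast (voltage) equation
    g = eps * (v + a - b * w)         # slow (recovery) equation
    return f, g

# phase-space grid with absorbing boundaries (density forced to zero there)
v = np.linspace(-3.0, 3.0, 121)
w = np.linspace(-2.0, 2.0, 81)
dv, dw = v[1] - v[0], w[1] - w[0]
V, W = np.meshgrid(v, w, indexing="ij")
F, G = drift(V, W)

# gaussian initial density centred near the rest state of this parametrization
rho = np.exp(-((V + 1.2) ** 2 + (W + 0.62) ** 2) / (2 * 0.2 ** 2))
rho /= rho.sum() * dv * dw

dt = 1e-4   # explicit scheme, so dt must respect a cfl-type restriction
for _ in range(20000):
    div_v = np.gradient(F * rho, dv, axis=0)
    div_w = np.gradient(G * rho, dw, axis=1)
    diff_v = np.gradient(np.gradient(rho, dv, axis=0), dv, axis=0)
    rho = rho + dt * (-div_v - div_w + D * diff_v)
    rho[0, :] = rho[-1, :] = rho[:, 0] = rho[:, -1] = 0.0   # absorbing walls
    rho = np.clip(rho, 0.0, None)   # crude guard against small negative values

mass = rho.sum() * dv * dw
print("remaining probability mass:", mass)
print("mean voltage <v>:", (V * rho).sum() * dv * dw / mass)

tracking the first moment of the density over a longer run is what produces the oscillating averages discussed in the following subsections .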
in fig .[ fhn_fp_a=0d_005]-right we show the stationary distribution for that has non - vanishing tails on the supra - threshold area .we can talk of noise induced transition in the density distribution in the sense that the stationary distribution changes with noise .we will see that this strong dependence on the noise intensity will survive if we take into account additional influences , such as external driving or feedback , and will be determinant for tuning the system response .next we study the system excited by noise for , and an external periodic force ( induced for instance by an external pacemaker ) with frequency ( period ) , and .after the transient time the density moves along the phase space plane ( over a slightly different attractor ) in a periodic way , with the period of the exciting force , see fig .[ fhn_fp_a=0.15 ] .this periodic oscillation is sustained as long as it is excited by the external periodic force .we have studied the time evolution of the 1st moment for different noise intensities when the pseudo - stationary solution is reached as in the single neuron case introduced in section [ 2 - 1 ] , we also find here the stochastic resonance phenomenon in the signal to noise ratio of the spectral analysis , see fig .[ snr_a_15_d ] . what happens when our neural network has no external driving force but instead is coupled internally via some kind of feedback term ?we will see that this network can exhibit autonomous stochastic resonance : for a certain range of noise intensities a self - organized and robust coherent oscillation appears in the absence of external periodic forcing .let us first find the simplest possible model which includes the basic features of a neural network in the brain .the current flowing into each neuron is due to the interactions with other cells and the response of the neuron is expected to depend on the sum of the active synapses ( sum of firing neurons where the dendrites are connected to ) .we simulate now this input current as the sum of two terms .the first one is proportional to the number of excited neurons in the network , and is the same for each neuron ( mean field approximation ) .this term is evaluated at time but it acts at time .we are tacitly assuming that our neural network is fully connected , and that there is some transmission delay ( synaptic communications between neurons depend on propagation of action potentials often over appreciable distances ) , where the delay corresponds to the time that this process takes before acting back on the network .the second term is stochastic , as above , and represents the deviations with respect to this global term for each neuron input .finally we have a network of neurons described as an ensemble of fhn oscillators coupled via a nonlocal feedback term .we have mentioned before that the neuron dynamics is such that , for constant input above the threshold of excitation the system oscillates whereas for input below the threshold the system is at rest .we first study this system for and a strong feedback term . in fig .[ feedbackfhn]-left we show the response of the system for different noise intensities . for low noise intensities ( )the feedback term alone is below the threshold of excitation .the response of the system is enhanced by stochastic resonance , reaching the maximal amplitude and level of synchronization for . 
beyond this noise intensitythe response to the feedback is lower but still beyond the level of excitation so that a smaller amplitude oscillation is sustained for .the maximum value of , , gives a measurement of the synchronization of the network , for the cases shown for , for and for .if the amplitude of feedback excitation is weaker , such as , the enhancement effect is even clearer , see fig .[ feedbackfhn]-right .the feedback is only able to overcome the threshold of oscillation in a narrow region of the noise intensity , where stochastic resonance is at its maximum .notice that not every single neuron is necessary oscillating with the same phase since the density distribution is diffused along the attractor , see the density plots at different times in fig .[ feedbackfhnrho ] .next we keep the noisy input constant and increase the delay time up to of the neuron cycle .we calculate the fraction of the population that is excited at a given time , shown in fig .[ u_excited_deltat ] .as we increase the delay the synchronization level decreases significatively , the global excitement of the network is lower and the feedback input is lower . because of the frequency dependence of the response of the fhn equation on the input , the frequency of oscillation decreases ( up to ) with increasing delay . following this modelwe would expect that two equal networks , strongly connected through paths with long conduction delays , would oscillate with lower frequencies and less synchrony than those connected with short conduction delays .this is indeed found in the brain , where the oscillatory activity of connected networks oscillates with lower frequencies in the case of connections that go along longer distances .for instance while local sensory integration evolves with a fast gamma ( 25 - 70 hz ) dynamics , multisensory integration evolves with an intermediate beta ( 12 - 18hz ) dynamics , and long - range integration during top - down processing evolves with a temporal dynamics in a low theta / alpha ( 4 - 12hz ) frequency range .many authors have studied the role of time - delays and nonlocal excitatory coupling in the generation of synchronous rhythm in the brain , although the detailed mechanisms behind this fact are still under investigation .if we study the system excited by noise with ,the feedback term described above with and an external periodic force with frequency ( period ) , we observe that after the transient time the density moves along the attractor in the phase space plane in a periodic way , with the period of the exciting force .we can see that two different behaviors coexist in a single network : an almost gaussian distribution is moving along the attractor trajectory , whereas a secondary maximum is stable and localized at the non - firing position .the fraction of neurons above the threshold , , oscillates now between and , the remaining is at rest whereas without the feedback term the maximum is . 
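to make the delayed mean - field feedback mechanism concrete , the following euler - maruyama sketch simulates an ensemble of fhn units whose common input is proportional to the fraction of neurons above a voltage threshold , evaluated one delay earlier . the coupling strength , delay , noise intensity , threshold and fhn parameters are illustrative assumptions rather than the values used in the paper ; the recorded fraction of excited neurons plays the role of the synchronization measure discussed above .

import numpy as np

rng = np.random.default_rng(2)
N, dt, T = 1000, 0.01, 400.0
eps, a, b = 0.08, 0.7, 0.8            # generic fhn parameters (assumed)
D, K_fb, delay, v_th = 0.03, 0.6, 1.0, 0.0   # noise, coupling, delay, threshold (assumed)

steps = int(T / dt)
lag = int(delay / dt)
v = np.full(N, -1.2) + 0.05 * rng.standard_normal(N)
w = np.full(N, -0.62) + 0.05 * rng.standard_normal(N)
frac_hist = np.zeros(steps)           # fraction of neurons above threshold

for t in range(steps):
    frac_hist[t] = np.mean(v > v_th)
    feedback = K_fb * (frac_hist[t - lag] if t >= lag else 0.0)
    dW = rng.standard_normal(N) * np.sqrt(dt)
    v = v + (v - v**3 / 3.0 - w + feedback) * dt + np.sqrt(2.0 * D) * dW
    w = w + eps * (v + a - b * w) * dt

# a crude synchronization measure: amplitude of the excited-fraction signal
tail = frac_hist[steps // 2:]
print("max fraction excited:", tail.max(), " min:", tail.min())

sweeping the noise intensity in such a simulation is the monte carlo counterpart of the fokker - planck computation : the oscillation of the excited fraction should appear only in an intermediate noise range and should degrade as the delay grows .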
in this casethe feedback term and the noise helps the network to synchronize giving a very strong response to a signal , in other words : such feedback structures helps the system in amplifying a signal .these recurrent neuronal circuits where synaptic output is fed back as part of the input stream , exist naturally in cerebral structures as amplifier structures .neural connectivity is highly recurrent ( for instance layer iv is believed to be the cortical signal amplifier of thalamic signals ) .the fokker - planck equation has been used to explore the time evolution of the probability density of a neural network excited by noise .integrating numerically this equation one can obtain a quick overview of the system dynamics , its moments and spectral density , the attractors and most probable states of the neuron ensemble .we can explore different basic mechanisms that show new emergent properties depending on the connectivity of the network and the noise intensity .an intermediate level of noise alone , combined with nonlocal excitatory interactions , can give rise to coherent , strongly synchronized , oscillations .there are several competing mechanisms : noise , dispersion , nonlinearity and nonlocal interactions ( the feedback term ) .the balanced combination of all of them leads to the formation of a global oscillatory and robust state .this dynamics also helps the system to follow any external weak excitation , such as a periodic force . if the parameters are properly chosen as much as of the neural population can be synchronized improving the sensibility of the system to weak external signals . if the delay time or if the noise is out of the adequate range the global oscillations process does not take place .m. rudolph , a. destexhe ._ correlation detection and resonance in neural systems with distributed noise sources _ physical review letters . 2001 .86 , no.16 : 3662 - 3665 . b. l. sabatini , k. svoboda ._ analysis of calcium channels in single spines using optical fluctuation analysis .vol . 408 : 589 - 593 .white , j.t .rubinstein , a.r ._ channel noise in neurons ._ trends in neuroscience3 : 131 - 137 e. simonotto , m. riani , c. seife , m. roberts , j. twitty , f. moss ._ visual perception of stochastic resonance ._ physical review letters78 , no . 6 : 1186 - 1189 .freund , j. kienert , l. schimansky - geier , b. beisner , a. neiman , d. f. russell , t. yakusheva , f. moss ._ behavioral stochastic resonance : how a noisy arm betrays its outpost .physical review e. vol .63 : 031910 - 1 - 11 k.a .richardson , t.t .imhoff , p. grigg , j.j .collins . _ using electrical noise to enhance the ability of humans to detect subthreshold mechanical cutaneous stimuli ._ chaos,1998 . 8 : 599 - 603. i. hidaka , d. nozaki , y. yamamoto ._ functional stochastic resonance in the human brain : noise induced sensitization of baroflex system . _ physical review letters . 2000 .17 : 3740 - 3743 .b. j. gluckman , p.so . _ stochastic resonance in mammalian neuronal networks .vol.8 no.3 , 588 - 598 .d. nozaki , d.j .mar , p. grigg , j.j .collins._effects of colored noise on stochastic resonance in sensory neurons ._ physical review lettersno.11 , 2402 - 2405 .r. fitzhugh ._ impulse and physiological states in models of nerve membrane _biophysics j.,1961 . vol . 1 : 445 - 466 .nagumo , s. arimoto , s. yoshizawa . _ an active pulse transmission line stimulating nerve axon . _ .ire 1962 : vol.50 : 2061 - 2070 l. gammaitoni , p. haenggi , p. jung , f. 
marchesoni ._ stochastic resonance ._ reviews of modern physics . 1998 . 223 - 287 . v. s. anishchenko , a. b. neiman , f. moss , l. schimansky - geier ._ stochastic resonance : noise - enhanced order ._ reviews of topical problems . physics - uspekhi . n1 . p7 - 36 . a. von stein , j. sarnthein ._ different frequencies for different scales of cortical integration : from local gamma to long range alpha / theta synchronization ._ international journal of psychophysiology 38 ( 2000 ) 301 - 313 .
|
the presence of noise in non linear dynamical systems can play a constructive role , increasing the degree of order and coherence or evoking improvements in the performance of the system . an example of this positive influence in a biological system is the impulse transmission in neurons and the synchronization of a neural network . integrating numerically the fokker - planck equation we show a self - induced synchronized oscillation . such an oscillatory state appears in a neural network coupled with a feedback term , when this system is excited by noise and the noise strength is within a certain range .
|
quantum theory ( qt ) is in conflict with the assumption that measurement outcomes correspond to preexisting properties that are not affected by compatible measurements .this conflict is behind the power of quantum computation and quantum secure communication , and can be experimentally tested through the violation of noncontextuality ( nc ) inequalities .these nc inequalities allow the observation of different forms of contextuality that can not be revealed through bell s inequalities : contextuality with qutrits , quantum - state - independent contextuality , contextuality needed for universal fault - tolerant quantum computation with magic states , and absolute maximal contextuality are just some examples .this variety of forms of contextuality leads to the question of how we can explore them theoretically and experimentally and , more precisely , to the following questions : ( i ) is there a systematic way to explore all forms of quantum contextuality ?( ii ) what is the simplest way to experimentally test them ?recently , there has been great progress towards solving both problems .on one hand , it has been shown that there is a one - to - one correspondence between graphs and quantum contextuality : the figure of merit of any nc inequality can be converted into a positive combination of correlations to which one can ascribe a graph , the so - called exclusivity graph of .the maximum value of for noncontextual hidden variable theories ( nchvts ) is given by a characteristic number of , the independence number .the maximum in qt ( or an upper bound to it ) is given by another characteristic number of , the lovsz number , which has the advantage of being easy to compute .more interestingly in connection to question ( i ) is that , reciprocally , for any graph there is always a quantum experiment such that its maximum for nchvts is and its tight maximum in qt is .this provides a possible approach to solve problem ( i ) , as it shows that all possible forms of quantum contextuality are encoded in graphs , so , by systematically studying these graphs , we can study all forms of quantum contextuality .in particular , by identifying graphs with specific properties , we can single out experiments with the corresponding quantum contextuality . furthermore , other characteristic numbers of are associated with properties such as whether the quantum violation is state independent .problem ( ii ) is also considered in ref . , where it is proven that orthogonal unit vectors can be assigned to adjacent vertices of , satisfying that , for a particular unit vector .the vectors provide a so - called lovsz optimum orthonormal representation of the complement of with handle .this representation shows that the maximum quantum value of can be achieved by preparing the system in the quantum state and projecting it on the different . 
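since the lovsz - optimum orthonormal representation used later for the fisher 9 graph is only given implicitly , the best - known example , the pentagon exclusivity graph of the kcbs inequality , can serve to illustrate what such a representation with a handle looks like and how it attains the lovsz number . the sketch below ( python with numpy assumed ) builds the standard umbrella vectors , checks that adjacent vertices of the pentagon receive orthogonal vectors , and verifies that projecting the handle on them gives a quantum value equal to the lovsz number and above the noncontextual bound given by the independence number .

import numpy as np

k = np.arange(5)
cos2 = np.cos(np.pi / 5) / (1 + np.cos(np.pi / 5))      # cos^2(theta) = 1/sqrt(5)
s, c = np.sqrt(1 - cos2), np.sqrt(cos2)
vecs = np.stack([s * np.cos(4 * np.pi * k / 5),
                 s * np.sin(4 * np.pi * k / 5),
                 c * np.ones(5)], axis=1)                # umbrella vectors, shape (5, 3)
psi = np.array([0.0, 0.0, 1.0])                          # the handle

# adjacent vertices of the 5-cycle must receive orthogonal vectors
edges = [(i, (i + 1) % 5) for i in range(5)]
print("max |<v_i|v_j>| on edges:",
      max(abs(vecs[i] @ vecs[j]) for i, j in edges))

# quantum value attained by projecting the handle on the five vectors
quantum_value = np.sum((vecs @ psi) ** 2)
print("sum_i |<v_i|psi>|^2 =", quantum_value, " (theta(C5) = sqrt(5) =", np.sqrt(5), ")")
print("independence number alpha(C5) = 2 < quantum value")

the analogous check for the fisher 9 representation would use its nine dimension - four vectors in place of the umbrella vectors .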
however , a nontrivial problem remains , namely , how to carry out an experiment involving only compatible measurements and revealing the contextuality given by the graph .a solution to this problem has been recently presented in ref .there , it is shown that , for any , there is always an experiment involving only compatible observables whose noncontextual and quantum limits are equal to the corresponding characteristic numbers of .the proposed solution presents an additional advantage that is connected to problem ( ii ) : it only requires testing two - point correlations .this suggests that it is possible to develop a new generation of contextuality tests , with a higher control of the experimental imperfections , and achieve conditions much closer to the ideal ones than those achieved in previous tests based on three - point correlations .the aim of this work is to show that we can combine the theoretical results of refs . , with state - of - the - art experimental techniques for preparing and measuring high - dimensional photonic quantum systems into a method capable of systematically exploring all possible forms of quantum contextuality .the method presented here is general . however , in order to show its power , we will focus on solving a particular problem : identifying and experimentally testing the simplest scenario in which the maximum quantum contextuality is larger than in any simpler previously studied scenario . for identifying it, we consider a specific measure of contextuality introduced in ref . , which is specially useful when using graphs , namely , the ratio .then , we study all graphs with a fixed number of vertices . for each , we identify the graph with the largest . for that, we benefit from the exhaustive database developed in ref . .we observe that , for ( the minimum for which quantum contextuality exists ) , the maximum of is and corresponds to a well - studied case , the maximum quantum violation of the klyachko - can - biniciolu - shumovsky inequality ( kcbs ) , which is the simplest nc inequality violated by qutrits . for and ,the maximum of is still .this shows that the maximum quantum contextuality for these values of is just a variant of the one in the kcbs inequality . for ,the maximum of is and also corresponds to a well - known case , the maximum quantum violation of the clauser - horne - shimony - holt ( chsh ) bell inequality , which has been recently reached in experiments .the fact that , by using this measure of contextuality and considering an increasing , we have recovered the two most emblematic examples of quantum contextuality confirms the interest of the problem of identifying the graphs with maximum for fixed . as grows , the number of nonisomorphic graphs grows enormously and the exhaustive study of all of them becomes increasingly difficult . to our knowledge , such a comprehensive study of and has been achieved only up to .similar explorations suggest that this approach might be feasible up to .interestingly , we have found that , for , the maximum of is already larger than the one in the chsh inequality and that higher values of do not improve this maximum substantially . for ,this maximum is and only occurs for one graph , the graph in fig .[ fig1 ] . we will call this graph `` fisher 9 , '' , since some of its properties were first pointed out in ref .this graph is also mentioned in refs .however , to our knowledge , has not been mentioned in relation with qt . 
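the search over graphs described above needs exactly two quantities per graph : the independence number and the lovsz number . a minimal sketch of both computations follows ( python ; cvxpy with one of its bundled sdp solvers is assumed for the semidefinite program ) , shown on the pentagon , where the expected values are 2 and the square root of 5 . the exhaustive enumeration over nonisomorphic graphs mentioned in the text would wrap these two functions around an external graph generator , which is not reproduced here .

import itertools
import cvxpy as cp   # assumed available; its bundled sdp solver is used by default

def independence_number(n, edges):
    # brute force, fine for the small graphs (n around 10 or less) discussed here
    for r in range(n, 0, -1):
        for subset in itertools.combinations(range(n), r):
            s = set(subset)
            if all(not (i in s and j in s) for i, j in edges):
                return r
    return 0

def lovasz_theta(n, edges):
    # theta(G) = max <J, X>  s.t.  X psd, tr X = 1, X_ij = 0 for (i,j) in E(G)
    X = cp.Variable((n, n), symmetric=True)
    cons = [X >> 0, cp.trace(X) == 1] + [X[i, j] == 0 for i, j in edges]
    prob = cp.Problem(cp.Maximize(cp.sum(X)), cons)
    prob.solve()
    return prob.value

# pentagon (kcbs): alpha = 2, theta = sqrt(5), ratio about 1.118
edges_c5 = [(i, (i + 1) % 5) for i in range(5)]
a5, t5 = independence_number(5, edges_c5), lovasz_theta(5, edges_c5)
print("C5: alpha =", a5, " theta =", round(t5, 4), " ratio =", round(t5 / a5, 4))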
to identify the minimum quantum dimension , the initial state , and the measurements needed to obtain the maximum quantum contextuality associated with , we have to find a lovsz - optimum orthonormal representation of the complement of with the smallest possible dimension .no such representation exists in hilbert spaces of dimension smaller than four .we have found one which is particularly simple in dimension four and only contains states of the canonical basis and hardy states .this representation is the following : the next step is to identify an experimentally testable nc inequality containing only correlations among compatible measurements and such that its noncontextual bound is and its maximum quantum violation is , and is achievable with the state and measuring the projectors . for that, we use the result in ref . , according to which one inequality with those properties is the following equation : where is the vertex set of , is the edge set of , and is the joint probability of obtaining outcomes and when we measure and .one can check that , if we prepare the quantum state and measure and , then .in order to test inequality ( [ main ] ) , we use the linear transverse momentum of single photons .this approach has been successfully used to produce and manipulate high - dimensional photonic quantum systems .the setup used in our experiment is depicted in fig .[ fig2 ] and exploits the idea in ref . for testing two - point correlations using quantum systems . to obtain two - point correlation probabilities needed to test inequality ( [ main ] ) ,we first perform a measurement of on a system prepared in state . if the result is , we then prepare a new system in the state and perform the measurement of .if the initial result of is , we do not need to perform , in principle , any further measurement since we just need and to test inequality ( [ main ] ) . however , the assumption of noncontextuality leading to inequality ( [ main ] ) is legitimate insofar as the statistics of the measurement outcomes are not perturbed by previous measurements , and it is then important to test that this condition is achieved in our experiment .this test requires that one also measures .thus , if the initial result of measurement is , we also prepare ( defined next ) , which is the state obtained after a projective measurement of with outcome on a system initially prepared in state .we then measure . consequently , our experimental setup consists of two parts : the state preparation ( sp ) stage and the measurement stage . at the sp stagethe single - photon regime is achieved by heavily attenuating optical pulses , which are generated with an acousto - optical modulator ( aom ) placed at the output of a continuous - wave laser operating at 690 nm .well - calibrated attenuators are used to set the average number of photons per pulse to . in this case , the probability of having non - null pulses , i.e. , of having pulses containing at least one photon , is .pulses containing only one photon are the vast majority of the non - null pulses generated and account for 92.2 of the experimental runs .the probability of multiphoton events is negligible as it is 1.2 .therefore , our source can be seen as a good approximation to a nondeterministic single - photon source , which is commonly adopted in quantum key distribution . 
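the sequential protocol just described , measuring one projector and then preparing and measuring a second one , can be mimicked numerically for an ideal representation . the sketch below does this with the pentagon vectors of the earlier example , since the dimension - four vectors of the fisher 9 representation are not reproduced above , and it assumes that the figure of merit takes the standard graph - theoretic form , the sum of the single - vertex probabilities minus the sum of the joint probabilities over the edges , with the noncontextual bound given by the independence number .

import numpy as np

cos2 = np.cos(np.pi / 5) / (1 + np.cos(np.pi / 5))
s, c = np.sqrt(1 - cos2), np.sqrt(cos2)
k = np.arange(5)
vecs = np.stack([s * np.cos(4 * np.pi * k / 5),
                 s * np.sin(4 * np.pi * k / 5),
                 c * np.ones(5)], axis=1)
psi = np.array([0.0, 0.0, 1.0])
edges = [(i, (i + 1) % 5) for i in range(5)]

def p1(i):                        # P(1|i): probability of outcome 1 on projector i
    return (vecs[i] @ psi) ** 2

def p11(i, j):                    # P(1,1|i,j): outcome 1 on i, then 1 on j
    post = (vecs[i] @ psi) * vecs[i]     # unnormalized post-measurement state
    return (vecs[j] @ post) ** 2

S = sum(p1(i) for i in range(5)) - sum(p11(i, j) for i, j in edges)
print("ideal S =", S, " noncontextual bound alpha =", 2, " quantum max theta =", np.sqrt(5))

for an ideal representation the joint probabilities on the edges vanish , so the figure of merit saturates the lovsz number .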
in order to prepare the -dimensional quantum states we employ the linear transverse momentum of single photons .the generated photons are sent through diffractive apertures addressed in spatial light modulators ( slms ) , and the four - dimensional state required in the experiment is defined by addressing four parallel slits in the slms for the photon transmission .all slits in each modulator have the same physical dimension , that is , each has a width of 96 m and an equal center - to - center separation . in this case , the state of the transmitted photons is given by represents the state of a photon transmitted by the slit . ( ) is the transmissivity ( phase ) defined for each slit and the normalization constant .two slms are used at each stage . in the sp stagethe first slm controls the real part of the coefficients of the generated states , while the second slm their phases . sets of lenses are employed to ensure that each slm is placed on the image plane of the next one . in the measurement stagethe state projection is performed using a second pair of slms and a pointlike avalanche photo - detector ( apd ) .after the last modulator , the attenuated laser beam is focused at the detection plane .the pointlike detector is constructed with a small circular pinhole ( 10 m diameter ) , followed by a silicon single - photon avalanche photodetector ( apd ) , which is then positioned at the center of the interference pattern . in this configurationthe detection probability is proportional to , where is the state at the measurement stage ( see ref . for details ) .to properly determine the experimental probabilities required to test inequality ( [ main ] ) we have extended into a larger graph in which every vertex belongs to exactly one clique of size four ( i.e. , an orthogonal basis ) . the extended graph is shown in fig . [ fig ] , and the new vertices correspond to the following states : notice that these states allow us to measure each observable using always the same orthogonal basis independently of the context .for instance , the probabilities and are calculated from the experimental data as follows : where is the number of counts corresponding to outcome and is an orthogonal basis .as we mentioned above , in the sp stage we prepare the state and then project it on at the measurement stage .if the outcome is 1 , we prepare the state and now the measured projector is . on the other hand , if the output is 0 , the state is prepared and projected on .the states are defined by ^{\frac{1}{2}}},\ ] ] where denotes the four - dimensional identity matrix .more specifically , in our experiment these states are given by during the measurement procedure , the sp and measurement stages run in an automated fashion , controlled and synchronized by the two fpga electronic modules at a rate of 30 hz .one module is placed at the sp stage while the other one controls the measurement stage , reads the apd output and sends the results to a personal computer for further processing . from the recorded data ,we extract the probabilities to test the violation of inequality ( [ main ] ) and also to verify that there is no signaling between the first measurement associated to and the second measurement associated to .the experimental value of is depicted in fig .[ fig3 ] and shows a violation of inequality ( [ main ] ) by over 30 standard deviations .the maximum quantum value of is not reached due to intrinsic experimental misalignments and the detector s dark counts that produce , e.g. 
, nonzero values for the probabilities .nevertheless , notice that our experimental value corresponds to a degree of contextuality that surpasses the maximum attainable through the quantum violation of the kcbs or chsh inequalities . in order to test that there is no signaling between measurements and , we have used to measure how the first measurement , , affects the statistics of the second measurement , .our purpose is to certify that , for all and in inequality ( [ main ] ) , the experimental values of and are compatible with zero and have the same error as the experimental quantities which , according to causality , are zero , but whose error gives the experimental precision with which we can determine a zero within our experiment .the idea of this approach is to show that in our work there is the same signaling between past and future measurements than between future and past measurements .if we assume that the latter is zero , from the obtained results we can conclude that the experimental data are compatible with the assumption that the former is zero . to obtain , and we use that the experimental values for , , , and for all pairs of measurements used to test inequality ( [ main ] ) are shown in fig .they show that in our experiment the influences of the first measurements on the second ones are negligible .being so fundamental for quantum theory , quantum computation , and quantum secure communication , it is surprising how little effort has been made to experimentally investigate quantum contextuality beyond bell s inequalities . herewe have demonstrated a tool for exploring , theoretically and experimentally , quantum contextuality in all its forms .we have described all the steps of a method to , first , identify interesting forms of contextuality and , then , to design and perform precise experiments to reveal them .our approach is universal and can be applied to study any form of quantum contextuality . in particular, it opens the possibility of experimentally testing the contextuality needed for quantum computation .in addition , we have shown that the approach is useful in itself , since it is capable of revealing interesting cases unnoticed before .its only limitations are our ability to explore large graphs or perform experiments requiring a large number of two - point correlations .moreover , we have seen that this approach leads to photonic tests allowing a better control of the imperfections and higher - quality results ( closer to the predictions of quantum theory under ideal conditions ) than previous experiments .in particular , we have verified that it allows for experiments in which the signaling between past and future measurements is negligible . in summary , although there is still work to be done for closing loopholes and improving the analysis of the experimental data , our results indicate that we already have powerful tools for exploring a fundamental part of quantum theory .we thank a. j. lpez - tarrida and j. r. portillo for discussions .this work was supported by fondecyt grants no . 1160400 , no .11150324 , no .11150325 , and no. 1150101 , milenio grant no .rc130001 , pia - conicyt grant no .pfb0824 , the fqxi large grant project `` the nature of information in sequential quantum measurements , '' projectfis2014 - 60843-p `` advanced quantum information '' ( mineco , spain ) with feder funds , and the project `` photonic quantum information '' ( knut and alice wallenberg foundation , sweden ) . j.c . 
and j.f.b. acknowledge the support of conicyt . thanks the cefop for its hospitality . e. p. specker , dialectica * 14 * , 239 ( 1960 ) , http://dx.doi.org/10.1111/j.1746-8361.1960.tb00422.x ( english translation : http://arxiv.org/abs/1103.4537 ) . j. s. bell , rev . mod . phys . * 38 * , 447 ( 1966 ) , http://dx.doi.org/10.1103/revmodphys.38.447 . m. arias , g. cañas , e. s. gómez , j. f. barra , g. b. xavier , g. lima , v. d'ambrosio , f. baccari , f. sciarrino , and a. cabello , phys . rev . a * 92 * , 032126 ( 2015 ) , http://dx.doi.org/10.1103/physreva.92.032126 . r. r. rubalcaba , fractional domination , fractional packings , and fractional isomorphism of graphs , ph.d. thesis , auburn university , al , 2005 , fig . e. r. scheinerman and d. h. ullman , _ fractional graph theory _ ( dover , new york , 2013 ) , fig . 7.2 . l. hardy , phys . rev . lett . * 71 * , 1665 ( 1993 ) , http://dx.doi.org/10.1103/physrevlett.71.1665 .
|
we report a method that exploits a connection between quantum contextuality and graph theory to reveal any form of quantum contextuality in high - precision experiments . we use this technique to identify a graph which corresponds to an extreme form of quantum contextuality unnoticed before and test it using high - dimensional quantum states encoded in the linear transverse momentum of single photons . our results open the door to the experimental exploration of quantum contextuality in all its forms , including those needed for quantum computation .
|
independent subspace analysis ( isa ) , also known as multidimensional independent component analysis , is a generalization of independent component analysis ( ica ) .isa assumes that certain sources depend on each other , but the dependent groups of sources are still independent of each other , i.e. , the independent groups are multidimensional .the isa task has been subject of extensive research . in this case, one assumes that the hidden sources are independent and identically distributed ( i.i.d . ) in time .temporal independence is , however , a gross oversimplification of real sources including acoustic or biomedical data .one may try to overcome this problem , by assuming that hidden processes are , e.g. , autoregressive ( ar ) processes .then we arrive to the ar independent process analysis ( ar - ipa ) task .another method to weaken the i.i.d .assumption is to assume moving averaging ( ma ) .this direction is called blind source deconvolution ( bsd ) , in this case the observation is a temporal mixture of the i.i.d . components .the ar and ma models can be generalized and one may assume arma sources instead of i.i.d . ones .as an additional step , the method can be extended to non - stationary integrated arma ( arima ) processes , which are important , e.g. , for modelling economic processes . in this paper, we formulate the ar- , ma- , arma- , arima - ipa generalization of the isa tasks , when ( i ) one allows for multidimensional hidden components and ( ii ) the dimensions of the hidden processes are not known .we show that in the undercomplete case , when the number of ` sensors ' is larger than the number of ` sources ' , these tasks can be reduced to the isa task .the isa task can be formalized as follows : \in{\ensuremath{\mathbb{r}}}^{d_e } \label{eq : e_concat}\end{aligned}\ ] ] and is a vector concatenated of components .the total dimension of the components is .we assume that for a given , is i.i.d . in time , and sources jointly independent , i.e. , , where denotes the mutual information ( mi ) of the arguments .the dimension of observation is .assume that , and is of full column rank .under these conditions , one may assume without any loss of generality that both the observed ( ) and the hidden ( ) signals are white .for example , one may apply principal component analysis ( pca ) as a preprocessing stage .then the ambiguities of the isa task are as follows : sources can be determined up to permutation and up to orthogonal transformations within the subspaces .we are to uncover the independent subspaces .our task is to find a matrix such that , ] , with the condition that components are independent . here, ( i ) denotes the coordinate of the estimated subspace , and ( ii ) can be chosen to be orthogonal because of the whitening assumption .this task can be solved by means of cost function that aims to minimize the mutual information between components : one can rewrite as follows : the first term of the r.h.s . is the ica cost function ; it aims to minimize mutual information for all coordinates .the other term is a kind of _ anti - ica _ term ; it aims to maximize mutual information within the subspaces .one may try to apply a heuristics and to optimize in order : ( 1 ) start by any infomax ica algorithm and minimize the first term of the r.h.s . in .( 2 ) apply only permutations to the coordinates such that they optimize the second term . in this second stepcoordinates are not changed , but may decrease further . 
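a minimal sketch of the two-step heuristic just described : plain ica first , then a grouping of the recovered coordinates so that dependence is large within groups and small between them . the authors' own implementation ( described below ) estimates pairwise mutual information with kernel canonical correlation analysis and clusters with an ncut variant ; the stand-ins used here , correlation of squared coordinates and off-the-shelf spectral clustering , are substitutions made for this sketch , not the authors' choices .

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import SpectralClustering

def isa_by_clustering(x, n_components, n_subspaces, random_state=0):
    """x: observations with shape (n_samples, n_features)."""
    # step 1: ordinary ica ("separation theorem" heuristic)
    ica = FastICA(n_components=n_components, random_state=random_state)
    s = ica.fit_transform(x)                   # estimated coordinates

    # step 2: pairwise dependence between coordinates; absolute correlation of
    # the squared coordinates is used as a cheap dependence proxy
    dep = np.abs(np.corrcoef((s ** 2).T))
    np.fill_diagonal(dep, 0.0)

    # group the coordinates so that dependence is large within groups
    labels = SpectralClustering(n_clusters=n_subspaces,
                                affinity="precomputed",
                                random_state=random_state).fit_predict(dep)
    return s, labels
```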
surprisingly , this heuristics leads to the global minimum of in many cases .in other words , in many cases , ica that minimizes the first term of the r.h.s .of solves the isa task apart from the grouping of the coordinates into subspaces .this feature was observed by cardoso , first .the extent of this feature is still an open issue .nonetheless , we call it ` _ _ separation theorem _ _ ' , because for elliptically symmetric sources and for some other distribution types one can prove that it is rigorously true .( see also , the result concerning local minimum points ) .although there is no proof for general sources as of yet , a number of algorithms applies this heuristics with success .another issue concerns the computation of the second term of .if the dimensions of subspaces are known then one might rely on multi - dimensional entropy estimations , but these are computationally expensive .other methods deal with implicit or explicit pair - wise dependency estimations .interestingly , if the observations are indeed from an ica generative model , then the minimization of the pair - wise dependencies is sufficient to get the solution of the ica task according to the darmois - skitovich theorem .this is not the case for the isa task , however .there are isa tasks , where the estimation of pair - wise dependencies is insufficient for recovering the hidden subspaces .nonetheless , such algorithms seem to work nicely in many practical cases .a further complication arises if the dimensions of subspaces are not known .then the dimension of the entropy estimation becomes uncertain .methods that try to apply pair - wise dependencies were proposed to this task .one can find a block - diagonalization method in , whereas makes use of kernel estimations of the mutual information .here we shall assume that the separation theorem is satisfied .we shall apply ica preprocessing .this step will be followed by the estimation of the pair - wise mutual information of the ica coordinates .these quantities will be considered as the weights of a weighted graph , the vertices of the graph being the ica coordinates .we shall search for clusters of this graph . in our numerical studies, we make use of kernel canonical correlation analysis for the mi estimation .a variant of the ncut algorithm is applied for clustering . as a result, the mutual information within ( between ) cluster(s ) becomes large ( small ) .the problem is that this isa method requires i.i.d .hidden sources .below , we show how to generalize the isa task to more realistic sources .finally , we solve this more general problem when the dimension of the subspaces is not known .we need the following notations : let stand for the time - shift operation , that is .the n order polynomials of matrices are denoted as .let :=({\mathbf}{i}-{\mathbf}{i}z)^r ] , , and :={\mathbf}{i}_{d_s}-\sum_{i=1}^{p}{\mathbf}{p}_{i}z^{i}\in { \ensuremath{\mathbb{r}}}[z]_{p}^{d_s\times d_s} ] is stable , that is \neq 0) ] , where =\sum_{j=0}^{q}{\mathbf}{q}_j z^j \in { \ensuremath{\mathbb{r}}}[z]_{q}^{d_x\times d_e} ] . here\in { \ensuremath{\mathbb{r}}}[z]_{p}^{d_s\times d_s} ] .we assumed that ] is stable , ( ii ) the mixing matrix is of full column rank , and ( iii ) ] .that under mild conditions ] is drawn from a continuous distribution . 
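the reduction route spelled out in the next paragraphs and in the pseudocode table can be sketched in a few lines : r-fold differencing removes the integration , a least-squares var fit estimates the driving innovations , and pca undoes the undercomplete mixing , after which any isa routine ( such as the one sketched earlier ) applies . this is only a rough reading of the procedure , not the authors' code ; the differencing order , ar order and source dimension are placeholders .

```python
import numpy as np
from sklearn.decomposition import PCA

def arima_ipa_reduction(x, r, p, d_source):
    """x: observations of shape (T, d_obs); r: differencing order,
    p: assumed ar order, d_source: assumed total source dimension."""
    # 1. r-th order differencing turns the arima observation into an arma one
    z = np.diff(x, n=r, axis=0)

    # 2. least-squares var(p) fit; the residuals estimate the driving innovations
    T = z.shape[0]
    past = np.hstack([z[p - k - 1:T - k - 1] for k in range(p)])
    target = z[p:]
    coef, *_ = np.linalg.lstsq(past, target, rcond=None)
    innovations = target - past @ coef

    # 3. pca reduces the undercomplete mixture to a complete, whitened one
    reduced = PCA(n_components=d_source, whiten=True).fit_transform(innovations)
    return reduced   # now a standard isa problem
```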
]the route of the solution is elaborated here .let us note that differentiating the observation of the arima - ipa task in eq .in order , and making use of the relation , the following holds : {\mathbf}{x}={\mathbf}{a}\left(\nabla^r[z]{\mathbf}{s}\right ) , \text { and } { \mathbf}{p}[z]\left(\nabla^r[z]{\mathbf}{s}\right)&={\mathbf}{q}[z]{\mathbf}{e}.\label{eq : obs2}\end{aligned}\ ] ] that is taking {\mathbf}{x} ] , where ] .steps of the proof : 1 . in the uarma - ipa task the following equations hold : {\mathbf}{s}&={\mathbf}{q}[z]{\mathbf}{e } , \label{eq : hidden1b}\\ { \mathbf}{x}&={\mathbf}{a}{\mathbf}{s } , \end{aligned}\ ] ] or equivalently non - degenerate linear transformation of an arma process is also arma .thus , observation process is an arma process . formally : substituting of eq . into eq . and then using the pseudoinverse of matrix and expression that follows from eq ., we have process is i.i.d , so the process is arma .we assumed that ] and in the uarma - ipa task implies that is of full column rank and thus the resulting isa task is well defined .] , which can be reduced to a complete isa ( ) using pca .finally , the solution can be finished by any isa procedure .the reduction procedure implies that hidden components can be recovered only up to the ambiguities of the isa task : components of ( identical dimensions ) can be recovered only up to permutations . within each subspaces, unambiguity is warranted only up to orthogonal transformations .the steps of our algorithm are summarized in table [ tab : pseudocode ] ..pseudocode of the undercomplete arima - ipa algorithm [ cols= " < " , ]in this section we demonstrate the theoretical results by numerical simulations .we created a database for the demonstration : hidden sources are 4 pieces of 2d , 3 pieces of 3d , 2 pieces of 4d and 1 piece of 5d stochastic variables , i.e. , .these stochastic variables are independent , but the coordinates of each stochastic variable depend on each other .they form a 30 dimensional space together . for the sake of illustration ,3d ( 2d ) sources emit random samples of uniform distributions defined on different 3d geometrical forms ( letters of the alphabet ) .the distributions are depicted in fig .[ fig : datbase3d ] ( fig .[ fig : datbase2d ] ) .30,000 samples were drawn from the sources and they were used to drive an arima(2,1,6 ) process defined by .matrix was randomly generated and orthogonal .we also generated polynomial \in { \ensuremath{\mathbb{r}}}[z]_5^{60 \times 30} ] randomly .the visualization of the 60 dimensional process is hard to illustrate : a typical 3d projection is shown in fig .[ fig : obs3d ] .the task is to estimate original sources using these non - stationary observations .-order differencing of the observed arima process gives rise to an arma process .typical 3d projection of this arma process is shown fig .[ fig : diff_obs3d ] .now , one can execute the other steps of table [ tab : pseudocode ] and these steps provide the estimations of the hidden components .estimations of the 3d ( 2d ) components are provided in fig .[ fig : est3d ] ( fig .[ fig : est2d ] ) . in the ideal case ,the product of matrix and the matrices provided by pca and isa , i.e. , is a block permutation matrix made of blocks .this is shown in fig .[ fig : hinton ] .+ we have generated another database using the facegen animation software . 
in our databasewe had 800 different front view faces with the 6 basic facial expressions .we had thus 4,800 images in total .all images were sized to pixel .figure [ fig : facedatabase ] shows samples of the database .a large matrix was compiled ; rows of this matrix were 1600 dimensional vectors formed by the pixel values of the individual images .the _ columns _ of this matrix were considered as mixed signals .this treatment replicates the experiments in : bartlett et al . ,have shown that in such cases , undercomplete ica finds components resembling to what humans consider facial components .we were interested in seeing the components grouped by undercomplete isa algorithm .the observed 4800 dimensional signals were compressed by pca to dimensions and we searched for 4 pieces of isa subspaces using the algorithm detailed in table [ tab : pseudocode ] .the 4 subspaces that our algorithm found are shown in fig .[ fig : kernelmi ] .as it can be seen , the 4 subspaces embrace facial components which correspond mostly to mouth , eye brushes , facial profiles , and eyes , respectively .we have extended the isa task to problems where the hidden components can be ar , ma , arma , or arima processes .we showed an algorithm that can identify the hidden subspaces under certain conditions .the algorithm does not require previous knowledge about the dimensions of the subspaces .the working of the algorithm was demonstrated on an artificially generated arima process , as well as on a database of facial expressions .z. szab , b. pczos , and a. lrincz .separation theorem for -independent subspace analysis with sufficient conditions . technical report , etvs lornd university , budapest , 2006 .
|
recently , several algorithms have been proposed for independent subspace analysis where hidden variables are i.i.d . processes . we show that these methods can be extended to certain ar , ma , arma and arima tasks . central to our paper is that we introduce a cascade of algorithms , which aims to solve these tasks without previous knowledge about the number and the dimensions of the hidden processes . our claim is supported by numerical simulations . as a particular application , we search for subspaces of facial components .
|
a dynamic network of many nodes connected by weighted communication lines models a great variety of situations in physics , biology , chemistry and sociology , and it has a wide range of technological applications as well ; see , for instance , cnn1,marrob , newmannets , weight1,cp2005,nets05,weight2 . examples of weighted networks are the metabolic and food webs , which connect chains of different intensity , the internet , the world wide web and other social networks , in which agents may interchange different amounts of information or money , the transport nets , whose connections differ in capacity , number of transits and/or passengers , spin glasses and reaction diffusion systems , in which diffusion , local rearrangements and reactions may vary the effective ionic interactions , and the immune system , the central nervous system and the brain , e.g. , high level functions in the latter case seem to rely on synaptic changes .a rather general feature in these systems is that the nodes are not fully synchronized when performing a given task which may be either a matter of economy or else , perhaps more frequently , a necessary condition for efficient performance .even though this is as evident as the fact that links are seldom homogeneous , studies of partly synchronized networks are rare . furthermore , the relevant literature is dispersed , as it was generated in various distant fields , and a broad coherent description is lacking .in particular , related studies often disregard an important general property , namely , that the systems of interest are out of equilibrium .that is , they can not settle down into an equilibrium state but the network typically keeps wandering in a complex space of fixed points or , in one of the simplest cases , it reaches a _nonequilibrium _ steady state whose nature depends on dynamic details .this results in a complex landscape of emergent properties whose relation with the network details is poorly understood . 
in this paper , as a new effort aimed at methodizing somewhat the picture , we present some related exact results , together with illustrative monte carlo simulations , which apply to a rather general class of partly synchronized heterogeneous or weighted networks .it follows , as a first application , examples of itinerancy and constructive chaos which mimic recent experimental observations .consider a network , with a processor , neuron , spin or , simply , variable at each node , and define the sets of node activities , and communication line weights , where each node is acted on by a local field which is induced by the weighted action of the other , nodes .we also define an additional , operational set of binary indexes , time evolution proceeds according to a generalized cellular automaton strategy .that is , at each time unit , one simultaneously modifies the activity of variables , and the probability of the network state evolves in discrete time , according to the ( microscopic ) transition rate: , ] which is rather customary as a case that satisfies detailed balance .notice , however , that , in general , detailed balance is not fulfilled by our basic equation ( [ meq ] ) nor by the superposition ( [ rate ] ) as far as consequently , in general , our system can not be described by gibbs ensemble theory .we shall further assume that the fields satisfy .\label{hi}\]]we are assuming here a set of given _ patterns _ , namely , different realizations of the network set of activities , to be denoted as with and where the product measures the _ overlap _ of the current state with pattern for and finite i.e. , in the limit from ( [ meq])([hi ] ) , the mesoscopic time evolution equation \right\ } + \left ( 1-\rho \right ) \pi _ { t}^{\mu } \left ( \mathbf{\sigma } \right ) \label{mt}\]]follows for any the details of the derivation , as well as some possible generalizations of this result , will be published elsewhere cortesnew .one of the simplest realizations of the above is the hopfield network hopf1,hopf2,perettob . in this case , the communication line weights are heterogeneous but fixed according to the hebb ( _ learning _ ) prescription and the local fields are these choices satisfy condition ( [ hi ] ) which also holds for other non linear _ learning _ rules ; in any case , one may easily generalize ( [ mt ] ) to include other interesting cases ( see , for instance , ) which do not precisely conform to ( [ hi ] ) .the original hopfield model evolves by glauber processes , namely , by attempting a single variable change , at each unit time e.g ., the monte carlo step with probability the symmetry and detailed balance then guarantee asymptotic convergence to _ equilibrium _ , i.e. , computational efficiency has sometimes motivated to induce time evolution of the hopfield network by the little strategy , i.e. , in our formulation .this is known to drive the system to a full _ nonequilibrium _ situation , in general . the local rule and other details of dynamicsare then essential in determining the emergent behavior .not only the time evolution may vary but also the nature of the resulting asymptotic state , perhaps including morphology , phase diagram , universality class , etc . ; see refs. for some outstanding examples of this assertion . for completeness, we mention that the hopfield network , i.e. 
, will also correspond , in general , to a nonequilibrium situation when implemented ( unlike in the original proposal ) with asymmetric or time evolving weights or with a dynamic rule lacking detailed balance .there is some chance that an _ effective hamiltonian _ _ _ , _ _ such that can then be defined , however .when this is the case , one may often apply equilibrium methods , with the result of relatively simple emergent properties torresjpa , cortesnc .concerning our proposal ( [ mt ] ) , we first mention that , assuming fields that conform to the hebb prescription with static weights , the hopfield property of _ associative memory _ is recovered for as expected . that is , for high enough ( which means below certain stochasticity ) the patterns may be attractors of dynamics .consequently , an initial state resembling one of the patterns , e.g. , a degraded picture will converge towards the original one , which mimics simple recognition .we checked too that , in agreement with some previous indications , implementing the hopfield hebb network with produces the behavior that characterizes the familiar case including associative memory , even though equilibrium is precluded , e.g. , in general , no effective hamiltonian is predicted to exist for any . excluding these hopfield hebb versions , our model exhibits a complex behavior which depends dramatically on the value of this is a consequence of changes with in the stability associated with ( [ mt ] ) , as we show next .the local fields may be determined according to various criteria , depending on the specific application of interest .that is , one may investigate the consequences of equation ( [ mt ] ) and associated stability for different relations between the fields and the weights and between these and other network properties .we shall be mostly concerned in the rest of this paper with a specific _ neural automaton _ as a working example . in this case, the above assumption of static line weights happens to be rather unrealistic . as a matter of fact ,one is eager to admit , concerning different contexts , that the communication line weights may change with the nodes activity , and even that they may loose some competence after a time interval of heavy work .this seems confirmed in the case of the brain where the transmission of information and many computations are strongly correlated with activity induced fast fluctuations of the synaptic intensities , namely , our s . furthermore , assuming the experimental observation that synaptic changes may induce _ depression _depre0 seems to have important consequences .that is , a repeated presynaptic activation may decrease the neurotransmitter release , which will depress the postsynaptic response and , in turn , affect noticeably the system behavior . for concreteness , motivated by these facts , we shall adopt here the proposal in refs. 
.this amounts to assume a simple generalization of the hebb prescription which is in accordance with condition ( [ hi ] ) , namely, n^{-1}\sum_{\mu = 1}^{m}\xi _ { i}^{\mu } \xi _ { j}^{\mu } , \label{wnew}\]]where depends on the set of stored patterns .the hopfield case discussed above is recovered for while other values of this parameter correspond to fluctuations which induce depression of synapses by a factor on the average .the choice ( [ wnew ] ) happens to importantly modify the network behavior , even for a single _ stored _ pattern , the stationary , solution of ( [ mt ] ) is then with \right\ } + \left ( 1-\rho \right ) \pi , \label{efe}\]]and local stability requires that the fixed point is therefore \right\ } , $ ] independent of while stability crucially depends on the limiting condition corresponds to a steady state bifurcation .this implies for that independent of both and non trivial solutions in this case require that which includes the hopfield case .the other limiting condition corresponds to a period doubling bifurcation .it follows from this that local stability requires with -\beta + 1\right\ } ^{-1}. \label{pc}\]]it is to be remarked that this condition can not be fulfilled in the hopfield , case , for which one obtains from ( [ pc ] ) the nonsense solution the resulting behavior is illustrated in figure [ figure1 ] .this shows , for the onset of chaos at in the saddle point map ( [ mt ] ) and , accurately fitting this , in monte carlo simulations .the behavior shown in the top graph of figure [ figure1 ] , which is for is likely to characterize any as well .this behavior does not occur for the singular hopfield case with static synapses , for which the stability of is independent of the bottom graph in figure [ figure1 ] illustrates that one has for regimes of regular oscillations among the attractors ( i.e. , the given pattern and its negative in this case with which are eventually interrupted as one varies even slightly , by eventual chaotic jumping .the critical value of the synchronization parameter for the emergence of chaos , due to local instabilities around the steady solution may be estimated in figure [ figure1 ] as for this is precisely the value that one obtains from equation ( [ pc ] ) for , and figure [ figure2 ] confirms that this behavior occurs also for stored patterns , and figure [ figure3 ] shows the detail during the _ stationary _ part of typical runs for representative values of in particular , figure [ figure3 ] illustrates qualitatively different types of time series our model exhibits , namely , from bottom to top : ( _ i _ ) convergence towards one of the attractorsin fact , to one of the _ antipatterns_ for ( _ ii _ ) chaos , i.e. 
, fully irregular behavior with a positive lyapunov exponent for ( _ iii _ ) a perfectly regular oscillation between one of the attractors and its negative for ( _ iv _ ) onset of chaotic oscillations as is increased somewhat ; and ( _ v _ ) very rapid and completely ordered and periodic oscillations between one pattern and its antipattern when all the nodes evolve synchronized with each other at each time step .the cases ( _ ii _ ) and , less markedly , ( _ iv _ ) are nice examples of instability induced switching phenomena .that is , as suggested also in experiments on biological systems ( see next section ) , the network describes heteroclinic paths among the stored patterns , remaining for different time intervals in the neighborhood of different attractors , the choice of attractor being at random .in summary , we report in this paper on a class of homogeneous or weighted networks in which the density , or the number of variables that are synchronously updated at a time may be varied .this is remotely related to _dynamics , _ _ block sequential , _ and associated algorithms , which aim at more efficient computations , and it generalizes some previous proposals rosspra , herzpre , jcortes .we describe in detail the behavior of a particular realization of the class , namely , a _ neural automaton _ which is motivated by recent neurobiological experiments and related theoretical analysis . different realizations of the class correspond to different choices of the local fields that act upon the stochastic variables at the ( neural ) nodes . for certain values of these fields , which amount to fix the ( synaptic ) connections at some constant values , e.g. , according to the hebb prescription ,one recovers the equilibrium hopfield network .the parameter is then irrelevant concerning most of the system properties .our model also admits simple extensions , corresponding to other values of the local fields , that one may characterize by a complex _ effective temperature _ .the hopfield picture has severe limitations concerning its practical usefulness , and some of these limitations may be overcome by producing a full nonequilibrium condition .it is sensible to expect that will then transform into a relevant parameter .this is , in fact , the situation in which deals with a modification of the hebb prescription which includes multiple interactions , random dilution and a gaussian noise .this implies a choice for the fields that even precludes the existence of an effective temperature , and chaotic neural activity ensues . 
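a bare-bones simulation in the spirit of the neural automaton discussed here : a binary network with hebbian couplings , a fraction rho of units updated synchronously per step , and an activity-dependent depression of the local fields . the depression factor used below is a simplified stand-in ( the precise prescription of eq. ( wnew ) is not reproduced above ) , so the sketch only illustrates the kind of regimes described , not the exact phase diagram .

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 400, 1                         # neurons, stored patterns
beta, rho, gamma = 20.0, 0.95, 1.2    # inverse noise, synchronized fraction, depression strength
xi = rng.choice([-1, 1], size=(M, N)) # stored pattern(s)
s = xi[0].copy()                      # start at the first pattern

def local_fields(state):
    m = xi @ state / N                          # overlaps with the stored patterns
    depressed = (1.0 - gamma * np.abs(m)) * m   # simplified activity-dependent depression
    return depressed @ xi                       # hebbian field built from depressed overlaps

overlaps = []
for _ in range(500):
    h = local_fields(s)
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))  # glauber-like single-unit rule
    chosen = rng.random(N) < rho                  # only a fraction rho is updated this step
    proposed = np.where(rng.random(N) < p_up, 1, -1)
    s = np.where(chosen, proposed, s)
    overlaps.append(float(xi[0] @ s) / N)

# depending on rho, beta and gamma the overlap settles on the pattern, oscillates
# between the pattern and its negative, or hops irregularly between them
print(overlaps[-20:])
```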
our biologically motivated choice , namely , hopfield local fields with the simple prescription ( [ wnew ] ) , which is interpreted as a consequence of depressing synaptic fluctuations , induces a full nonequilibrium condition even for in this limit, the system was recently shown to exhibit enhancement of the network sensitivity to external stimuli .this happens to be a feature of the system for any this interesting behavior , which we illustrate in figure [ figure4 ] , corresponds to a type of instability which is known to occur in nature , as discussed below .this is associated here to a modification of the topology of the space of fixed points due to the action of the involved noise .figure [ figure4 ] is a model remake of experiments on the odor response of the ( projection ) neurons in the locust antennal lobe .our simulation illustrates two time series ( with different colors ) for the mean firing rate , in a system with six stored patterns which is exposed to two different stimuli of the same intensity and duration ( between 3000 and 4000 time steps each step corresponding here to trials ) .each pattern consists of a string of binary variables ; three of them are generated at random with , respectively , 40% , 50% and 60% of the variables set equal to 1 ( the rest are set equal to while the other three have the 1s at the first 70% , 50% and 20% positions in the string , respectively .the bottom graph shows with horizontal lines the baseline activity without stimulus ( bs ) and the network activity level in the presence of the stimulus ( sa1 ) and ( sa2 ) , which correspond to two of the random stored patterns .the conclusion is that the stimulus destabilizes the system activity as in the laboratory experiments .note , however , that this occurs for i.e. , in the absence of chaotic behavior . as a matter of fact, the behavior in figure [ figure4 ] will also be exhibited in the limit which does not show any irregular behavior .on the other hand , the suggestion that fluctuating connections may induce fractal or _strange _ attractors as far as jcortes , marropre is confirmed here . 
as illustrated in figure [ figure3 ] , our system may exhibit both static and kind of dynamic associative memory in this case .that is , the network state either will go to one of the attractors ( corresponding to one of the given patterns _ stored _ in the connecting synapses ) or else , for will forever remain visiting several or perhaps all the possible attractors .furthermore , the inspection rounds may abruptly become irregular and even chaotic as the density of synchronized neurons varies slightly .it follows , in particular , that the most interesting , oscillatory behavior requires synchronization of a minimum of variables , and also that occurring chaotic jumps between attractors requires some careful tuning of in fact , as illustrated in the bottom graph of figure [ figure1 ] , once the critical condition is fulfilled a complex situation ensues where it seems difficult to predict the resulting behavior for slight variations of the chaotic behavior is further illustrated in figure [ figure5 ] .this shows trajectories among the three stored patterns , namely , and these are designed , respectively , as a homogeneous string of 1s , a string with the first 50% positions set to 1 and the rest to and as a string with only the first 20% positions set to 1 .we observe many jumps between two close ( more correlated ) patterns and , eventually , a jump to the most distant pattern .it seems sensible to comment on this behavior at the light of the growing evidence of chaotic behavior in the nervous system .we have shown ( e.g. , figure [ figure4 ] ) that chaos is not needed to have efficient adaptation to a changing environment .however , one may argue caos1 , for instance , that the instability inherent to chaotic motions facilitates this and , in particular , the system ability to move to any pattern at any time .this behavior , which has been described for the activity of the olfactory bulb and other cases , is nicely illustrated in figures [ figure3 ] and [ figure5 ] .our network thus mimics the observed correlation between chaotic neuron activity and states of attention in the brain , as well as other cases of constructive chaos in biology attent1,attent2,attent2bis , attent3 .chaos has been reported in other interesting networks ( e.g. , ref .domin ) but , to our knowledge , never in such a general setting as here . as a matter of fact, the present model allows for some natural generalizations and , in particular , suggests a great interest for a more detailed study of the apparently unpredictable behavior it exhibits for this teaches us that varying is a simple method to control chaos in networks , and that this may also help in determining efficient computation strategies . concerning the latter ,the model behavior may be relevant , for instance , when judging on the best procedure for specific data mining and for the control of different activities on a multiprocessor system , and deciding on whether to implement sequential or parallel programming in some extreme cases .our findings here may also help one in interpreting recent experimental evidence of parallel processing in laminar neocortex microcircuits .that is , a comparison between the model behavior and experimental results may shed light on the dynamics of these circuits and their mutual interactions .we thank i. erchova , p.l . 
garrido and h.j .kappen for very useful comments .this work was financed by _ feder _ , _ meyc _ and _ ja _ under projects fis2005 - 00791 and fqm165 .jmc also acknowledges financial support from the epsrc - funded colamn project ref .ep / co 10841/1 .
|
we present exact results , as well as some illustrative monte carlo simulations , concerning a stochastic network with weighted connections in which the fraction of nodes that are dynamically synchronized , , $ ] is a parameter . this allows one to describe from single node kinetics to simultaneous updating of all the variables at each time unit an example of the former limit is the well known sequential updating of spins in kinetic magnetic models whereas the latter limit is common for updating complex cellular automata . the emergent behavior changes dramatically as is varied . for small values of we observe relaxation towards one of the attractors and a great sensibility to external stimuli and , for itinerancy as in heteroclinic paths among attractors ; tuning in this regime , the oscillations with time may abruptly change from regular to chaotic and vice versa . we show how these observations , which may be relevant concerning computational strategies , closely resemble some actual situations related to both searching and states of attention in the brain . pacs : 02.50.ey ; 05.45.gg ; 05.70.ln ; 87.18.sn ; 89.20.-a
|
historically , the constraint - programming ( cp ) community has focused on developing open , extensible optimization tools , where the modeling and the search procedure can be specialized to the problem at hand .this focus stems partially from the roots of cp in programming languages and partly from the rich modeling language typically found in cp systems .while this flexibility is appealing for experts in the field , it places significant burden on practitioners , reducing its acceptance across the wide spectrum of potential users . in recent years however , the constraint - programming community devoted increasing attention to the development of black - box constraint solvers .this new focus was motivated by the success of mixed - integer programming ( mip ) and sat solvers on a variety of problem classes .mip and sat solvers are typically black - box systems with automatic model reformulations and general - purpose search procedures .as such , they allow practitioners to focus on modeling aspects and may reduce the time to solution significantly .this research is concerned with one important aspect of black - box solvers : the implementation of a robust search procedure . in recent years, various proposals have addressed this issue . impact - based search( ibs ) is motivated by concepts found in mip solvers such as strong branching and pseudo costs .subsequent work about solution counting can be seen as an alternative to impacts that exploits the structure of cp constraints .the weighted - degree heuristic ( wdeg ) is a direct adaptation of the sat heuristic vsids to csps that relies on information collected from failures to define the variable ordering .this paper proposes activity - based search ( abs ) , a search heuristic that recognizes the central role of constraint propagation in constraint - programming systems .its key idea is to associate with each variable a counter which measures the activity of a variable during propagation .this measure is updated systematically during search and initialized by a probing process .abs has a number of advantages compared to earlier proposals .first , it does not deal explicitly with variable domains which complicates the implementation and runtime requirements of ibs .second , it does not instrument constraints which is a significant burden in solution - counting heuristics .third , it naturally deals with global constraints , which is not the case of wdeg since all variables in a failed constraint receive the same weight contribution although only a subset of them is relevant to the conflict .abs was compared experimentally to ibs and wdeg on a variety of benchmarks .the results show that abs is the most robust heuristic and can produce significant improvements in performance over ibs and wdeg , especially when the problem complexity increases .the rest of the paper is organized as follows .sections [ sec : impact ] and [ sec : wdeg ] review the ibs and wdeg heuristics .section [ sec : activity ] presents abs .section [ sec : experimental ] presents the experimental results and section [ sec : ccl ] concludes the paper .impact - based search was motivated by the concept of pseudo - cost in mip solvers .the idea is to associate with a branching decision a measure of how effectively it shrinks the search space .this measure is called the _impact _ of the branching decision .[ [ formalization ] ] formalization + + + + + + + + + + + + + let be a csp defined over variables , domains , and constraints .let denote the domain of variable and 
denote the size of this domain .a trivial upper - bound on the size of the search space of is given by the product of the domain sizes : at node , the search procedure receives a csp , where and is the constraint posted at node . labeling a variable with value adds a constraint to to produce , after propagation , the csp .the contraction of the search space induced by a labeling is defined as when the assignment produces a failure since and whenever , i.e. , whenever there is almost no domain reduction . an _estimate _ of the impact of the labeling constraint over a set of search tree nodes can then be defined as actual implementations ( e.g. , ) rely instead on where is a parameter of the engine and the subscripts in and denote the impact before and after the update .clearly , yields a forgetful strategy , gives a running average that progressively decays past impacts , while a choice favors past information over most recent observations .the ( approximate ) impact of a variable at node is defined as to obtain suitable estimates of the assignment and variable impacts at the root node , ibs simulates all the possible assignments . for large domains ,domain values are partitioned in blocks .namely , for a variable , let with .the impact of a value ( ) is then set to . with partitioning , the initialization costs drop from propagations to propagations ( one per block ) .the space requirement for ibs is , since it stores the impacts of all variable / value pairs .[ [ the - search - procedure ] ] the search procedure + + + + + + + + + + + + + + + + + + + + ibs defines a variable and a value selection heuristic .ibs first selects a variable with the largest impact , i.e. , .it then selects a value with the least impact , i.e. , .neither nor are guaranteed to be a singleton and , in case of ties , ibs breaks the ties uniformly at random . as any randomized search procedure, ibs can be augmented with a restart strategy .a simple restarting scheme limits the number of failures in round to and increases the limit between rounds to where .wdeg maintains , for each constraint , a counter ( weight ) representing the number of times a variable appears in a failed constraint , i.e. , a constraint whose propagation removes all values in the domain of a variable .the weighted degree of variable is defined as \mbox { s.t . }x \in vars(c ) x \wedge |futvars(c)|>1\ ] ] where is the set of uninstantiated variables in .wdeg only defines a variable selection heuristic : it first selects a variable with the smallest ratio .all the weights are initialized to 1 and , when a constraint fails , its weight is incremented .the space overhead of wdeg is for a csp .abs is motivated by the key role of propagation in constraint - programming solvers .contrary to sat solvers , cp uses sophisticated filtering algorithms to prune the search space by removing values that can not appear in solutions .abs exploits this filtering information and maintains , for each variable , a measure of how often the domain of is reduced during the search .the space requirement for this statistic is .abs can _ optionally _ maintain a measure of how much activity can be imputed to each assignments in order to drive a value - selection heuristic .if such a measure is maintained , the space requirement is proportional to the number of distinct assignments performed during the search and is bounded by formalization + + + + + + + + + + + + + given a csp , a cp solver applies a constraint - propagation algorithm after a labeling decision . 
produces a new domain store enforcing the required level of consistency . applying to identifies a subset of affected variables defined by the _ activity _ of , denoted by , is updated at each node of the search tree by the following two rules : where is the subset of affected variables and be an age decay parameter satisfying .the aging only affects free variables since otherwise it would quickly erase the activity of variables labeled early in the search .the activity of an assignment at a search node is defined as the number of affected variables in when applying on , i.e. , as for impacts , the activity of over the entire tree can be estimated by an average over all the tree nodes seen so far , i.e. , over the set of nodes .the estimation is thus defined as once again , it is simpler to favor a weighted sum instead where the subscripts on capture the estimate before and after the update . when the value heuristic is not used , it is not necessary to maintain _ which reduces the space requirements without affecting variable activities . _[ [ the - search - procedure-1 ] ] the search procedure + + + + + + + + + + + + + + + + + + + + abs defines a variable ordering and possibly a value ordering .it selects the variable with the largest ratio , i.e. , the most active variable per domain value .ties are broken uniformly at random .when a value heuristic is used , abs selects a value with the least activity , i.e. , .the search procedure can be augmented with restarts .the activities can be used `` as - is '' to guide the search after a restart .it is also possible to reinitialize activities in various ways , but this option was not explored so far in the experimentations . [[ initializing - activities ] ] initializing activities + + + + + + + + + + + + + + + + + + + + + + + abs uses probing to initialize the activities .consider a path going from the root to a leaf node in a search tree for the csp .this path corresponds to a sequence of labeling decisions in which the decision labels variable with .if is the subset of variables whose domains are filtered as a result of applying after decision , the activity of variable along path is defined as where if was never involved in any propagation along and if the domain of was filtered by each labeling decision in .also , when ( no aging ) and path is followed .now let us now denote the set of all paths in some search tree of .each such path defines an activity for each variable .ideally , we would want to initialize the activities of as the average over all paths in , i.e. , abs initializes the variables activities by sampling to obtain an estimate of the mean activity from a sample .more precisely , abs repeatedly draws paths from .these paths are called _ probes _ and the assignment in a probe is selected uniformly at random as follows : ( 1 ) is a free variable and ( 2 ) value is picked from . during the probe execution ,variable activities are updated normally but no aging is applied in order to ensure that all probes contribute equally to .observe that some probes may terminate prematurely since a failure may be encountered ; others may actually find a solution if they reach a leaf node .moreover , if if a failure is discovered at the root node , singleton arc - consistency has been established and the value is removed from the domain permanently . 
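a compact sketch of the bookkeeping just described , written as plain python around a generic solver : `domain_size` , the propagation callback and the random generator are placeholders for whatever the host cp solver provides , the aging factor 0.999 is an assumption ( the value used in the experiments is elided below ) , and the rule ordering follows the two update rules above , age every free variable , then credit the variables whose domains were filtered .

```python
import random


class ActivityBookkeeping:
    """activity-based search (abs) statistics, as described above."""

    def __init__(self, variables, gamma=0.999):
        self.gamma = gamma                        # slow aging; 0.999 is an assumed value
        self.activity = {x: 0.0 for x in variables}

    def after_propagation(self, free_vars, affected_vars):
        # rule 1: age the activity of every variable that is still free
        for x in free_vars:
            self.activity[x] *= self.gamma
        # rule 2: credit every variable whose domain was just filtered
        for x in affected_vars:
            self.activity[x] += 1.0

    def select_variable(self, free_vars, domain_size, rng=random):
        # largest activity per remaining value, ties broken uniformly at random
        score = lambda x: self.activity[x] / domain_size(x)
        best = max(score(x) for x in free_vars)
        ties = [x for x in free_vars if abs(score(x) - best) < 1e-9]
        return rng.choice(ties)
```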
the number of probes is chosen to provide a good estimate of the mean activity over the paths .the probing process delivers an empirical distribution of the activity of each variable with mean and standard deviation .since the probes are iid , the distribution can be approximated by a normal distribution and the probing process is terminated when the 95% confidence interval of the t - distribution , i.e. , when \ ] ] is sufficiently small ( e.g. , within of the empirical mean ) for all variables with being the number of probes , observe that this process does not require a separate instrumentation .it uses the traditional activity machinery with .in addition , the probing process does not add any space requirement : the sample mean and the sample standard deviation are computed incrementally , including the activity vector for each probe as it is completed .if a value heuristic is used the sampling process also maintains for every labeling decision attempted during the probes .[ [ the - configurations ] ] the configurations + + + + + + + + + + + + + + + + + + all the experiments were done on a macbook pro with a core i7 at 2.66ghz running macos 10.6.7 .ibs , wdeg , and abs were were all implemented in the comet system . since the search algorithmsare in general randomized , the empirical results are based on 50 runs and the tables report the average ( ) and the standard deviation of the running times in seconds . unless specified otherwise , a timeout of 5 minutes was used and runs that timeout were assigned a runtime .the following parameter values were used for the experimental results : in both ibs and abs , ( slow aging ) , and .experimental results on the sensitivities of these parameters will also be reported .for every heuristic , the results are given for three strategies : no restarts ( ) , fast restarting ( ) or slow restarting ( ) .depending on which strategy is best across the board .the initial failure limit is set to .[ [ search - algorithms ] ] search algorithms + + + + + + + + + + + + + + + + + the search algorithms were run on the exact same models , with a single line changed to select the search procedure . in our experiments , ibs does _ not _ partition the domains when initializing the impacts and _ always _ computes the impacts exactly .both the variable and value heuristics break ties randomly . 
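a small helper for the probing termination criterion described earlier in this section : stop drawing probes once the 95% t-distribution confidence interval around every variable's mean activity lies within a chosen fraction of that mean . the 5% relative width and the use of scipy are choices made for this sketch , since the exact threshold is elided above .

```python
import numpy as np
from scipy import stats

def probes_sufficient(activity_samples, rel_width=0.05):
    """activity_samples: array of shape (n_probes, n_variables), one row per probe.
    returns True when the 95% confidence interval of every variable's mean
    activity is within rel_width of that mean."""
    n = activity_samples.shape[0]
    if n < 2:
        return False
    mean = activity_samples.mean(axis=0)
    sem = activity_samples.std(axis=0, ddof=1) / np.sqrt(n)
    half_width = stats.t.ppf(0.975, df=n - 1) * sem
    # guard against variables never touched by any probe (zero mean activity)
    return bool(np.all(half_width <= rel_width * np.maximum(mean, 1e-12)))
```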
in wdeg , no value heuristic is used : the values are tried in the sequential order of the domain .ties in the variable selection are broken randomly .all the instances are solved using the same parameter values as explained earlier .no comparison with model - counting heuristic is provided , since these are not available in publicly available cp solvers .[ [ benchmarks ] ] benchmarks + + + + + + + + + + the experimental evaluation uses benchmarks that have been widely studied , often by different communities .the multi - knapsack and magic square problems both come from the ibs paper .the progressive party has been a standard benchmark in the local search , mathematical - programming , and constraint - programming communities , and captures a complex , multi - period allocation problem .the nurse rostering problem originated from a mathematical - programming paper and constraint programming was shown to be a highly effective and scalable approach .the radiation problem is taken from the 2008 minizinc challenge and has also been heavily studied .these benchmarks typically exploit many features of constraint - programming systems including numerical , logical , reified , element , and global constraints .[ [ multi - knapsack ] ] multi - knapsack + + + + + + + + + + + + + + .experimental results on multi - knapsack .[ cols= " < ,< , > , > , > , > , > , > " , ] [ [ sensitivity - to - the - sample - size ] ] sensitivity to the sample size + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + figure [ expe : ci ] illustrates graphically the sensitivy of abs to the confidence interval parameter used to control the number of probes in the initialization process .the statistics are based on 50 runs of the non - restarting strategy .the boxplots show the four main quartiles for the running time ( in seconds ) of abs with ranging from 0.8 down to 0.05 .the blue line connects the medians whereas the red line connects the means .the circles beyond the extreme quartiles are outliers .the left boxplot shows results on ` msq-10 ` while the right one shows results on the optimization version of ` knap1 - 4 ` .the results show that , as the number of probes increases ( i.e. 
, becomes smaller ) , the robustness of the search heuristic improves and the median and the mean tend to converge .this is especially true on ` knap1 - 4 ` , while ` msq-10 ` still exhibits some variance when .also , the mean decreases with more probes on ` msq-10 ` , while the mean increases on ` knap1 - 4 ` as the probe time becomes more important .the value used in the core experiments seem to be a reasonable compromise ..,title="fig : " ] .,title="fig : " ] [ [ sensitivity - to - gamma - aging ] ] sensitivity to ( aging ) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + figure [ expe : gamma ] illustrates the sensitivity to the aging parameter .the two boxplots are once again showing the distribution of running times in seconds for 50 runs of ` msq-10 ` ( left ) and ` knap1 - 4 ` ( right ) .what is not immediately visible on the figure is that the number of timeouts for ` msq-10 ` increases from for to for .overall , the results seem to indicate that the slow aging process is desirable ..,title="fig : " ] .,title="fig : " ] figure [ expe : relspeed ] depicts the behavior of abs and ibs on ` radiation # 9 ` under all three restarting strategies .the axis is the running time in a logarithmic scale and the axis is the objective value each time a new upper bound is found .the three red curves depict the performance of activity - based search , while the three blue curves correspond to impact - based search .what is striking here is the difference in behavior between the two searches .abs quickly dives to the optimal solution and spends the remaining time proving optimality . without restarts ,activity - based search hits the optimum within 3 seconds . with restarts, it finds the optimal solution within one second and the proof of optimality is faster too .ibs slowly reaches the optimal solution but then proves optimality quickly .restarts have a negative effect on ibs .we conjecture that the reduction of large domains may not be such a good indicator of progress and may bias the search toward certain variables .figure [ expe : histo ] provide interesting data about activities on ` radiation # 9 ` .in particular , figure [ expe : histo ] gives the frequencies of activity levels at the root , and plots the activity levels for all the variables .( only the variables not fixed by singleton arc - consistency are shown in the figures ) .the two figures highlight that the probing process has isolated a small subset of the variables with very high activity levels , indicating that , on this benchmark , there are relatively few very active variables .it is tempting to conjecture that this benchmark has backdoors or good cycle - cutsets and that activity - based search was able to discover them , but more experiments are needed to confirm or disprove this conjecture .robust search procedures is a central component in the design of black - box constraint - programming solvers .this paper proposed activity - based search , the idea of using the activity of variables during propagation to guide the search .a variable activity is incremented every time the propagation step filters its domain and is aged otherwise .a sampling process is used to initialize the variable activities prior to search .activity - based search was compared experimentally to the ibs and wdeg heuristics on a variety of benchmarks .the experimental results have shown that abs was significantly more robust than both ibs and wdeg on these classes of benchmarks and often produces significant 
performance improvements .m. w. moskewicz , c. f. madigan , y. zhao , l. zhang , and s. malik .chaff : engineering an efficient sat solver . in _ proceedings of the 38th annual design automation conference _ , dac '01 , pages 530 - 535 , new york , ny , usa , 2001 .n. nethercote , p. j. stuckey , r. becket , s. brand , g. j. duck , and g. tack .minizinc : towards a standard cp modelling language . in _ proc . of the 13th international conference on principles and practice of constraint programming _ , pages 529 - 543 .springer , 2007 .p. schaus , p. van hentenryck , and j .- c .scalable load balancing in nurse to patient assignment problems . in w .- j . van hoeve and j. hooker , editors , _ integration of ai and or techniques in constraint programming for combinatorial optimization problems _ , volume 5547 of _ lecture notes in computer science _ , pages 248 - 262 .springer berlin / heidelberg , 2009 .r. williams , c. p. gomes , and b. selman .backdoors to typical case complexity . in _ proceedings of the 18th international joint conference on artificial intelligence _ , pages 1173 - 1178 , san francisco , ca , usa , 2003 .
|
robust search procedures are a central component in the design of black - box constraint - programming solvers . this paper proposes activity - based search , the idea of using the activity of variables during propagation to guide the search . activity - based search was compared experimentally to impact - based search and the wdeg heuristics . experimental results on a variety of benchmarks show that activity - based search is more robust than other heuristics and may produce significant improvements in performance .
|
lattice networks are frequently employed to describe the mechanical response of materials and structures that are discrete by nature at one or more length scales , such as 3d - printed structures , woven textiles , paper , or foams . for lattice networks representing fibrous microstructures for instance , individual fibrescan be identified with one - dimensional springs or beams .further examples are the models of e.g. , , , and .the reason why lattice models may be preferred over conventional continuum theories and finite element ( fe ) approaches , is twofold .first , the meaning and significance of the physical parameters associated with individual interactions in the lattice networks is easy to understand , whereas the parameters in constitutive continuum models represent the small - scale mechanics only in a phenomenological manner .an example is the young s modulus or ultimate strength of a spring or beam ( fibre or yarn ) versus that of the network .second , the formulation and implementation of lattice models is generally significantly easier compared to that of alternative continuum models .large deformations , large yarn re - orientations , and fracture are for instance easier to formulate and implement ( cf .e.g. the continuum model of that deals with large yarn re - orientations ) .thanks to the simplicity and versatility of lattice networks , they are furthermore used for the description of heterogeneous cohesive - frictional materials .the reason is that discrete models can realistically represent distributed microcracking with gradual softening , implement material structure with inhomogeneities , capture non - locality of damage processes , and reflect deterministic or stochastic size effects .examples of the successful use of lattice models for such materials are given by , , , and .as lattice models are typically constructed at the meso- , micro- , or nano - scale , they require reduced - model techniques to allow for application - scale simulations .a prominent example is the quasicontinuum ( qc ) method , which specifically aims at discrete lattice models .the qc method was originally introduced for conservative atomistic systems by and extended in numerous aspects later on , see e.g. .subsequent generalizations for lattices with dissipative interactions ( e.g. plasticity and bond sliding ) were provided by . in principle , the qc is a numerical procedure that can deal with local lattice - level phenomena in small regions of interest , whereas the lattice model is coarse grained in the remainder of the domain .the aim of this contribution is to develop a qc framework that can deal with the initiation and subsequent propagation of damage and fracture in the underlying structural lattice model .because such a phenomenon tends to be a highly localized and rather unstable process , sensitive to local mesh details , the qc framework must fully refine in critical regions _ before _ any damage occurs in order to capture the physics properly , cf .[ sect : introduction : fig:1 ] . moreover ,as the entire problem is evolutionary , the location of the fully resolved region must evolve as well , which requires an adaptive qc framework .several previous studies have also focused on adaptivity in qc methodologies , but they were always dealing with atomistic lattice models at the nano - scale , see e.g. 
, , .contrary to that , this contribution focuses on structural lattice networks for materials with discreteness at the meso - scale .the qc approach aimed for is schematically presented in fig .[ sect : introduction : fig:1 ] .the macro - scale fracture emerges as individual interactions failures at the lattice level .their damage leads to strain - softening and hence , the fracture process zone remains spatially localized .consequently , only the crack tip and the process zone have to be fully resolved .the displacement fluctuations elsewhere remain small , allowing for efficient interpolation and coarse graining .due to the spatial propagation of the crack front through the system of interest , available qc formulations need to be generalized to involve dissipation induced by damage , and an adaptive meshing scheme that includes a suitable marking strategy needs to be developed .the theoretical framework employed in this contribution is closely related to our variational qc formulation for hardening plasticity discussed in .the present work can be viewed as an extension towards lattices with localized damage and with an adaptive refinement strategy . in principle , the overall procedure is based on the variational formulation by , developed for _ rate - independent _ inelastic systems .this variational formulation employs at each time instant an incremental potential energy , that can be minimized with respect to the observable ( kinematic ) as well as the internal ( history , dissipative ) variables , i.e. where denotes a generalized state variable .hence , this formulation is different from the one employed in the virtual - power - based qc framework of , which is based on the virtual - power statement of the lattice model in combination with a coleman - noll procedure .note that earlier variational formulations for rate - independent systems can be found e.g. in , , , , and .the theoretical concepts of the variational formulation and its application to damaging lattice models are discussed in section [ sect : varform ] . after the incremental energy is presented for the full lattice system ,two reduction steps can be applied to it in analogy to the standard qc framework , see e.g. , , . in the first step , _ interpolation _constrains the displacements of all atoms according to the displacements of a number of selected representative atoms , or _repatoms _ for short .this procedure reduces the number of degrees of freedom drastically . in the second step ,only a small number of atoms is sampled to approximate the exact incremental energy , its gradients , and hessians , analogously to the numerical integration of fe technology .this step is referred to as _ summation _ , and it entails also a significant reduction of the number of internal variables .together , the two steps yield a reduced state variable and an approximate incremental energy .in section [ sect : qc ] , a more detailed discussion about qc techniques , adaptive modelling , marking strategy , and mesh refinement will be provided .consequences of the proposed mesh refinement strategy will be discussed also from the energetic point of view .the minimization problem provides governing equations presented in section [ sect : sol ] , where a suitable solution strategy is described .incremental energy minimization procedures often employ some version of the alternating minimization ( am ) strategy , see e.g. . the approach used here, however , minimizes the so - called _ reduced energy _ , i.e. 
the energy potential with eliminated internal variables , cf . or . as a result ,the overall solution process simplifies and is more efficient compared to the am approach . in section [sect : examples ] , the proposed theoretical developments are first applied to an l - shaped plate test . the force - displacement diagrams and crack paths predicted with the adaptive qc approach are compared to those predicted with full lattice computations .further , the energy consistency during the entire evolution is assessed and the errors in energies are discussed .the second numerical example focuses on the antisymmetric four - point bending test , described e.g. in .it demonstrates the ability of the adaptive qc scheme to predict nontrivial , curved crack paths .finally , this contribution closes with a summary and conclusions in section [ sect : conclusion ] .in this section , we recall the general variational theory for rate - independent systems , followed by the geometric setting , description of state variables , and by the construction of energies .the entire exposition will be confined to 2d systems , but the extension to 3d is straightforward .let us note in addition that throughout this work the term `` atoms '' will be used to refer to individual lattice nodes or particles , consistently with the original qc terminology developed for atomistic systems .the evolution of a system within a time horizon ] can be arbitrarily rescaled without any influence on the results .the system of interest is fully specified by the total free ( helmholtz type ) energy \times\mathscr{q}\rightarrow\mathbb{r} ] is called an _ energetic solution _ of the energetic rate - independent system if it satisfies the following _ stability condition _ and _ energy balance _ for all ] is the internal free energy , \rightarrow \mathscr{r}^ * \times \mathscr{z}^* ] . upon introducing a discretization of the time interval ] , . throughout this contribution ,greek indices refer to atom numbers whereas latin indices are reserved for spatial coordinates and other integer parametrizations .the nearest neighbours of an atom are furthermore stored in a set .in contrast to atomistic lattices , the nearest neighbours of each atom do not change in time .the initial distance between two neighbouring atoms and and the set of all initial distances between neighbouring atoms within the network are defined as [ subsect : disslatt : eq:1 ] where is the euclidean norm . since , the set in consists of components , where is the number of all interactions collected in an index set .the above introduced symbol will be employed below in two contexts .first , in the context of atoms , the symbol measures the distance in the reference configuration between two atoms , as used in eq . .second , in the context of interactions , the same symbol measures the length of the -th interaction in the reference configuration , , , with end atoms . a similar convention holds also for other physical quantities . and current configuration . ] as the body deforms , the atoms in the reference configuration transform to a current configuration .the deformed locations of all atoms are specified by their position vectors , . in analogy to ,they are collected in a column matrix ^\mathsf{t} ] . 
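to make the geometric bookkeeping above concrete , a minimal sketch of how the reference data of such a lattice can be assembled is given below ; it assumes , purely for illustration , a square grid with horizontal , vertical and diagonal nearest - neighbour interactions , and all container names are our own rather than part of the formulation .

```python
import itertools
import numpy as np

def build_lattice(nx, ny, spacing=1.0):
    """Reference data for a regular lattice of atoms (illustrative only).

    Assumes a square grid with horizontal, vertical and diagonal
    nearest-neighbour interactions; the actual model may use a
    different connectivity.
    """
    # reference positions of all atoms, stored row-wise
    atoms = np.array([[i * spacing, j * spacing]
                      for j in range(ny) for i in range(nx)])

    def index(i, j):
        return j * nx + i

    neighbours = {a: set() for a in range(nx * ny)}
    interactions = []  # index set of all bonds (alpha, beta) with alpha < beta
    offsets = [(1, 0), (0, 1), (1, 1), (1, -1)]  # horizontal, vertical, two diagonals
    for i, j in itertools.product(range(nx), range(ny)):
        for di, dj in offsets:
            k, l = i + di, j + dj
            if 0 <= k < nx and 0 <= l < ny:
                a, b = index(i, j), index(k, l)
                neighbours[a].add(b)
                neighbours[b].add(a)
                interactions.append((a, b))

    # initial lengths of all interactions in the reference configuration
    r0 = {(a, b): np.linalg.norm(atoms[b] - atoms[a]) for a, b in interactions}
    return atoms, neighbours, interactions, r0

atoms, neighbours, interactions, r0 = build_lattice(4, 3)
print(len(atoms), len(interactions))  # 12 atoms and their bonds
```

the deformed lengths follow in exactly the same way from the current positions , and the list of interactions plays the role of the index set used in the sums over interaction energies .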
in order to prevent any damage evolution underapplied dirichlet boundary conditions , local stiffening is used in the vicinity of ( the young s modulus is times larger than elsewhere and the limit elastic strain is infinite ) .the overall evolution of the system is controlled by the cmod , recall eq . where we set , meaning that the indirect displacement control parameter is equal to .the numerical study has been performed for three systems .the first one is the fully - resolved lattice , whereas the other two correspond to the adaptive qc with different safety margins , cf .the first qc system uses a _ moderate _ ( ) and the second a _ progressive _ ( ) refinement strategy ; these systems are referred to as _ moderate qc _ and _ progressive qc _ for brevity . rather low values of are necessary because of the steep peaks of interaction ( and site ) energies occurring in the vicinity of the crack tip . the deformed configuration predicted by the full lattice computation at depicted in fig .[ subsect : simpleex : fig:1a ] ; note that only the atoms are shown .it can be observed that the crack initiates at the inner corner ( ) , and propagates horizontally leftward .for all three systems , the crack paths are identical ( not shown ) .the reaction force as a function of is plotted in fig .[ subsect : simpleex : fig:1b ] . in the initial stages , the reaction forces increase linearly , with the smallest stiffness corresponding to the fully - resolved system . because both qc systems have the same initial triangulations ( cf . figs .[ subsect : simpleex : fig:3a ] and [ subsect : simpleex : fig:3e ] ) , their stiffnesses are identical . for an increasing applied displacement ,the progressive qc starts to refine first , followed by the moderate one . as the refinement process entails the relaxation of the geometric interpolation constraints and hence , a decrease of stiffness , the reaction forces approach the values of the fully - resolved system .the peak force is nevertheless overestimated by approximately by both qc systems .the post - peak behaviour exhibits a mild structural snap - back , and the curves match satisfactorily with negligible error for the post - peak part of the diagram ( when the crack fully localizes ) .overall , it can be concluded that the results are more accurate for a lower safety margin .this entails , however , an increase in the number of repatoms and hence , decrease in efficiency .nevertheless , once the crack fully localizes , the differences between the results for various values of decrease , though the lower value is generally more accurate .the energy profiles are shown in fig . [subsect : simpleex : fig:2a ] as functions of .the elastic energy increases quadratically until the peak load and then drops gradually as the fracture process occurs .near the peak load , the dissipated energy increases rapidly .it then continues to grow at a more moderate rate throughout the softening part of the load - displacement response .notice that the energy balance is satisfied along the entire loading path , since the thin dotted lines corresponding to the work performed by external forces lie on top of the thick dashed lines representing .the maximum relative unbalance between and is below . from the energy point of view, it can be concluded that both qc systems approximate the results of the full lattice system well . 
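the energy balance quoted above can be verified directly from the recorded histories ; the following sketch is one way to do so , in which the trapezoidal approximation of the external work and all array names are our own assumptions rather than part of the solver .

```python
import numpy as np

def energy_balance_error(u, F, E_el, D_var):
    """Relative unbalance between external work and stored plus dissipated energy.

    u     : control displacement history (e.g. the CMOD-based parameter)
    F     : conjugate reaction force history
    E_el  : elastic (stored) energy history
    D_var : dissipated energy history
    All arrays share the same time discretisation t_0, ..., t_k.
    """
    # external work approximated incrementally by the trapezoidal rule
    W_ext = np.concatenate(([0.0], np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(u))))
    unbalance = W_ext - (E_el + D_var)
    scale = np.maximum(np.abs(W_ext), 1e-12)  # avoid division by zero at t_0
    return np.max(np.abs(unbalance) / scale)

# example with synthetic histories
u = np.linspace(0.0, 1.0, 11)
F = 2.0 * u                 # linear elastic response
E_el = 0.5 * 2.0 * u**2     # stored energy of the same response
D = np.zeros_like(u)        # no dissipation in this toy case
print(energy_balance_error(u, F, E_el, D))  # ~0 for this linear toy response
```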
from the -curveswe deduce that the crack starts to propagate first according to the moderate qc approach ( because of its highest stiffness ) , and that the cracks corresponding to the progressive qc and to the full - lattice solution initiate almost at the same instant .let us recall section [ subsect : meshenergy ] and note that the energy profiles presented in fig .[ subsect : simpleex : fig:2a ] correspond to reconstructed energies evaluated at time instants , cf .[ subsect : meshenergy : fig:1 ] .the energy components that are exchanged ( the artificial energies ) during the moderate qc prediction are presented in fig .[ subsect : simpleex : fig:2b ] .we see that their magnitudes are large compared to the two physical energies ( and ) .this means that the energy - reconstruction procedure described in section [ subsect : meshenergy ] is essential and that the artificial energies can not be neglected . in fig .[ subsect : simpleex : fig:4 ] , the number of repatoms normalized by the total number of atoms ( i.e. ) as a function of is shown . because both curves are situated below , and the moderate qc below , appreciable computational savings are achieved . in terms of sampling interactions ,the relative numbers of sampling interactions are slightly higher .namely , below and for the progressive and moderate approach . in the l - shaped plate simulations as a function of .] + for completeness , fig .[ subsect : simpleex : fig:3 ] shows eight snapshots of the mesh evolution . although both initial meshes are similar , different safety margins cause the fully - resolved region of the progressive qc approach to be larger .consequently , the obtained results are more accurate , but at the price that also regions far from the crack path are refined ( e.g. along the part of the boundary , cf .[ subsect : simpleex : fig:3h ] ) .the mesh of the moderate approach remains more localized , at the expense of a minor loss of accuracy . in the second example, a rectangular domain is exposed to antisymmetric four - point bending , cf .[ subsect : complexex : fig:1 ] and , e.g. , .the homogeneous body is pre - notched from the top edge to initiate a crack , and stiffened locally where the dirichlet and neumann boundary conditions are applied ( again , the young s modulus is times larger than elsewhere and the limit elastic strain is infinite to prevent any damage evolution ) .the lattice spacing is of a unit length in both directions .the entire specimen consists of atoms connected by interactions .the ( vertical ) forces and are prescribed as , cf .[ subsect : complexex : fig:1 ] , where is the indirect displacement control parameter , cf .eq . and the discussion on it .in contrast to the previous example , the sum of cmod and cmsd is used to control the simulation ( recall eq . where and ) .this combination of the two measures is required because of the following reasons .initially , the cmod is close to zero or even negative whereas the cmsd drives the evolution . in the later stages , however , the cmsd is constant while cmod parametrizes the process .their sum , therefore , naturally switches between the two approaches , see also fig .[ subsect : complexex : fig:2a ] . due to a higher brittleness compared to the previous example , cf . 
tab .[ sect : examples : tab:1 ] , two loading rates for , as specified in fig .[ subsect : complexex : fig:2a ] , are used .the numerical example is studied only for the two qc systems : the moderate qc , , and the progressive qc , , with no validation against the fully - resolved solution . in order to achieve a higher accuracy using the progressive approach ,a globally fine initial mesh is used , in which the maximum triangle edge length is restricted to lattice spacings . for the moderate approach ,the mesh is as coarse as possible to describe the specimen geometry by a right - angled triangulation .the initial meshes are the top triangulations in fig .[ subsect : complexex : fig:5 ] ( figs .[ subsect : complexex : fig:5a ] and [ subsect : complexex : fig:5e ] ) . the deformed configuration predicted by the progressive qc approach at presented in fig .[ subsect : complexex : fig:2b ] . in qualitative accordance with experimental data , see e.g. , section 4.1 , the crack path initiates at the right bottom corner of the notch , subsequently curves downwards and then approaches the bottom part of the boundary to the right side of the force .the crack paths predicted by both adaptive qc schemes are presented on the undeformed configuration in fig .[ subsect : complexex : fig:3a ] . herewe notice that the results are almost identical .the total applied force , recall eq ., is plotted in fig .[ subsect : complexex : fig:3b ] against .although the initial triangulations differ significantly ( cf . figs .[ subsect : complexex : fig:5a ] and [ subsect : complexex : fig:5e ] ) , the initial slopes are practically identical . as the moderate qc refines later and less extensively , the peak force is overestimated by the moderate qc compared to the progressive qc approach by approximately . in the post - peak region ,both curves are practically identical again .the reconstructed energy profiles of both refinement approaches are presented in fig .[ subsect : complexex : fig:4a ] .from there it may be concluded that the results match well .moreover , we see that both solutions satisfy the energy balance along the entire loading path . in fig .[ subsect : complexex : fig:4b ] , substantial energy exchanges ( i.e. artificial energies ) due to mesh refinement can again be observed ( similar to the first numerical example ) . because the size of the fully - resolved domain is small compared to the entire domain, both adaptive qc approaches achieve a substantial computational gain .this is supported by fig .[ subsect : complexex : fig:6 ] in which the relative numbers of repatoms are presented .the ratio remains below and even below for the moderate refinement strategy . in the case of the relative numbers of sampling interactions ,the ratios remain below and for the progressive and moderate approaches .note also that the number of repatoms increases rapidly near the peak load for the progressive approach whereas it develops more gradually for the moderate one . finally , in fig .[ subsect : complexex : fig:5 ] several snapshots that capture the evolution of the triangulations are presented .it can again be noticed that the progressive approach refines until quite far from the crack tip , whereas the fully - refined region in the moderate qc remains localized . as a function of .eight chosen triangulations corresponding to , , , and are presented in fig .[ subsect : complexex : fig:5 ] . 
] + + +in this contribution , we have developed an energy - based dissipative qc approach for regular lattice networks with damage and fracture .the study shows that the efficiency of the qc methodology applies also to brittle phenomena , and that together with an adaptive refinement strategy it provides a powerful tool to predict crack propagation in lattice networks .the main results can be summarized as follows : 1 .the general variational formulation for rate - independent processes by was rephrased for the case of lattice networks with damage .the two standard qc steps , interpolation and summation , were revisited from an adaptive point of view . for the interpolation ,meshes with right - angled triangles were used because of their * ability to naturally refine to the fully - resolved underlying lattice * binary - tree structure that allows for fast and efficient data transfer * significant reduction of the summation part of the qc error .3 . to determine the location of the critical region , a heuristic marking strategy with a variable parameter that controls the accuracy of the simulation was proposed .the mesh refinement procedure was discussed from an energetic standpoint , and the significance of the reconstruction procedure with respect to the energy consistency was shown .the numerical examples demonstrated that the introduced marking strategy is capable of satisfactorily predicting the evolution of the crack path and the load - displacement response , especially in the post - peak region .solutions obtained using indirect load displacement control satisfied the energy equality condition .let us note that as the crack tip propagates throughout the body , it would be convenient to include besides the mesh refinement ahead of it also mesh coarsening in its wake .furthermore , instead of the proposed heuristic marking strategy , techniques such as goal - oriented error estimators may be implemented to improve further the performance of the adaptive qc .both aspects enjoy our current interest and will be reported separately .this appendix provides the details on the dissipation function needed in the dissipation distance of eq . for the constitutive law with exponential softening shown in fig .[ sect : examples : fig:1 ] , cf . also eq . .because damage evolves only under tension , recall section [ subsect : energies ] , we assume in the remainder of this section that for the ease of notation . recall that , in accordance with eq . , denotes the initial length of the interaction , and its ( admissible ) deformed length . in order to make the interaction behaviour geometry - independent, we introduce the interaction strain , , and the interaction stress ( the superscripts are dropped for the sake of brevity ) where the normal force of the interaction is denoted as , and where we have used . remind that is the young s modulus , the cross - sectional area , and associated ( admissible ) damage variable . the pair potential from eq .can be rewritten as where we have emphasized that the admissible strain is a function of the admissible length . to construct a constitutive model that under monotonic damage evolution displays a specific stress - strain response ,say where is a given target softening function ( recall eq . and fig .[ sect : examples : fig:1 ] ) , the damage variable is considered as a function of the current strain . using the constitutive relation in eq . while employing , one obtains i.e. the damage variable as a function of the strain . 
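for the exponential softening used in the examples , this relation can be evaluated explicitly ; the sketch below assumes a common form of the softening branch , sigma = e * eps_0 * exp( -( eps - eps_0 ) / eps_f ) , which may differ in detail from the law of eq . , so the resulting expression for the damage variable should be read as illustrative only .

```python
import numpy as np

def damage_from_softening(eps, E, eps_0, eps_f):
    """Damage variable omega(eps) reproducing an exponential softening curve.

    Assumes the target stress-strain law
        sigma(eps) = E*eps                              for eps <= eps_0
        sigma(eps) = E*eps_0*exp(-(eps - eps_0)/eps_f)  for eps  > eps_0
    so that (1 - omega)*E*eps = sigma(eps) gives omega in closed form.
    """
    eps = np.asarray(eps, dtype=float)
    omega = np.where(
        eps <= eps_0,
        0.0,
        1.0 - (eps_0 / np.maximum(eps, 1e-30)) * np.exp(-(eps - eps_0) / eps_f),
    )
    return np.clip(omega, 0.0, 1.0)

eps = np.linspace(0.0, 0.1, 6)
print(damage_from_softening(eps, E=1.0, eps_0=0.01, eps_f=0.02))
```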
in the interval of growing damage ,the function is invertible , providing eq .substituted into ( where is now considered as a function of rather than according to ) provides ; recall the first - order optimality conditions in eqs . , where the minimizer was denoted as , cf .also eqs . and .using hats in eq .is therefore a slight abuse of notation as is not entirely arbitrary . ] integrating this relation yields that can be expressed in a closed form for some special cases such as linear softening , cf . , eq . ( 85 ) . for the exponential softening law defined in eq ., the expression attains the following form and is shown in fig .[ sect : a : fig:1a ] . rewriting eq . as can cast its inversion in terms of the lambert transcendental function ( recall the defining equation , cf . , eq .( 1.5 ) ) , \label{sect : a : eq:2}\ ] ] which is the counterpart to eq . .upon substituting this inversion in eq ., we end up with the following integral ^ 2 \,\mathrm{d}\eta , \label{sect : a : eq:3}\ ] ] where have been introduced for brevity .let us further denote and write ( by the defining equation of the lambert function ) which can be differentiated on both sides to yield employing eq . in eq ., we obtain now , a change of variables in the integral according to eq . can be carried out , providing us with the relation on the right hand side of eq . can be expanded as \right .+ w(c_2)e^{-w(c_2 ) } - w(c_2/(1-\widehat{\omega}))e^{-w(c_2/(1-\widehat{\omega } ) ) } \right\ } , \end{aligned } \label{sect : a : eq:9}\ ] ] see also fig .[ sect : a : fig:1b ] where a sketch of is shown .the energy dissipated by the complete failure process then reads .\label{sect : a : eq:10}\ ] ]financial support of this work from the czech science foundation ( gar ) under project no .14 - 00420s is gratefully acknowledged .
|
lattice networks with dissipative interactions can be used to describe the mechanics of discrete micro- or meso - structures of materials such as 3d - printed structures and foams , or more generally heterogeneous materials . this contribution deals with the crack initiation and propagation in such materials and focuses on an adaptive multiscale approach that captures the spatially evolving fracture . lattice networks naturally incorporate non - locality , large deformations , and dissipative mechanisms taking place directly inside fracture zones . because the physically relevant length scales for fracture are significantly larger than those of individual interactions , discrete models are computationally expensive . the quasicontinuum ( qc ) method is a multiscale approach specifically constructed for discrete models . this method reduces the computational cost by fully resolving the underlying lattice only in regions of interest , while coarsening elsewhere . in this contribution , an incremental energy potential corresponding to the full lattice model with damageable interactions is constructed for engineering - scale predictions . subsequently , using a variational qc approach the incremental energy along with governing equations for the upscaled lattice network are derived . in order to deal with the spatially evolving fracture zone in an accurate , yet efficient manner , an adaptive scheme is proposed that utilizes the underlying lattice . implications induced by the adaptive procedure are discussed from the energy - consistency point of view , and theoretical considerations are demonstrated on two examples . the first one serves as a proof of concept , illustrates the consistency of the adaptive schemes , and presents errors in energies compared to the fully - resolved solution . the second one demonstrates the performance of the adaptive qc scheme for a more complex problem for which the fully resolved solution is not accessible . lattice networks , quasicontinuum method , damage , adaptivity , variational formulation , multiscale modelling
|
from gestures and smoke signals to books , paintings , telecom and high - capacity optical fibres connecting continents , light is one of the main information carriers that has been driving human civilisation since antiquity . aside from the cultural aspect of communicating ideas with pictures ,optical information processing is an important economic engine . in 2020 ,the market volume of the photonics industry will be 650 billion euro and is expected to continue to outperform gdp .light is the ideal medium for fast , reliable and high - bandwidth communication .the amount of data that can be transmitted through optical fibres continues to grow , and we are approaching the limit for the capacity of single - mode fibres . to increase the capacity ,multi - mode fibres can be employed to achieve a data transmission rate of 255 terabits per second over a distance of one kilometre .for comparison , this is similar to streaming one hundred thousand full length hd movies _each second_. similarly , imaging is fundamentally an optical task , and it has been known for a long time that the wave nature of light places a limit on how well we can make out small details using microscopes and telescopes . the resolution limit for imagingis called the abbe limit , and depends on the opening angle of the aperture of the imaging device , called the_ numerical aperture_. the bigger the numerical aperture , the higher the resolution .this is why telescopes are made in larger and larger sizes . in microscopy , where one can have more control over the object that is to be imaged, several techniques have been developed that can beat the abbe limit ( e.g. , sted , storm , palm ) .these techniques use prior information about the source and selective activation of photo - emitters to achieve so - called super - resolution .finally , conventional computing faces some important barriers , such as heat generation and bandwidth limitations .classical optics can help heat reduction by using passive elements and reversible computing , and optics also allows us to handle complex calculations by using fan - in and fan - out , in addition to parallelisation .these techniques are expected to become more prevalent in the near future .all of the above examples are using classical optics .however , light is not classical . from a fundamental physics point of view, optical information processing must be extended to take quantum effects into account .these effects do not just add noise to existing techniques , but enable dramatic improvements in information processing with light , from communication , metrology and imaging to full - scale quantum computing . in this review, i will sketch the physical principles and phenomena that lie at the heart of optical quantum information processing .i place special emphasis on the quantum mechanical properties of light that make it different from classical light , and how these properties relate to information processing tasks . in section [ sec : transition ] i introduce the concepts of coherence , anti - bunching and the hong - ou - mandel effect , and in section [ sec : photons ] i give a brief introduction how photons as quantum information carriers are described mathematically . section [ sec : comm ] is devoted to quantum communication , covering the no - cloning theorem , quantum key distribution , teleportation and repeaters . 
in section [ sec : metro ] i introduce the idea of precision metrology with classical and quantum light and show how similar techniques can be used for high resolution imaging .section [ sec : comp ] starts with a discussion about optical entanglement and introduces the famous _ klm protocol _ for quantum computing .i also sketch the most recent ideas for creating a quantum computer architecture based on linear optics .finally , in section [ sec : future ] i give an overview of the practical challenges that still remain in implementing the ideas presented in this review .as we continue to improve the information processing capabilities of our computer processors and networks , everything is being made smaller and more efficient . in miniaturizing and squeezing all the information out of every last bit of light, we will be coming up against a fundamental limit of nature , namely the fact that light comes in discrete quanta called _photons_. entirely new laws of physics come into play that have a profound effect on our capabilities for information processing . before we can explore these new capabilities , however , we need to establish in some more detail what makes quantum light different from classical light . to this end we will briefly describe the idea of coherence , anti - bunching , and two - photon interference .classical light exists in roughly two categories , namely _ thermal _ and _ coherent _ light .thermal light is the type that is emitted by sources like stars , light bulbs and leds , while coherent light is typically associated with the output of a laser .the two types should not be seen as completely distinct , but rather as the extremes of a whole spectrum that spans from coherent , via partially coherent , to thermal ( or incoherent ) light .a key concept in this regard is the _ coherence length _ of a beam of light . and point ) , as well as long times ( between point and point ) . alternatively , if the frequency fluctuates over time ( right figure ) , there will still be an appreciable phase correlation at short times ( between point and point ) , but at long times the phase relationship fluctuates ( between point and point ) .the characteristic time at which the coherence between the phases drops below a certain level is called the coherence time .sometimes this is also referred to as the coherence length.,title="fig:",height=124 ] the coherence length is most easily explained by first looking at the coherence _ time_.
consider a wave of constant frequency , as shown in figure [ fig : coherence ] on the left .the phase of the wave at time will be given by .we can calculate the difference between the phases at difference times .for example , is the difference in the phase of the wave for the time interval .since the frequency of the wave is constant over time , the phase difference between two points in time separated by will be constant as well .in addition , given a third phase at time much later than we have again a constant phase difference for a time interval : next , consider a wave whose frequency fluctuates over time , as shown in figure [ fig : coherence ] on the right ( the effect is quite subtle ) . if the time interval is short compared to the ( inverse ) rate of change of , the difference between the phases at and will still be nearly constant .however , for longer times between two points on the wave this will no longer be the case .the fluctuations in the frequency will cause fluctuations in the phase relationship between and , and we have where is a strongly fluctuating function over time taking values in the entire interval .there will be a characteristic time where the behaviour of the phase difference changes from nearly constant to strongly fluctuating .this is the coherence time of the wave .multiplying the coherence time by the velocity of the wave in the medium gives us the coherence length . a very similar argumentcan be made for the phase coherence of two emitters a distance apart , leading to the concept of _ transverse _ coherence length .the coherence length of light plays a fundamental role in classical interference .coherent sources interfere , while incoherent sources do not .these properties carry over to the quantum mechanical treatment of light , and we will see that coherence is a crucial property for optical quantum information tasks .classical waves can have any amplitude , no matter how small .however , this is not the case for quantum mechanical light .when the power of a light source is reduced , at some point a detector will no longer record a constant signal , but rather we find that the light arrives in bursts . these bursts are called photons .we can easily imagine how such bursts come about : the atom that emits the photon does so by having one of its electrons drop down to a lower energy state ( the difference in energy escapes in the form of our photon ) .creating a second burst of light requires the electron to be loaded back up into the excited state , which takes time .this leads to the phenomenon of _ anti - bunching _ : it is relatively unlikely that two photons are detected in much shorter succession than the time it takes the electron to reoccupy the excited state . to make this description a bit more mathematically precise, we can consider the probability distribution of the number of photons that are recorded in a time interval .if the photons arrive completely randomly , this distribution will be poissonian : where is the number of photons detected in the interval and is the ( dimensionless ) intensity of the light , which corresponds to the average number of photons in .when a source exhibits anti - bunching at a time scale , the probabilities of finding two , three , four , etc . , photons will be suppressed compared to the poisson distribution in eq ( [ eq : poisson ] ) . .we vary the path length of one of the detectors . for a poissonian distribution ( blue curve )the function remains constant at 1 , while at very short times ( i.e. 
, equal distances of the detectors to the beam splitter ) the probability of getting detector coincidences drops to zero for anti - bunched light ( green curve ) .note that thermal light is bunched ( orange curve ) , in the sense that it is _ more _ likely that two photons are emitted at the same time.,title="fig:",height=154 ] experimentally , we can demonstrate this by putting a beam splitter in the light beam and counting the number of coincidences in the two photodetectors , as shown in figure [ fig : hbt ] .the number operator for detector 1 is given by and that of detector 2 by .the average photon number in detector at time is then calculated via the quantum mechanical expectation value .the second - order correlation is measured by the function defined according to which does not depend on for stationary processes , such as the situation we consider here .a typical function is shown in figure [ fig : hbt ] . for long timescales the intensities for anti - bunched light in both detectors become independent , and the function for anti - bunched light ( the green curve ) tends towards that of poissonian light ( the blue curve ) . for completeness we also plotted the function for thermal light ( the orange curve ) , which shows _ bunching _ : it is more likely that the two detectors fire in unison at very short timescales compared to poissonian light .this is in fact also a quantum mechanical effect : a thermal `` gas '' of photons obeys the bose - einstein distribution , rather than the classical maxwell - boltzmann distribution .the question is now what makes light quintessentially quantum - mechanical , and which features of light are well - explained by the classical theory .maxwell s equations do not provide a description of quantised energy , but they do very accurately describe the shapes of the wave packets . in particular , the coherence lengths ( transverse and longitudinal ) that are determined by the classical theory determine the interference properties of photons . to illustrate this , we consider young s double - slit experiment with photons .suppose that a source of single photons illuminates a screen with two narrow slits of width placed at a distance apart .the classical theory predicts that the intensity pattern at a screen a distance from the slits is given by where is the position along the screen and is the wavelength of the light .when the source emits single photons one at a time , after collecting many photons the average intensity on the screen will be exactly the same .therefore the spatial behaviour of single photons is identical to that of classical waves .however , this presupposes that we do not know which slit the photon travels through .this is equivalent to saying that the amplitudes of the photon wave packet at the left slit and the right slit add coherently .
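the fringe pattern referred to above is the standard fraunhofer two - slit result ; the short sketch below evaluates it and accumulates simulated single - photon detections from it , with all numerical values and names chosen purely for illustration .

```python
import numpy as np

def two_slit_intensity(x, wavelength, slit_distance, slit_width, screen_distance):
    """Fraunhofer intensity of a double slit, normalised to 1 at x = 0.

    Standard textbook form: cos^2 fringes under a single-slit sinc^2 envelope.
    The exact normalisation used in the text may differ.
    """
    u = x / (wavelength * screen_distance)
    fringes = np.cos(np.pi * slit_distance * u) ** 2
    envelope = np.sinc(slit_width * u) ** 2  # np.sinc(t) = sin(pi t)/(pi t)
    return fringes * envelope

# accumulate single-photon detections: each photon lands at x with probability ~ I(x)
rng = np.random.default_rng(1)
x = np.linspace(-5e-3, 5e-3, 2001)  # screen coordinate in metres
I = two_slit_intensity(x, wavelength=633e-9, slit_distance=50e-6,
                       slit_width=10e-6, screen_distance=1.0)
hits = rng.choice(x, size=20000, p=I / I.sum())
counts, edges = np.histogram(hits, bins=100)
# after many photons the histogram of counts reproduces the classical pattern I(x)
```

a run with only a few photons shows scattered counts ; the classical fringes emerge only in the accumulated histogram , which is precisely the point made above .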
in other words ,the transverse coherence of the photon wave packet must be larger than the distance between the slits .next , we consider the quantum mechanical description of light .classically , we can write the electric field as a solution to the wave equation : where is the complex amplitude of a wave in mode ( commonly referred to as the wave vector , but in principle we can have more exotic labelings ) , and the sum over indicates that the waves can be superposed .the functions in eq .( [ eq : fieldop ] ) are the so - called _ mode functions _ , and they form a complete set of solutions to the classical maxwell equations . often a plane wave expansion for is given , in which and the sum in eq .( [ eq : fieldop ] ) becomes an integral over with the frequency of the wave vector .the vector determines the polarisation of the mode .other expansions are also possible , and may be more convenient depending on the application ( for example , a plane wave expansion is not very suited to describe a wave in an optical fibre ) . in the full quantum mechanical description of optics ,the electric field becomes an operator and can be written as : where and are the annihilation and creation operator for the optical mode indicated by .these operators replace the complex amplitudes in the classical theory and obey the commutation relations = \left [ \hat{a } _ { \mathbf{k}}^\dagger , \hat{a } _ { \mathbf{k}'}^\dagger \right ] = 0 \qquad\text{and}\qquad \left [ \hat{a } _ { \mathbf{k } } , \hat{a } _ { \mathbf{k}'}^\dagger \right ] = \delta_{\mathbf{k}\ , \mathbf{k}'}\ , , \end{aligned}\ ] ] where is the kronecker delta symbol , in which case the sum over becomes an integral and the kronecker delta becomes a dirac delta . ] .the creation and annihilation operators act on photon number states according to the rules the number operator for mode is then given by .the algebra of these operators is identical to that of the simple harmonic oscillator . creating a photon in mode that the photon will behave according to the spatio - temporal description provided by .since is determined by maxwell s equations , we can say that the classical maxwell equations are the equations of motion for the photon .the quantum mechanical behaviour of light is restricted to the photon _statistics_. while the mode shapes of propagating photons are determined by the classical theory of electrodynamics , the quantum behaviour of light is most apparent in multi - photon effects . for the purposes of quantum information processing , the most important example ofthis is the hong - ou - mandel effect , which is a two - photon intensity interference effect .specifically , the hong - ou - mandel effect occurs when two photons with identical frequency , polarisation and shape of the wave packet enter a 50:50 beam splitter on either side .if we place detectors in the two outgoing modes of the beam splitter , each photon has two ways of triggering the detectors .either the photon triggers the upper detector , or it triggers the lower detector .the resulting four possible paths for the two photons are shown in figure [ fig : hom ] . in ( a ) the top input photonis transmitted while the bottom photon is reflected . in that case, both photons end up in the bottom detector . and vice versa, both photons may end up in the top detector ( d ) .whenever a photon is reflected off the topside of the beam splitter it experiences a phase shift , which results in a factor in the state of the photon . 
phase shift upon reflection of the top surface , the case where both photons are reflected has a relative minus sign compared to the case where both photons are transmitted . when the input photons are identical , we can not distinguish between the two middle outcomes , and they cancel .so the identical photons will always pair off towards the same output beam , and never leave the beam splitter in different beams.,width=415 ] an interesting effect happens when we consider situations ( b ) and ( c ) in figure [ fig : hom ] . in case ( b ) , both photons are reflected by the beam splitter , while in case ( c ) , both photons are transmitted .since the photons are identical , the two processes ( b ) and ( c ) are _ indistinguishable _ from each other .no physical process can tell whether the photons were reflected or transmitted , and the beam splitter itself holds no memory of the process . according to feynman , this means that the two contributions must be superposed coherently and are allowed to interfere .however , since the top photon in process ( b ) picks up a phase factor , the two states must be subtracted .this leads to destructive interference , and as a result the two identical photons will _ never _ end up in separate detectors .this effect was first observed by hong , ou , and mandel in 1987 .the effect lies at the heart of the protocols that enable quantum computing with single photons and linear optics .a recent experimental realisation of the hong - ou - mandel effect is shown in figure [ fig : homexp ] .the depth of the dip indicates the level of indistinguishability between the two photons .together with a single - mode measurement to verify the presence of only a single photon , the hong - ou - mandel dip gives a good indication for the quality of single - photon sources .as we encounter the quantum limits of light , we may ask what we can do in terms of information processing if we embrace this natural behaviour .imagine that we send classical bits using the polarisation degree of freedom .in other words , an optical pulse of horizontally polarised light ( ) is defined as the bit value 0 , and a vertically polarised pulse ( ) is the bit value 1 . at the single photon levelthe polarisation is still a well - defined physical property , since it is determined by the mode functions ( and therefore obey maxwell s equations ) .the polarised photons and their bit values can be described quantum mechanically with the quantum states a fundamental property of classical light is that two pulses can be prepared in superposition . for our example , this means that we can make a superposition of vertical and horizontal light .the result is a new pulse with a polarisation in a different direction depending on the relative phase between the and pulses .the new polarisation can be linear , circular , or elliptical .this property carries over to photons . 
for example ,left- and right - handed circularly polarised photons have quantum states and , respectively : if we treat the horizontal and vertical polarisation of a photon as bit values 0 and 1 , we see that we now have two _ new _ bit values that are superpositions of and : this is not possible for a classical bit , and we therefore call the polarised photon a quantum bit , or_ qubit _ for short .every classical polarisation has a corresponding qubit when we bring the optical pulse down to a single photon .this requires that the two pulses have a well - defined phase relationship , and are therefore _ coherent _ in the sense of the discussion in section [ sub : coherence ] .the extra states in a qubit over a classical bit suggest that information processing with qubits can be more powerful than information processing with classical bits , because loosely speaking more available states means more room for the information to play in . instead of polarisation, we can use other degrees of freedom of light , but for simplicity we will restrict our discussion to polarised photons in this article . since the qubit structureis directly inherited from the classical superposition principle , the next question is : what makes the photonic qubit a fundamentally quantum mechanical object ?the answer is given by anti - bunching .there is only one indivisible photon ( see figure [ fig : hbt ] ) that triggers either detector 1 or detector 2 .classically , both detectors could register a non - zero signal simultaneously .the fact that this is not possible for single photons means that the photon ends up in either one or the other detector in a _probabilistic _ manner .if we wish to measure the polarisation of the photon , we must first choose which _ basis _ we want to measure ( and , or and ) .if we measure the photon in the state in the basis , we will obtain the outcome `` h '' or `` v '' with 50:50 probability .the superposition principle together with the concept of the photon as a particle further leads to the phenomenon of _ entanglement_. two photons can be prepared in the state where and are short - hand for the polarisation states of the two photons .it is easy to see that the state in eq .( [ eq : phiplus ] ) can not be written as the product of two separate photon states : where and are complex numbers obeying and .the effect of entanglement is that the two photons are more strongly correlated than is possible classically : \cr & = \frac{\ket{lr } + \ket{rl}}{\sqrt{2}}\ , .\end{aligned}\ ] ] there is not only a correlation in and , but also in and .this is not possible in classical systems , and these stronger quantum correlations can be utilised in information processing . for future use , we define a basis of four entangled states , called the _ bell states _ : a measurement in this basis is called a _ bell measurement _ , and plays a crucial role in quantum information processing . now that we have a photonic qubit , what exactly can we do with it ?we would expect that all the classical information processing tasks with light will in some way carry over to quantum light with some enhancements due to qubit superpositions and entanglement .indeed , we can construct new communication protocols and create a quantum internet , we can use photons as measurement probes to achieve a much higher precision in parameter estimation and imaging , and we can use photons as the fundamental information carriers in quantum computing .however , photons are not equally good at all these things . 
while they are clearly very good data carriers over long distances , it is rather hard to slow them down significantly or even stop them completely .typical classical and quantum information processing tasks require feed - forward operations in which the state of a qubit is modified in a way that depends on the measurement outcome of another process .if the photon flies away at the speed of light while the measurement is being made , we can not perform feed - forward because we can not catch up with the photon .we therefore need to store the photon in some kind of photon memory .this is a complicated process that will likely introduce a lot of noise .another complication is that while the polarisation of a single photon is very easy to manipulate using half wave plates and quarter wave plates , the creation of entanglement between two photons is extremely hard .this is due to the complete absence of direct photon - photon interactions .an operation that entangles two photons must therefore be an inherently nonlinear process , either involving nonlinear materials or a clever arrangement of projective measurements . in this reviewwe will consider the latter .classical light makes for an excellent information carrier over long distances .this is also true for quantum light .moreover , we can use the quantum mechanical properties of photons to accomplish new communication tasks that are more difficult or impossible with classical light . as an example, we will consider secure communication using quantum key distribution , and explore what requirements are necessary to extend these techniques beyond a few hundred kilometres .consider a photon with horizontal and vertical polarisation states and , respectively .as we have seen , we can make quantum superpositions of these states to obtain different polarisation states .the no - cloning theorem says that it is impossible to create a machine that copies an unknown quantum state perfectly . to see this ,suppose that we have a photon in some unknown polarisation state and a second photon in a known initial polarisation state , e.g. , .a proper copying machine would have to produce the following effect on _ any _ state : in particular , the machine must act on the states and according to this completely determines how the copying machine will handle the unknown polarisation state , because any polarisation state can be written as a quantum superposition of and .for example , suppose that the state is in fact the left - handed circularly polarised state .then and the copying machine will produce however , this is _ not _ the same as , as you can tell when we write it out in the basis : this means that a copying machine that works for and will not faithfully copy and , and vice versa .practically , this means that the badly copied photon behaves differently from a photon in the original state .the no - cloning theorem is a fundamental result in quantum mechanics and applies to all physical systems .next , consider the situation in which two agents , alice and bob , wish to communicate in private .one way they can accomplish this if they share a secret string of random zeros and ones called a _key _ : alice adds this string to her binary message , creating the encrypted message .bob decrypts the message by subtracting the key from the encrypted message .since no - one else has the secret key , nobody can decrypt the message but alice and bob .the question is how to generate such a secret key . 
sending the secret key overa public channel will invite eavesdroppers to copy it and gain access to the private message between alice and bob .if alice and bob can detect the eavesdropper , they know the channel is compromised and move to a different channel .this is what the no - cloning theorem allows them to do : let s suppose that and denote a bit value of zero , and and denote a bit value of one .alice sends a random string of photons in polarisation states , , , and .bob measures randomly in the basis or the basis .about half the time alice will have created a photon in the same basis as bob s measurement , and in those cases both alice and bob will know whether they shared a zero or a one .in the rest of the cases there is no correlation between the bit value sent by alice and the bit value measured by bob .alice and bob then publicly compare their preparation and measurement bases ( or ) , and keep only those bits for which the preparation and measurement bases coincide .note that they do not reveal the actual bit values , only the bases . to see whether there is an eavesdropper on the line , alice and bob sacrifice a small part of their secret key .they publicly compare this part of the key and see if the bit values match up .if they do , there was no eavesdropper , but if there is a sizeable amount of errors there may have been an eavesdropper .anyone trying to copy the secret key as it was being established must copy the information of the photon polarisation .however , since the photons were sent in two different bases ( unknown to anyone but alice ) , a copying machine that works perfectly in the basis will create imperfect copies and cause incorrect measurement results for bob .these incorrect measurement results will show up when alice and bob compare part of their secret key .the secrecy of the key is therefore guaranteed by the no - cloning theorem .the comparison between alice and bob of a fraction of their secret key is what guarantees the privacy of the key .there is a trade - off between the amount of information eve can gain , and the level of privacy attained by alice and bob .when this protocol is implemented with real devices , additional noise will appear in the system , and alice and bob must be able to account for that also . to this end , they can further sacrifice part of the key to increase their privacy .this is called _ privacy amplification _ , and is a crucial part of any practical implementation of quantum key distribution .the general trade - off in the communication between alice and bob is then privacy versus bit rate .a final observation about this quantum key distribution protocol is that it relies critically on the quantum mechanical possibility of anti - bunching of light .if light did not come in discrete packages ( photons ) , the eavesdropper could siphon off a small part of the signal with a beam splitter and measure the polarisation of this very weak field .the fact that there is only one photon in each successive pulse from alice means that it either shows up in bob s detector , or it arrives in the eavesdropper s detector , in which case bob will register a failed transmission . 
in either casethat photon will not be used for the secret key and is useless to the eavesdropper .if there was a possibility of more than one photon in each pulse ( a poissonian or thermal distribution ) , the eavesdropper could measure one photon while an identical photon makes its way to bob .note that this does not violate the no - cloning theorem , since alice can create however many photons she chooses in a state of her choice .when a photon travels in an optical fibre , it has a certain probability of being scattered by impurities in the fibre . in this casethe photon does not make it to the end of the fibre , and we call this _ photon loss_.a fibre can be characterised by an attenuation length at which the original signal is reduced by a factor .the attenuation for a fibre of length is then given by .this is an exponential decay , which means that we can not lay arbitrarily long fibre - optic cables and still expect a sizeable bit rate from end to end . in practice , the cable length can be a few hundred kilometres at most .if we want to extend the reach of quantum communication protocols , we have to add some active devices in the communications channel .classically , this is accomplished by repeater stations , which amplify the signal and transmit it to the next repeater station . however , amplification is a form of copying , and we have just seen that the no - cloning theorem prevents such devices from working properly on general qubit states . at firstthis looks like quantum communication will remain viable only for short distances .however , another fundamental protocol in quantum mechanics comes to the rescue here , namely quantum _ teleportation_ . in quantum teleportation , shown schematically in figure [ fig : teleportation ], alice wishes to send an arbitrary quantum state of a photon to bob . rather than sending the state directly ( which would be subject to photon loss ) , they first establish entangled photons pairs between each other .we denote the arbitrary quantum state by and the shared entanglement is in the bell state , where the labels 1 , 2 , and 3 indicate the photons .photons 1 and 2 are held by alice , and photon 3 is held by bob .the total state of the three photons is then given by where and are complex numbers obeying .we can write this as we suppressed the photon labels for brevity .next , alice performs a bell measurement , in which her two photons are projected onto the bell states . writing the states , , , and in the bell basis and rearranging the terms , the state just before the bell measurement is given by .\end{aligned}\ ] ] the outcomes of alice s bell measurement indicate which state bob s photon is in : bob does not know which of these states his photon is in until he receives a message from alice telling him her measurement outcome . after receiving the message , bob can apply a corrective operation ( using half wave plates and quarter wave plates ) to bring the quantum state of his photon back to the original .this completes the teleportation protocol .quantum teleportation was demonstrated in 1997 and 1998 in various optical implementations , and to various degrees of completeness . a quantum repeater based on quantum teleportation works as follows ( see figure [ fig : repeater ] ) : alice needs to send a polarised photon to bob , who is too far away to send directly .instead , she sends it to a repeater station somewhere between her and bob . 
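to put numbers on the photon loss that makes such repeater stations necessary, here is a quick look at the exponential attenuation mentioned above; the loss figure of 0.2 db/km (a typical telecom value) and the source rate are illustrative assumptions, not values from the text:

```python
import numpy as np

# illustrative fibre loss of 0.2 db/km (typical telecom value, assumed)
loss_db_per_km = 0.2
L_att = 10.0 / (loss_db_per_km * np.log(10))      # 1/e attenuation length, ~22 km

source_rate = 1e9                                  # photons per second from alice (assumed)
for length_km in [50, 100, 200, 500, 1000]:
    transmission = np.exp(-length_km / L_att)
    print(f"{length_km:5d} km : transmission {transmission:.2e}, "
          f"detected rate ~ {source_rate * transmission:.2e} photons/s")
# beyond a few hundred km the rate collapses exponentially, which is why the
# repeater station introduced above is needed in the first place.
```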
for now , let s assume that this station is close enough to alice .the repeater station establishes shared entangled pairs with bob , or another repeater ( more on that later ) .this allows the repeater station to receive the incoming photon from alice and teleport it to bob using the shared entanglement ( the `` swap '' gate in figure [ fig : repeater ] ) . for this to work, the bell measurement must reveal whether the photon sent by alice made it to the repeater station , and the entanglement between the repeater station and bob must be ( near ) perfect .the repeater station then informs bob what the corrective operation on his part of the entangled pair must be , and after making this correction bob can measure his photon in the basis of his choice .alice and bob can now be far away from each other and still establish a secret key with a sufficiently high bit rate . in the description above we have , of course , cheated !we magically assumed that the repeater station and bob share near - perfect entangled photon pairs .this is far from trivial to establish .the photons must be created together , and at least one of them must travel the distance between the repeater station and bob .that photon will inevitably incur losses of the same magnitude as the photon sent by alice .the repeater station and bob can share several pairs and try to find out which of the photons made it through .this is difficult , because it requires detecting a photon without destroying it ( after all , we still need to use it in the teleportation protocol ) .alternatively , we can perform entanglement distillation , which takes several imperfect entangled pairs and extracts one perfectly entangled pair .this is not easy either , because it requires entangling gates between photons . finally ,while all this processing is going on , the photons do nt just sit there in the repeater station .they need to be actively stored in either an optical delay line or a quantum memory .if the distillation protocol requires communication between the repeater station and bob , the length of time for which the photon needs to be stored is comparable to the time it takes light to travel from bob to the repeater station .any delay line memory would then incur the same amount of photon loss as the channel between the repeater station and bob , and we re back to square one .there are several architectures that attempt to circumvent these various difficulties , and one particular question of interest is what are the minimal requirements for a repeater to work ? does it need two - way communication between stations or can we construct a protocol that requires only one - way communication as shown in figure [ fig : repeater ] ? does the repeater require memories that last as long ( or longer ) than the flight time of the photons between repeater stations ? a lot of progress is being made on these questions , and this is currently an active area of research . finally , we note that while anti - bunching and the no - cloning theorem are sufficient for the design of quantum key distribution , the implementation of this protocol over long distances will most likely require the use of polarisation entanglement .this lifts the construction of repeaters into a new realm of difficulty over direct quantum communication .light is also extremely useful in metrological applications . 
as a simple example , consider the measurement of the thickness of a thin piece of foil .a traditional mechanical micrometer has a precision of about 0.05 mm , which is not good enough to measure foils that are much thinner . to obtain the required precision, we can use an optical interferometric micrometer shown in figure [ fig : umm ] , the principle of which is identical to that of newton s rings .the foil is placed at the end of a mirror and holds up a plate of glass .the reflected light will consist of two contributions , namely the light that is reflected off the inside surface of the glass plate , and the light that is reflected off the mirror .whether constructive or destructive interference occurs at the outgoing light depends on the path difference between the two contributions . in figure[ fig : umm ] , we show this effect at three different positions along the mirror . of a piece of foil by counting the number of interference fringes along a distance as seen from above .the precision in the measurement of can be estimated as , where is the wavelength of the light .this is much more precise than a mechanical micrometer , which has a precision of about 0.05 mm.,width=340 ] we find destructive interference between the two reflected light waves when the path length is a half - integer multiple of the wavelength : where is an integer , is the angle between the glass plate and the mirror , is the distance between the pivot point and the foil , and is the distance from the pivot point to the dark fringe .adjacent fringes are a distance apart , with where is the number of fringes per unit length .if we know and , we can count the fringes to obtain , which in turn gives us a value for the foil thickness .the precision of this measurement can be estimated by noting that the number of fringes per unit length can be counted very accurately , as long as there is at least one fringe across the length , or . using the error propagation formula we calculate from this estimate as follows : we see that this device can reach a precision of a few hundred nanometers , which is two to three orders of magnitude better than the mechanical micrometer .the above example is using only classical light .can we improve on this technique by using quantum light ? 
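before turning to that question, the classical estimate above can be illustrated with numbers. since the explicit formulas are not reproduced here, the wedge geometry below is a reconstruction, and the wavelength, wedge length and foil thickness are assumed values:

```python
import numpy as np

lam = 633e-9      # wavelength of the light, helium-neon value assumed
D   = 0.05        # distance from the pivot line of the glass plate to the foil, 5 cm assumed
t   = 25e-6       # true foil thickness, 25 micrometres assumed

# the air gap between glass plate and mirror grows linearly along the wedge,
# so dark fringes repeat every time the round-trip path grows by one wavelength
fringe_spacing = lam * D / (2 * t)       # distance between adjacent dark fringes
n = 1.0 / fringe_spacing                 # fringes per unit length
t_recovered = n * lam * D / 2            # inverting the relation gives back the thickness

# if the fringe count over the length D is uncertain by about one fringe,
# error propagation gives a thickness uncertainty of roughly lambda/2
delta_t = lam / 2

print(f"fringes per mm       : {n * 1e-3:.2f}")
print(f"recovered thickness  : {t_recovered * 1e6:.1f} um")
print(f"estimated precision  : {delta_t * 1e9:.0f} nm (mechanical micrometer: ~50000 nm)")
```

the estimated precision of a few hundred nanometres agrees with the statement above, and sets the classical baseline against which the quantum strategies are compared next.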
in order to answer this question we will have to go back to what makes quantum light different from classical light .we can in principle improve the precision of the optical micrometer by using a lot of light : if we measure the intensity along the -direction with very high precision we can detect any variation in intensity , even if that amounts to much fewer than one fringe per length .there is a catch , however , since the intensity itself is noisy .a ( transversely ) coherent state of light with on average photons has intensity fluctuations that are proportional to .the signal to noise ratio ( snr ) is then .the more photons , the higher the snr , and the higher the snr , the more precisely the intensity curve in the -direction can be measured ( inversely proportional to the snr ) .the number of photons is therefore a _ resource _ for measuring the foil thickness : the more you have of them , the better you can estimate .the precision of will scale according to this is called the _ shot noise limit _ , or the _ standard quantum limit _ ( sql ) .it originates in the natural intensity fluctuations of light .the behaviour is specific to the type of light , which in this case is classical coherent light .we therefore sometimes refer to this precision as the classical precision limit . if there is a way to suppress these fluctuations in the intensity we may be able to increase the precision . how would this work ? to answer this , note that the photons arrive in the detector completely independently of each other , which means that they will cluster randomly at each pixel in the detector according to the poisson distribution in eq .( [ eq : poisson ] ) . to remove this randomness we need a `` conspiracy '' between the photons in the form of transverse anti - bunching ( e.g. , see figure [ fig : hbt ] , where may now denote the distance between the detected positions ) .if the photons arrive nicely spaced out , the intensity fluctuations at each pixel will be suppressed .the periodic structure of the fringes will then become clear much quicker , and an accurate count of the fringes can be performed with fewer photons . however , there is a limit to the precision gain that can be obtained this way .no matter how evenly spaced , we still need an appreciable number of photons to reveal the fringes . the ultimate precision in be calculated as this is called the _ heisenberg limit _ . since reaching this limit requires anti - bunching ( which in this case will require some form of entanglement between the photons ) this is a truly quantum mechanical precision scaling without a classical implementation .more generally , the ultimate precision allowed by quantum mechanics of the measurement of a parameter is given by the expectation value of the operator that drives the changes in that parameter .these operators are called _generators_. for example , the generator for changes in time is the hamiltonian , the generator for translations in space is the momentum operator , and the generator for phase changes is the number operator .the unitary evolution that imparts the parameter onto the quantum state is given by .the ultimate precision is then written as a _ lower bound _ on the root mean square error of : for optimal states , the quantum mechanical operator variance is bounded by the ( squared ) expectation value , and is the proper definition of the resource ( e.g. 
, the amount ) that allows us to increase the precision of the measurement of .the expectation value is often much easier to estimate than the variance . from eq .( [ eq : hl ] ) it is clear why this limit is called the heisenberg limit : if is the position of a particle and is its momentum , then the inequality becomes which you will recognise as heisenberg s uncertainty relation for position and momentum .( [ eq : hl ] ) is more general than the traditional heisenberg - robertson relation that is derived only for ( non - commuting ) observables , since it is valid also for physical quantities that do not have an associated quantum operator , such as time , phase and rotation angle .it is often argued that entanglement is a prerequisite for reaching the heisenberg limit . while this is certainly true in the context of estimation procedures involving many distinguishable particles ,the situation in optics where photons may be indistinguishable is a little more subtle . since the heisenberg limit in eq .( [ eq : hl ] ) depends on , we can in principle construct a quantum state on a single optical mode that maximises .for example , if we wish to measure an optical phase , the relevant generator is the number operator .the state with a maximal variance ( and bounded maximum number states ) is given by with the state of no photons , and the state of photons in the optical mode .a phase shift on the optical mode then leads to the transformation measuring the observable will then give a precision there is no entanglement in a single optical mode , but we still attain the heisenberg limit . in any real estimation procedure , however , the use of entanglement can help overcome practical difficulties such as creating the superposition in eq .( [ eq:0n ] ) or implementing the observable .for example , instead of the state in eq .( [ eq:0n ] ) , we may want to create the so - called noon state which is a two - mode state in which all the photons are in one mode ( but it is undetermined which mode ) .a phase shift in one of the modes will then induce the same relative phase factor as in eq .( [ eq:0n ] ) .however , since this is not a superposition of different photon numbers but rather a superposition of the distribution of photons , it is conceptually easier to see how this can be made in practice .still , creating noon states is extraordinarily difficult , and they are extremely sensitive to decoherence . more promising is the use of _ sqeezed _ light .this is another type of quantum mechanical light that has no classical analog . instead of considering the photon number , which are the eigenvalues of the operator , we may look at the _ quadrature _ operators the commutation relation between and is = i$ ] , which means that they obey the uncertainty relation to get an idea what these operators mean , remember that the creation and annihilation operators are mathematically identical to the ladder operators of the simple harmonic oscillator . by analogy , and as the position and momentum of the simple harmonic oscillator .measuring would then be equivalent to measuring the amplitude of the oscillator , while measuring would be equivalent to measuring the momentum of the oscillator . 
in the language of wavesthese are called quadratures .a classical coherent state of light is a minimum uncertainty state , in the sense that eq .( [ eq : quad ] ) becomes an inequality .not only that , the two variances are also equal : quantum mechanically , we can reduce the variance in one quadrature at the expense of the other while still obeying eq .( [ eq : quad ] ) .we can write this as where is called the squeezing parameter . using these types of light allows us to achieve a measurement precision in of where is the average number of photons in the light probe , and is the number of times the experiment is repeated .the advantage of this approach is that it does not require too exotic quantum states such as the noon state , and the measurement can be achieved by ordinary homodyne detection .this method is proposed as part of advanced ligo for the measurements of gravitational waves finally , the same transverse anti - bunching effect that was used to increase the precision of the optical micrometer can be employed to improve the resolution in imaging .suppose that we wish to image an object that we know is circular ( for example a star or a small aperture ) .we can use a telescope or a microscope and obtain an image like the one shown in figure [ fig : aperture ] .we may infer the radius ( or , more precisely , the opening angle ) of the sources , as well as the position of the source , by matching the intensity of the light in the imaging plane with a theoretical model of the image ( related to the fourier transform of the source geometry ) . given a perfectly smooth intensity profile in the imaging plane we can find the position and radius with arbitrarily high precision .however , as before the intensity of the light in the image plane fluctuates , and this will create a degree of uncertainty in the fitting of the theoretical curve to the data .if the fluctuations can be suppressed via transverse anti - bunching , the precision of the estimates of the position and radius can achieve the heisenberg limit .the last , and arguably most challenging information processing task with single photon qubits is quantum computing .quantum computers have very stringent noise requirements , which linear optics can in principle meet .the major challenge , however , is in the generation of entanglement . to make matters worse ,not all entanglement in created equal .one of the key ingredients in quantum information processing is quantum entanglement .for example , in section [ sec : comm ] we used the maximally entangled bell states as resources for quantum teleportation . for quantum computing , entanglement is also a key resource .no exponential speed - up can be achieved without it .however , when we deal with photons as our information carriers , we must distinguish between different types of entanglement .consider a single photon impinging on a 50:50 beam splitter .there are two input and two output modes for a simple beam splitter , and we can write the mode transformations as where and are the annihilation operators for the input modes , and and are the annihilation operators for the output modes . a single photon entering the beam splitter in mode 1 can then be written as a quantum state transformation this state is entangled .it can in principle be used to violate a bell inequality ( even though it would be difficult to implement in practice ) .the entanglement is between the spatial degree of freedom ( mode 1 or 2 ) , and the photon number degree of freedom ( 0 or 1 photons ) . 
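the claim that this beam-splitter output is entangled can be verified directly by tracing out one of the spatial modes. a minimal sketch with each mode truncated to zero or one photon (the relative phase of the 50:50 convention is an arbitrary choice and does not affect the result):

```python
import numpy as np

# two optical modes, each truncated to photon numbers {0, 1};
# basis ordering |n1, n2> -> index 2*n1 + n2
def ket(n1, n2):
    v = np.zeros(4, dtype=complex)
    v[2 * n1 + n2] = 1.0
    return v

# single photon entering mode 1 of a 50:50 beam splitter
psi = (ket(1, 0) + ket(0, 1)) / np.sqrt(2)

rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices n1, n2, n1', n2'
rho_mode1 = np.trace(rho, axis1=1, axis2=3)           # partial trace over mode 2

evals = np.linalg.eigvalsh(rho_mode1)
evals = evals[evals > 1e-12]
entropy = -np.sum(evals * np.log2(evals))
print("reduced state of mode 1:\n", np.round(rho_mode1.real, 3))
print(f"entanglement entropy : {entropy:.3f} bit")
# one full bit of entropy: the spatial mode and the photon number are
# maximally entangled, exactly as stated above.
```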
in general , quantum states that are not thermal or classical coherent states become entangled when they interact with beam splitters .unfortunately , this type of entanglement is of limited use for quantum computation . to see this , consider a bell state required for quantum computing : this is a state of two photons with polarisation degree of freedom ( and ) in two spatial modes ( 1 and 2 ) .we can write this in terms of creation operators acting on the vacuum as suppose that we wish to create this state from the separable input state .the mode transformation that must be implemented is then linear optics is _ linear _ in the mode transformations , which means that each input mode operator transforms into a linear combination of the output mode operators . in other words , where are the elements of a unitary matrix .each mode operator is replaced with a sum over mode operators . however , the substitution rule of eq .( [ eq : sep ] ) applied to the left - hand side of eq .( [ eq : insep ] ) can never produce the right - hand side of eq .( [ eq : insep ] ) because the left - hand side is separable into a product of two mode operators , whereas the right - hand side is not .therefore , linear optics alone can not be used to create the necessary entanglement for quantum computing .one potential way around this problem is to use an induced photon - photon interaction , for example using a kerr nonlinearity .such a nonlinearity imparts a phase shift on one optical mode that is proportional to the intensity in another mode . at the single - photon levelthis can act as a coherent switch .unfortunately , kerr nonlinearities are inherently noisy and can not be used for single - photon quantum gates .the question is then whether we can use photonic qubits for quantum computing .the problem was solved in 2000 by knill , laflamme and milburn , in what was to become one of the classic papers in quantum information processing . instead of a medium - induced photon - photon interaction , the required nonlinearity of the mode transformationsis provided by projective measurements .in addition to the photons that are part of a computation , we may send extra _photons through a linear optical network of beam splitters and phase shifters .this gives us the freedom to detect photons in very specific output modes , as shown in the example of the nonlinear phase shift circuit in figure [ fig : ns ] , also known as the ns gate .since the number of added ancilla photons is the same as the number of detected photons , the photon number in the input mode does not change once it has passed through the network .phase shift , only the two - photon component picks up the phase shift , leaving the one - photon component unaffected .the implementation requires one ancilla photon in the middle mode , and a detection signature of one photon and no photons in the two detectors in the output .the beam splitters bs1 , bs2 and bs3 are not 50:50 , but have specially chosen transmission coefficients.,height=94 ] of course , it is not guaranteed that the two detectors will detect one and zero photons , respectively .if it was , there would be no need for detection .this implies that the circuit in figure [ fig : ns ] succeeds only part of the time ( in this case , the success probability of the gate is one quarter ) .this is no good for quantum computation , in which all the circuits must be successful simultaneously . 
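to see quantitatively why gates that succeed only part of the time are no good on their own, one can chain the quoted ns-gate success probability of one quarter over a circuit:

```python
# the ns gate above succeeds with probability 1/4; in a raw circuit every
# probabilistic element would have to succeed in the same run
p_ns = 0.25
for n_gates in [1, 2, 10, 50, 100]:
    print(f"{n_gates:4d} ns gates in one run -> overall success {p_ns ** n_gates:.3e}")
# the success probability of the whole circuit falls off exponentially with its
# size, which is the problem the teleportation trick described next removes.
```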
to overcome this problem , knill , laflamme and milburn employed quantum teleportation : instead of trying to apply the probabilistic gate directly to the quantum information carrying qubits ( which can not be copied and must therefore be handled with care ) , the gate is applied to one half of an entangled pair .if the gate is successful , the now modified entangled pair is used as the entanglement resource in quantum teleportation of the information carrying qubit .the teleported qubit emerges with the gate applied to it .knill , laflamme and milburn found a way to make the teleportation procedure nearly deterministic , which means that the probabilistic gate can now be applied deterministically to the qubit , and linear optical quantum computing was in principle possible .once we can create an ns gate , we can use the hong - ou - mandel effect to create controlled pauli gates , or cz gate .these are the two - qubit gates that can create the entanglement necessary for quantum computing . in terms of qubits ,the cz gate operates as follows on the two - qubit states : in other words , when both qubits are in the state , the gate applies a phase shift . to see how this gate can be implemented with two ns gates and the hong - ou - mandel effect , consider the circuit in figure [ fig : cz ] .we can arrange the two incoming qubits q1 and q2 in such a way that the states for each qubit i.e ., horizontal polarisation are mapped onto the top and bottom modes that propagate freely .the states for each qubit are the vertically polarised photons , and will be reflected in the polarising beam splitters ( pbs ) .the photons will combine in the first 50:50 beam splitter .if an input state enters this circuit , both photons will meet at the first beam splitter and experience to hong - ou - mandel effect .this means that both photons will exit the beam splitter in a quantum superposition of both photons in the top mode and both photons in the bottom mode .the ns gate , if successful , will then impart a phase shift on the two - photon state .the second beam splitter will apply the hong - ou - mandel effect in reverse , such that if only one photon enters the first beam splitter , for example because q1 is in the logical state and q2 is in the state , there will only be one photon going through the ns gates , and there will be no phase shift . 
similarly , when both photons are in the top and bottom mode , no photons travel through the ns gates and no phase shift is imported on the quantum state .the result is that the circuit in figure [ fig : cz ] implements the gate in eq .( [ eq : cz ] ) .the gate is successful when both ns gates are successful , and the total success probability is therefore .the gate can be applied to qubits in the computation using the teleportation trick described above .( cz ) gate .two photonic qubits , q1 and q2 , enter the interferometer .the modes corresponding to qubit value are sent into a 50:50 beam splitter , the output of which are subject to a nonlinear phase shift ( ns ) .if there is a photon in each input mode of the beam splitter , the hong - ou - mandel effect guarantees that the two photons will either go both through the top ns gate or through the bottom ns gate .consequently , these photons will pick up a phase .the second 50:50 beam splitter will separate the two photons again into one photon in each output mode of the beam splitter.,height=113 ] it is important for the operation of the cz gate that the hong - ou - mandel effect works perfectly .this means that the two photons must be indistinguishable in every respect , including frequency , polarisation and mode shape .imperfections in the photon source , the beam splitters , or the ns gate will create faulty gates that can ruin the computation . the knill - laflamme - milburn protocol is a type of measurement - based quantum computing , in which the computations are induced by measurements and feed - forward processing of the measurement outcomes . as a practical scheme , however , it has many downsides : a single entangling gate needs tens to hundreds of thousands of ancilla photons ; the detectors must be nearly perfectly efficient and be able to tell the difference between 0 , 1 and 2 photons ; all the photons must be identical to an extremely high degree ; and the feed - forward procedure requires high - quality , low - loss optical switches .in addition , while the feed - forward takes place , the photons must be stored in a quantum memory . the first problem , the resource count , can be mitigated if instead of single photon ancilla states we use entangled photons from the start .this requires a reliable source of photons in one of the bell states ( it does not really matter which one ) .traditionally , entangled photon pairs have been generated using a process called spontaneous parametric down - conversion ( spdc ) , in which a high energy laser pumps a nonlinear crystal .the photons of the laser have a very small probability of `` breaking up '' in the crystal into two photons of lower energy . depending on the arrangement ,these two photons can be created in an entangled polarisation state .alternatively , we can engineer quantum dot structures that create entangled photons on demand as shown in figure [ fig : spdc ] .the dot can be placed in a bragg stack that sends photons in the vertical direction , and a prism separates the different frequency components . while spdc is clean and straightforward to implement , the rate of photon pair production is extremely low , and occasionally two or more pairs are created . the quantum dot approach would therefore be preferable , but it is currently still in the research stage . 
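as a quick numerical check of the cz action described above, the sketch below applies the gate to a separable two-qubit input and verifies that the output is maximally entangled (the encoding of the two qubits as a 4-component vector is the standard one and involves no optics):

```python
import numpy as np

# cz in the two-qubit computational basis: a sign flip only on |1>|1>
CZ = np.diag([1, 1, 1, -1]).astype(complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi_in = np.kron(plus, plus)                  # separable input state
psi_out = CZ @ psi_in

# entanglement check: purity of the reduced state of qubit 1
rho = np.outer(psi_out, psi_out.conj()).reshape(2, 2, 2, 2)
rho_q1 = np.trace(rho, axis1=1, axis2=3)
purity = np.trace(rho_q1 @ rho_q1).real

print("output amplitudes       :", np.round(psi_out, 3))
print(f"purity of reduced state : {purity:.2f}")   # 0.5 means maximally entangled
# a single cz turns a product state into a maximally entangled one, which is
# precisely the resource that linear optics alone cannot provide.
```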
assuming that we have a reliable two - photon source we can design a new architecture for linear optical quantum computing that requires significantly fewer resources .the key is still to use gate teleportation , but instead of the complicated states required by the knill - laflamme - milburn protocol we create conceptually ( and practically ) simpler cluster states . consider two polarised photons in the ( unnormalised )entangled state we can measure the first photon in a special basis `` '' with eigenstates after finding , say the measurement outcome the quantum state of the remaining photon is where the last equality is true up to an unobservable global phase , and is a rotation generated by the pauli operator .in other words , measuring the first photon in the special basis `` '' produces a unitary gate on the second photon . we can daisy - chain this process by using a four - photon entangled state and measuring the first three photons in special bases defined by successive angles , , and . the resulting operation on the final photon is the unitary gate where we have used that , a rotation generated by the pauli operator .such a gate can implement any single qubit operation given judiciously chosen values of , , and .depending on the ( probabilistic ) measurement outcome ( ) , the subsequent measurement angle must be chosen as , and the measurement outcome determines the angle .this forward dependence of the measurement angles creates a definite direction of the computation .the four - photon state can be graphically represented as a linear ( one - dimensional ) graph , in which the nodes denote the photons and the edges denote entanglement created by cz gates between the photons : {linear_cluster.pdf}}}\end{aligned}\ ] ] we can create two - dimensional graphs in which the vertical entanglement connections represent entangling gates : {2d_cluster.pdf}}}\end{aligned}\ ] ] these structures can be mapped onto any quantum computational circuit and are therefore universal for quantum computation . the entangled states in eqs .( [ eq:1dcluster ] ) and ( [ eq:2dcluster ] ) are called _cluster states _ , and the method is called _ one - way _ quantum computation . since photon measurements can in principle be carried out efficiently , the challenge is to create the required cluster states .a particularly promising way to create large cluster states is to use so - called _ fusion gates _ , shown in figure [ fig : fusion ] .the entanglement is created by a variation of a probabilistic bell measurement for polarised photons , and can be implemented with linear optical elements such as half wave plates and polarising beam splitters . since the creation of the cluster state occurs before we introduce the quantum computation via measurements , we are at liberty to create the cluster state in a probabilistic manner and purify the result until we have the desired fidelity .there are two types of fusion gates , type - i and type - ii .the first type purports to detect a single photon , leaving the three remaining photons in an entangled linear cluster state .this fusion gate can be described mathematically by the operator where the sign is determined by the polarisation of the photon measured in detector d. similarly , the type - ii fusion gate can be described by the operator starting with bell pairs , the type - ii fusion gates clearly can not grow large clusters on their own since they remove two photons from the entangled state . 
however , type - ii has a much more beneficial behaviour that type - i when the fusion gate fails .the best strategy is therefore to create three - photon entangled states using type - i gates , and subsequently create larger cluster states using only type - ii fusion gates .improvements in the architecture of linear optical quantum computers continue to be made , and in a recent proposal the need for quantum memories is reduced by using a percolation - based _ ballistic _approach .one of the aims of this review is to show that different quantum information processing tasks have different technological requirements . for quantum computing , we need sources that produce bell pairs on demand with high efficiency and , perhaps more importantly , identical mode shapes to accommodate the hong - ou - mandel effect .moreover , the bell pairs must be very close to pure states .the photodetectors must have a high detection efficiency , so that photon loss in the course of the computation remains low .the linear optical components must similarly be low - loss and accurate .the polarising beam splitters must have nearly perfect transmission or reflection for the polarised photons , and beam splitters must have carefully calibrated transmission coefficients .the exact allowed tolerances of the components of a linear optical quantum computer will be determined by the error correction mechanism that is employed .finally , the feed - forward nature of linear optical quantum computing means that we require fast , low - loss optical switches .this is currently a major challenge .an actual implementation of a linear optical quantum computer will not use bulk optical elements , but rather have a chip - based architecture in which microscopic waveguides are wired into programmable circuits .beam splitters can then be constructed from evanescently coupled waveguides . by adjusting the distance between the waveguides , the transmission coefficient can in principlebe carefully calibrated .recently , photon sources have been placed in or on top of waveguides , which allows for directional coupling of the photon into the waveguide depending on the spin of the photon source .this new technology can be employed for alternative bell pair generation methods based on photon which - path erasure and spin readout .quantum metrology is similarly challenging to implement .it is known that in the limit of large photon numbers the heisenberg limit is extremely sensitive to noise .this means that some type of quantum error correction must be employed in order to achieve the heisenberg limit , and this places the practical challenge on a par with the construction of a full - scale quantum computer . on the other hand , quantum metrology techniques that do not achieve the heisenberg limit but that nonetheless improve on the shot - noise limit by a constant factorwill still be very welcome . squeezed ( quantum ) light will be used in the next generation gravitational wave detectors , especially now that gravitational waves have been observed directly .quantum communication is arguably the least challenging task to implement in practice .quantum key distribution requires single - photon sources that may not be fully indistinguishable from each other .however , much care must be taken in the prevention of side - channel detection , in which an eavesdropper can infer or influence the polarisation of a photon via classical methods ( e.g. , monitoring the photon source for tell - tale signals , etc . 
) .these can be difficult engineering questions that must be solved .extending quantum communication over longer distances will require quantum repeaters .these devices are much more challenging to build , and require multi - photon entanglement , high - efficiency photodetectors , and generally rather large optical circuits . while repeaters do not have strict fault - tolerance requirements ,the techniques that will make them work will likely be similar to those of full - scale quantum computers ( indistinguishable photons , fast low - loss switches , etc . ) .to conclude , optical quantum information processing presents various physical and engineering challenges for different tasks .some processes , such as quantum key distribution are currently being implemented in commercial products , while others are still very much in the research stage .i have shown that different tasks have very similar physical requirements at different stages of development , which makes it more likely that as our understanding and mastery of nature continues , even the more exotic applications will find their way into working devices .i would like to thank nikola prtljaga for providing me with the data that was used in figure 4 .van uden , r.a .correa , e.a .lopez , f.m .huijskens , c. xia , g. li , a. schlzgen , h. de waardt , a.m.j .koonen , and c.m .okonkwo , _ ultra - high - density spatial division multiplexing with a few - mode multicore fibre _ ,nature photonics 8 ( 2014 ) , p. 865 .n. prtljaga , c. bentham , j. ohara , b. royall , e. clarke , l.r .wilson , and a.m.f .maurice s skolnick , _ on - chip interference of single photons from an embedded quantum dot and a laser _, arxiv:1602.08363 ( 2016 ) .boto , p. kok , d.s .abrams , s.l .braunstein , c.p .williams , and j.p .dowling , _ quantum interferometric optical lithography : exploiting entanglement to beat the diffraction limit _ , phys .85 ( 2000 ) , p. 2733 .bennett , g. brassard , c. crepeau , r. jozsa , a. peres , and w.k .wootters , _ teleporting an unknown quantum state via dual classical and einstein - podolsky - rosen channels _ , phys .70 ( 1993 ) , p. 1895 .d. boschi , s. branca , f. demartini , l. hardy , and s. popescu , _ experimental realization of teleporting an unknown pure quantum state via dual classical and einstein - podolsky - rosen channels _ , physical review letters 80 ( 1998 ) , p. 1121 .m. gimeno - segovia , p. shadbolt , d.e .browne , and t. rudolph , _ from three - photon greenberger - horne - zeilinger states to ballistic universal quantum computation _, physical review letters 115 ( 2015 ) , pp . 0205025 .rodriguez - fortuno , g. marino , p. ginzburg , d. oconnor , a. martinez , g.a .wurtz , and a.v .zayats , _ near - field interference for the unidirectional excitation of electromagnetic guided modes _ , science 340 ( 2013 ) , p. 328 .i. sllner , s. mahmoodian , s.l .hansen , l. midolo , a. javadi , g. kirv sanske , t. pregnolato , h. el - ella , e.h .lee , j.d .song , s. stobbe , and p. lodahl , _ deterministic photon emitter coupling in chiral photonic circuits _ , nature nanotechnology 10 ( 2015 ) ,. 775778 .coles , d.m .price , j.e .dixon , b. royall , e. clarke , a.m. fox , p. kok , m.s .skolnick , and m.n .makhonin , _ chirality of nanophotonic waveguide with embedded quantum emitter for unidirectional spin transfer _ , nature communications ( 2016 ) .
|
information processing with light is ubiquitous , from communication , metrology and imaging to computing . when we consider light as a quantum mechanical object , new ways of information processing become possible . in this review i give an overview how quantum information processing can be implemented with single photons , and what hurdles still need to be overcome to implement the various applications in practice . i will place special emphasis on the quantum mechanical properties of light that make it different from classical light , and how these properties relate to quantum information processing tasks .
|
extraction of generalized parton distribution ( gpd ) functions from exclusive scattering data is an important endeavour , related to such practical questions as the partonic decomposition of the nucleon spin and characterization of multiple - hard reactions in proton - proton collisions at lhc collider . to reveal the shape of gpds , one employs global or local fits to data . however , compared to familiar global parton distribution ( pdf ) fits , fitting of gpds is intricate due to their dependence on three kinematical variables ( at fixed input scale ) , and the fact that they can not be fully constrained even by ideal data .thus , final results can be significantly influenced by the choice of the particular fitting ansatz . to deal with this source of theoretical uncertainties, we used an alternative approach , in which _ neural networks _are used in place of specific models .this approach has already been successfully applied to extraction of the deeply inelastic scattering ( dis ) structure function and normal pdfs .we expect that the power of this approach is even larger in the case of gpds . in the light of the scarce experimental data , in this pilot study we attempted the mathematically simpler extraction of form factor of deeply virtual compton scattering ( dvcs ) .we used data from the kinematical region where this compton form factor ( cff ) dominates the observables and depends essentially only on two kinematical variables : bjorken s scaling variable and proton momentum transfer squared .these simplifications make the whole problem more tractable .neural networks were invented some decades ago in an attempt to create computer algorithms that would be able to classify ( i.e. recognize ) complex patterns .the specific neural network type used in this work , known as _ multilayer perceptron _, is a mathematical structure consisting of a number of interconnected `` neurons '' organized in several layers .it is schematically shown in fig .[ fig : perceptron ] , where each blob symbolizes a single neuron .each neuron has several inputs and one output .the value at the output is given as a function of a sum of input values , each weighted by a certain number .the parameters of a neural network ( weights ) are adjusted by a procedure known as `` training '' or `` learning '' .thereby , the input part of a chosen set of training input - output patterns is presented to the input layer and propagated through the network to the output layer .the output values are then compared to known values of the output part of training patterns and the calculated differences are used to adjust the network weights .this procedure is repeated until the network can correctly classify all ( or most of all ) input patterns . if this is done properly ,the trained neural network is capable of generalization , i.e. , it can successfully classify patterns it has never seen before .this whole paradigm can be applied also to fitting of functions to data . here ,measured data are the patterns , the input are the values of the kinematical variables the observable in question depends upon , and the output is the value of this observable , see fig .[ fig : perceptron ] . in this case, the generalization property of neural networks represents its ability to provide a reasonable estimate of the actual underlying physical law . 
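a minimal numpy sketch of this perceptron-as-fitter idea is given below; the target function, noise level, network size and learning rate are placeholder choices and not those used in the studies discussed later:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "data": a smooth function sampled at 10 points with gaussian noise
x = np.linspace(0.0, 1.0, 10)
y = np.sin(np.pi * x) + rng.normal(0.0, 0.05, x.size)

# multilayer perceptron: 1 input neuron, one hidden layer of tanh neurons, 1 output
n_hidden = 10
W1 = rng.normal(0.0, 1.0, n_hidden); b1 = np.zeros(n_hidden)
w2 = rng.normal(0.0, 0.5, n_hidden); b2 = 0.0

def forward(xs):
    h = np.tanh(np.outer(xs, W1) + b1)   # weighted sums passed through the neuron function
    return h, h @ w2 + b2

# "training": repeatedly compare outputs with the targets and adjust the weights
lr = 0.05
for epoch in range(20000):
    h, out = forward(x)
    g_out = 2.0 * (out - y) / x.size               # gradient of the mean squared error
    g_w2 = h.T @ g_out;             g_b2 = g_out.sum()
    g_z = np.outer(g_out, w2) * (1.0 - h ** 2)     # back-propagated to the hidden layer
    g_W1 = x @ g_z;                 g_b1 = g_z.sum(axis=0)
    W1 -= lr * g_W1; b1 -= lr * g_b1; w2 -= lr * g_w2; b2 -= lr * g_b2

_, prediction = forward(np.linspace(0.0, 1.0, 5))
print("trained network on a coarse grid:", np.round(prediction, 3))
```

after training, the network can be evaluated anywhere in the input range, which is the generalization property exploited below.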
for the particular application of neural networks to fits of hadron structure functions we refer the reader to papers of the nnpdf group .our approach is similar and is described in detail in . to propagate experimental uncertainties into the final result, we use the `` monte carlo '' method , where neural networks are not trained on actual data but on a collection of `` replica data sets '' .these sets are obtained from original data by generating random artificial data points according to gaussian probability distribution with a width defined by the error bar of experimental measurements . taking a large number of such replicas, the resulting collection of trained neural networks defines a probability distribution ] thereof .thus , the mean value of such a functional and its variance are \big\rangle & = \int \mathcal{d}\mathcal{h } \ : \mathcal{p}[\mathcal{h } ] \ , \mathcal{f}[\mathcal{h } ] \nonumber \\ & = \frac{1}{n_{rep}}\sum_{k=1}^{n_{rep } } \mathcal{f}[\mathcal{h}^{(k)}]\ ; , \label{eq : funcprob } \\\big(\delta \mathcal{f}[\mathcal{h}]\big)^2 & = \big\langle \mathcal{f}[\mathcal{h}]^2 \big\rangle - \big\langle \mathcal{f}[\mathcal{h } ] \big\rangle^2 \;. \label{eq : variance}\end{aligned}\ ] ]to illustrate the neural network fitting method , we shall now present a toy example where we will extract a known function of one variable by fitting to fake data .first we define some simple target function as a random composition of simple polynomial and logarithm functions constrained by the property this function is plotted in fig .[ fig : toy ] as a thick dashed line and labeled as `` target '' .next , =10 fake data points are generated equidistantly in .their mean values are smeared around target values by random gaussian fluctuations with standard deviation =0.05 , which is also taken to be the uncertainty of generated points .these fake data are then used for fits , first using the standard least - squares method with a two - parameter model and , second , utilizing the neural network method . note that the monte carlo method of error propagation , which we use together with neural network fitting , itself requires to generate artificial data sets .thus , we generated =12 replicas from original fake data and used them to train 12 neural networks that represent 12 functions , plotted as thin solid lines on the second panel of fig .[ fig : toy ] .these functions define a probability distribution in the space of functions which , according to eqs .( [ eq : funcprob][eq : variance ] ) , provides an estimate of the sought function , together with its uncertainty .this estimate is shown on the right panel of fig .[ fig : toy ] as a ( red ) band with ascending hatches .the corresponding model fit result , obtained by the standard method of least - squares optimization and error propagation using the hessian matrix , is shown in the left panel of fig .[ fig : toy ] as a ( green ) band with descending hatches .we have deliberately chosen the ansatz ( [ eq : ansatz ] ) with two properties , incorporating theoretical biases about endpoints : and .the first of these actually `` corresponds to the truth '' , i.e. , to eq .( [ eq : endpoint ] ) , whereas the second one is erroneous . as a result ,for the model fit is in much better agreement with the target function ( thick dashed line ) than neural networks , which rely only on data and are insensitive to this endpoint behaviour . 
on the other side , for model fit is in some small disagreement with the target function , and , what is much worse , it very much underestimates the uncertainty of the fitted function there ( the uncertainty becomes zero at endpoints ! ) , demonstrating the dangers of unwarranted theoretical prejudices. we can be more quantitative and say that according to the standard measure , both methods lead to functions that correctly describe data and the degrees of freedom neural networks have very many free parameters and for them degrees of freedom is not such an important characteristic as in the case of standard model fits . ] : we can now further ask to what extent the two methods extract the underlying target function .naturally , we can measure this by a kind of criterion where the denominator is now the propagated uncertainty rather than the experimental one . in our toy examplewe get showing that the model fit underestimates its uncertainties , while neural networks are much more realistic .this example shows that the neural network method has a clear advantage if we want bias - free propagation of information from experimental measurements into the cffs .still , if we want to use some additional input , e.g. , if we rely on the spectral property ( [ eq : endpoint ] ) , we can do so also within the neural network method . for example, we could take the output of neural networks in this toy example not as an representation of the function itself , but as representing , with some positive power .then the final neural network predictions for would also be constrained by eq .( [ eq : endpoint ] ) , without any further loss of generality ( in practice it turns out that the dependence of the results on the choice of power is small ) .various methods of implementing theoretical constraints in the neural network fitting method are discussed in sect .5.2.4 of .to extract the cff from asymmetries , measured by the hermes collaboration in photon electroproduction off unpolarized protons , we applied the described neural network fitting method in .we used 36 data points : 18 measurements of the first sine harmonic of the beam spin asymmetry , and 18 measurements of the first cosine harmonic of the beam charge asymmetry .as for the toy model from the previous section , we compare the results with the standard least - squares model fit .let us first shortly describe this model fit of .for the partonic decomposition of the imaginary part we used a model , presented in : .\ ] ] here , are gpds along the cross - over trajectory , parameterized as : the parameters of were fixed by separate fits to collider data , and some parameters of were also fixed using information from dis data and regge trajectories .the real part is expressed in terms of the imaginary one via a dispersion integral and the subtraction constant , leaving us finally with a model that possesses four parameters : , , and .this model is fitted to experimental data , resulting in parameter values , which can be found in , and shapes of and that are plotted on fig .[ fig : cff ] as ( green ) bands with descending hatches .the neural network fit was performed by creating 50 neural networks with two neurons in the input layer ( corresponding to kinematical variables and ) , 13 neurons in the hidden middle layer , and two neurons in the output layer ( corresponding to and ) , cf .[ fig : perceptron ] .these were trained on =50 monte carlo replicas of hermes data .we checked that the resulting cff does not depend significantly on the precise 
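the monte carlo error propagation itself is easy to sketch. in the code below the ten points and sigma = 0.05 mirror the toy example above, while the target function is invented and a cubic polynomial replaces the neural network trained on each replica, purely to keep the example short:

```python
import numpy as np

rng = np.random.default_rng(1)

# ten fake data points with uncertainty sigma = 0.05, as in the toy example
x_data = np.linspace(0.05, 0.95, 10)
target = lambda u: u * (1.0 - u) * (2.0 + np.sin(3.0 * u))   # invented target function
sigma = 0.05
y_data = target(x_data) + rng.normal(0.0, sigma, x_data.size)

# monte carlo error propagation: smear the data within its error bars to build
# replica sets, fit each replica, and read mean and spread off the ensemble
# (a cubic polynomial stands in here for the neural network trained per replica)
n_rep = 100
x_grid = np.linspace(0.0, 1.0, 11)
ensemble = []
for _ in range(n_rep):
    y_rep = y_data + rng.normal(0.0, sigma, y_data.size)     # one replica data set
    coeffs = np.polyfit(x_data, y_rep, deg=3)
    ensemble.append(np.polyval(coeffs, x_grid))
ensemble = np.array(ensemble)

central = ensemble.mean(axis=0)        # ensemble average, eq. ( [ eq : funcprob ] )
uncert = ensemble.std(axis=0)          # square root of the variance, eq. ( [ eq : variance ] )
for xg, c, u in zip(x_grid, central, uncert):
    print(f"x = {xg:.1f} :  fit = {c:6.3f} +- {u:.3f}")
```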
number of neurons in the hidden layer .the results are also presented on fig .[ fig : cff ] , where we show the neural network representation of and as ( red ) bands with ascending hatches . comparing the two approaches ,one notices that in the kinematic region of experimental data ( roughly the middle- parts of fig .[ fig : cff ] panels ) neural network and model fit results coincide , i.e. , error bands are of similar width and they overlap consistently .however , outside of this data region , we see that the predictions of the two approaches can be different . therethe uncertainty of the model fit is in general smaller , and we observe a strong disagreement in the low region , reflecting the theoretical bias of the chosen model that possesses a regge behavior .the lesson learned from the toy model example is that , even if we believe in regge behaviour for small , we should still consider the uncertainty from the neural network method as more realistic .utilizing both a simplified toy example and hermes measurements of photon electroproduction asymmetries , we demonstrated that neural networks and monte carlo error propagation provide a powerful and unbiased tool that extracts information from data .comparisons with standard least - squares model fits reveal that the uncertainties , obtained from neural network fits , are reliable and realistic .relying on the hypothesis of dominance , we found the cff from a completely unconstrained neural network fit .it is expected that the extraction of all four leading twist - two cffs ( , , and , or the corresponding gpds ) from presently or soon - to - be available data will still be an ill - defined optimization problem .thus , it might be necessary to implement in neural network fits some carefully chosen theoretically robust constraints , such as dispersion relations , sum rules and lattice input .this work was supported by the bmbf grant under the contract no .06ry9191 , by eu fp7 grant hadronphysics2 , by dfg grant , contract no .436 kro 113/11/0 - 1 and by croatian ministry of science , education and sport , contract no .119 - 0982930 - 1016 .
|
we describe a method , based on neural networks , of revealing compton form factors in the deeply virtual region . we compare this approach to standard least - squares model fitting both for a simplified toy case and for hermes data .
|
numerous models of neuronal spiking activity based on very different assumptions with different resemblance to reality exist .this is a natural situation and no one is surprised as the suitability of a model depends on a purpose for which it has been developed . on the other hand, it has been always important to point out the bridges , to find connections among different models , as finally , all of them should stem out of the same principles .one example of such an effort are the studies on reduction of the hodgkin - huxley model . for a similar purpose , we recently investigated the behavior of stein s neuronal model , which is based on the leaky integrate - and - fire principle , under the condition that its input is the output of the model itself .our aim here is similar asking what is the connection between the commonly used ornstein - uhlenbeck ( ou ) neuronal model and the model of interspike intervals ( isis ) based on the gamma renewal process .the choice is not coincidental as both of these simple models are quite often applied for description of experimental data as well as for theoretical studies on neuronal coding .many references for the ou neuronal model can be given , here are only a few examples ( ; and a recent review , where many other references can be found ) .the model stems out from the ou stochastic process which is restricted by an upper boundary , representing the firing threshold , crossings of which are identified with generation of spikes .after a spike is initiated , the memory , including the input , is cleared and the system starts anew .therefore , the sequence of isis creates a renewal process .despite the fact that the ou model coincides with the langevin equation , it was originally derived directly from the stein s neuronal model and thus its parameters have a clear physiological interpretation .the ou model has been generalized in many directions to take into account very different features of neurons which are not depicted in its basic form . among these many variants , important for our purpose ,are the models with time - varying input and time - varying firing threshold .the history of time - dependent thresholds in neural models is very long ( for recent reviews see ) . however , except the attempts to describe the effect of refractoriness , the dynamical thresholds are aimed at mimicking adaptation in neuronal activity , i.e. , a gradual change in the firing rate .typically , it has been modeled as a decaying single or double - exponential function .this is usually accompanied by introducing a correlation structure in a sequence of isis ( for a review see ) .it should be stressed that the time - dependent threshold appearing here is of entirely different nature .whatever the stimulus is applied at the input , the constant firing rate appears at the output .simultaneously , the investigated model generates independent and identically distributed isis . while the time - dependent threshold is used to reflect the existence of refractoriness in the behavior of real neurons , the time variable input naturally describes time - varying intensities of the impinging postsynaptic potentials arriving from other neurons in the system . 
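before turning to these time-dependent generalizations, the constant-parameter model of eq. ( [ x ] ) with a constant boundary can be simulated directly. a minimal euler-maruyama sketch in which all parameter values are illustrative choices placing the model in the suprathreshold regime:

```python
import numpy as np

rng = np.random.default_rng(2)

# illustrative parameters (not from the text), chosen so that mu*tau > S
tau, mu, sigma = 10.0, 1.5, 0.5     # membrane time constant (ms), input mean and noise
S, x_reset = 10.0, 0.0              # constant firing threshold and resetting value (mV)
dt, t_max = 0.01, 5000.0            # euler-maruyama step and total simulated time (ms)

x, t, last_spike = x_reset, 0.0, 0.0
isis = []
sq_dt = np.sqrt(dt)
while t < t_max:
    x += (-x / tau + mu) * dt + sigma * sq_dt * rng.standard_normal()
    t += dt
    if x >= S:                      # first-passage time through the boundary = a spike
        isis.append(t - last_spike)
        last_spike, x = t, x_reset  # reset the membrane potential after the spike
isis = np.array(isis)

print(f"number of isis : {isis.size}")
print(f"mean isi       : {isis.mean():.2f} ms")
print(f"cv of isis     : {isis.std() / isis.mean():.2f}")
# with mu*tau = 15 mV above the threshold S = 10 mV the crossings are driven
# mainly by the drift (suprathreshold regime) and the train is fairly regular.
```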
dealing with these models we have to return to the theoretical results on the so called inverse first - passage - time problem .these results permit us to deduce the shapes of these functions under the condition that the output of the model is the gamma renewal process .the renewal stochastic process with gamma distributed intervals between events is usually called the gamma ( renewal ) process ( yannaros , 1988 ; gourevitch and eggermont , 2007 ; farkhooi et al . , 2009 , koyama and kostal , 2014 ) .this model , which we aim to relate to the ou neuronal model , is of a different nature .it has never been constructed from biological principles , but has been widely accepted in neuronal context as a good descriptor of the experimental data and also as a suitable descriptor of data used in theoretical studies .it appeared immediately when the poisson process of isis was disregarded as a too simplified description of reality .the selected references are , similarly to the ou model , only a sample from a much longer list .as mentioned , our aim is to relate the ou and the gamma models .more specifically , we ask under which conditions the ou model generates the gamma renewal process as an output .the problem was already mentioned in but only to illustrate the proposed method . herethe aim is to understand the role of the parameters of the process and of the gamma distribution .the relevant properties of both models are summarized in the first part of the paper .then , the method to solve the problem is presented .finally , the results are illustrated on two examples and their consequences are shortly discussed .the ou stochastic process is a classical model of the membrane potential evolution .it describes the membrane potential through the one - dimensional process solving the stochastic differential equation with initial condition . here is the membrane time constant , and are two constants that account for the mean and the variability of the input to the neuron and denotes a standard wiener process .further , the model assumes that after each spike the membrane potential is reset to the resetting value . in absence of the external input, the membrane potential decays exponentially to the resting potential , which in equation ( [ x ] ) is set to zero .the ou process is a continuous markov process characterized by its transition probability density function ( pdf ) .\nonumber\end{aligned}\ ] ] hence it is gaussian and the mean membrane potential is and its variance is the isis generated by the model are identified with the first - passage times ( fpts ) of the process through a boundary , often taken to be constant unfortunately the distribution of is not known in a closed form but its laplace transform is available , as well as the expressions for the mean and the variance . in the presence of the boundary different dynamics of arise according to the values of and , the asymptotic value of . when , i.e. in the subthreshold regime , the boundary crossings are determined by the noise and the number of spikes in a fixed interval exhibits a poisson - like distribution . on the contrary , when the isis are strongly influenced by the input and we speak of suprathreshold regime .this division on supra- and sub - threshold regimens is based on the mean behavior of the membrane potential and it evokes the question how it influences the distribution of the fpt . in some generalizationsthe model assumes the presence of a time - dependent threshold or of a time - depending mean input . 
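as a quick illustration of the model just described, the following python sketch simulates the ou membrane potential with an euler-maruyama scheme and collects interspike intervals as first-passage times through a constant threshold. the parameter names (tau, mu, sigma, threshold) and all numerical values are our own illustrative choices, not taken from the paper; the sketch is only a rough check of the sub- versus supra-threshold behaviour discussed above.

import numpy as np

def ou_isis(n_spikes, tau=1.0, mu=0.8, sigma=0.5, threshold=1.0, x0=0.0,
            dt=1e-3, rng=None):
    """Euler-Maruyama simulation of dX = (-X/tau + mu) dt + sigma dW with
    reset to x0 after each crossing of the constant threshold; returns the
    resulting interspike intervals (first-passage times)."""
    rng = np.random.default_rng(rng)
    isis = []
    for _ in range(n_spikes):
        x, t = x0, 0.0
        while x < threshold:
            x += (-x / tau + mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        isis.append(t)
    return np.array(isis)

# mu * tau < threshold: subthreshold (noise-driven) regime
isis = ou_isis(2000, tau=1.0, mu=0.8, sigma=0.5, threshold=1.0, rng=0)
print(isis.mean(), isis.std() / isis.mean())   # mean ISI and its CV

comparing mu * tau with the threshold in this toy run reproduces the division into sub- and supra-threshold regimens mentioned above.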
in the latter cases , the process is solution of an equation analogous to ( [ x ] ) but with in substitution of . in both these cases , the closed forms for the mean and the variance of isis analogous to those for ( [ x ] ) and ( [ t ] ) are no more availablehowever , suitable numerical methods for determining the fpts distribution exist as well as reliable simulation techniques . from a mathematical point of view , the case of time - dependent input and that of time - varying threshold are related .in fact , the ou model with time dependent - boundary and in absence of input , , ( we refer to the case because the case can be obtained from it through a simple transformation ) , can be transformed into the ou model characterized by time - dependent input and constant threshold through the space transformation the relationship between the input in ( [ x2 ] ) and the threshold in ( [ x1 ] ) becomes which can be integrated to give explicitly in terms of and .note that since ( [ transf ] ) is a space transformation , it does not change the fpt distribution of the random variable .a random variable is gamma distributed if its pdf is here is the rate parameter and is the shape parameter .such a random variable is characterized by the following mean , variance and coefficient of variation ( black ) , ( blue ) , , ( cyan ) , ( magenta ) , ( red ) .the parameters of the ou model are , and if the time - variable input is searched for and if the time - variable threshold is deduced . , height=566 ] the shapes of gamma pdf for different values of the parameters and can be seen in figure [ fig : input](a ) .these shapes strongly change with the value of cv .we recall that corresponds to isis exponentially distributed while when bursting activity can be modeled and for decreasing cv the activity tends to regularity .we study here , under which conditions the gamma model of isis could be generated by the ou model .more specifically we investigate two closely related possibilities : 1 .the membrane potential evolves according to an ou process with and we ask if there is a time - dependent threshold such that the isis are gamma distributed with prescribed parameters ; 2 .the membrane potential evolves according to an ou process with time - dependent input and spikes are determined by the crossing of a constant threshold , the question remains the same .the absence of in the first case is only formal due to the transformation of the resting level to zero . 
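since the examples below vary the shape of the gamma density through its cv at fixed mean, a small helper that converts a prescribed mean isi and cv into the shape and rate parameters may be useful; the relations mean = k / lambda and cv = 1 / sqrt(k) follow directly from the moments quoted above. a minimal python sketch (the function name and sample size are ours):

import numpy as np

def gamma_params_from_mean_cv(mean_isi, cv):
    """Shape k and rate lam of a gamma density with prescribed mean and CV:
    mean = k / lam and CV = 1 / sqrt(k), hence k = 1 / CV**2, lam = k / mean."""
    k = 1.0 / cv**2
    lam = k / mean_isi
    return k, lam

k, lam = gamma_params_from_mean_cv(mean_isi=1.0, cv=0.5)
rng = np.random.default_rng(0)
isis = rng.gamma(shape=k, scale=1.0 / lam, size=10_000)   # gamma renewal ISIs
print(isis.mean(), isis.std() / isis.mean())              # close to 1.0 and 0.5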
actually , solving one of the problemsgives a solution to the other as mentioned , see equation ( [ transf ] ) .the case of time - varying input is related to the input restart with a spike .it could arise in a biologically plausible way if the neuron s spike provided input to a second neuron , for instance an inhibitory interneuron , in such a way as to immediately suppress the next spike ( for ) or a reciprocally connected excitatory neuron , to accelerate the next spike ( for ) .alternatively adaptation currents can produce the effects embodied in the time - varying input .however , the time - varying threshold does not require any interpretation being a phenomenological quantity and thus we sketch the theoretical method for determining the boundary shape only .the existence and uniqueness of the solution of the inverse first - passage - time problem is shown for any regular diffusion process in .two computational methods to determine the boundary shape of a wiener process characterized by an assigned fpt distribution are presented in while the extension of these methods to the case of an ou process is considered in . in that paperthe use of the proposed algorithm is illustrated through two examples : the inverse gaussian distribution and the gamma distribution .further related results are in .our task is to determine the time - dependent boundary . to deal with this problemwe consider the fortet integral equation that relates the transition pdf of an ou process originated in at time , as given by equation ( [ pdf ] ) with the fpt pdf of the process through .this equation holds for any and . after integrating the fortet equation with respect to in , we get \nonumber\end{aligned}\ ] ] where is the transition probability distribution function .this last equation is a linear volterra integral equation of the first type where the unknown is the pdf , while it is a non - linear volterra integral equation of the second kind in the unknown . in the following examples we use the algorithm proposed in to solve the inverse fpt problem and to determine .we underline that we fix the gamma density as a fpt density and we solve the equation ( [ fortet ] ) where the unknown function is the boundary and is the gamma density .we introduce a time discretization for , of step and we discretize the integral equation .then , the error of the method concerns the boundary .the study of the order of the error on the boundary has been done for a wiener process in the paper .it is proved that the error at each time step is where is the approximated boundary obtained by the algorithm .reproducing the same proof for the ou process , it is possible to prove the same error order also for our process .note that the error does not depend on the parameters , it is due to the right - hand rectangular rule for the discrtization of the integral in ( [ fortet ] ) ( cf .the algorithm can be applied to every kind of densities , even narrow densities are not a problem .it is sufficient to take a smaller discretization step in order to take into account all the shape of the density .the correct discretization step could be chosen looking at the shape of the fpt density .we explicitly underline that the algorithm does not require the knowledge of since it is built with the right - hand rectangular rule , i.e. the formula does not use the value of the boundary on the l.h.s . 
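the following python fragment is a rough sketch of the discretized fortet-equation algorithm described above: at each node of the time grid the boundary value is obtained by a scalar root search, with the right-hand rectangular rule for the integral. the weight 1/2 used for the diagonal term, the bracketing interval of the root search and all numerical values are our own simplifications and should not be read as the exact scheme of the cited papers.

import numpy as np
from scipy.stats import norm, gamma
from scipy.optimize import brentq

def ou_mean_var(x, lag, tau, mu, sigma):
    """Mean and variance of the OU transition density after a time lag,
    starting from x (resting level at zero)."""
    e = np.exp(-lag / tau)
    return x * e + mu * tau * (1.0 - e), 0.5 * sigma**2 * tau * (1.0 - e**2)

def inverse_fpt_boundary(t, g, tau=1.0, mu=0.0, sigma=0.7, x0=0.0):
    """Solve the discretized integrated Fortet equation for the boundary b(t)
    that makes g the first-passage-time density (right-hand rectangular rule,
    diagonal term weighted by 1/2)."""
    dt = t[1] - t[0]
    b = np.empty_like(t)
    def residual(bi, i):
        m0, v0 = ou_mean_var(x0, t[i], tau, mu, sigma)
        lhs = norm.sf(bi, loc=m0, scale=np.sqrt(v0))
        rhs = 0.5 * g[i] * dt
        for j in range(i):
            m, v = ou_mean_var(b[j], t[i] - t[j], tau, mu, sigma)
            rhs += g[j] * norm.sf(bi, loc=m, scale=np.sqrt(v)) * dt
        return lhs - rhs
    for i in range(len(t)):
        b[i] = brentq(residual, -10.0, 10.0, args=(i,))
    return b

# gamma FPT density with mean 1 and CV 0.5 on a coarse grid
t = np.arange(1, 151) * 0.02
g = gamma.pdf(t, a=4.0, scale=0.25)
boundary = inverse_fpt_boundary(t, g)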
of the interval .no condition on is a numerical advantage because this value is often not known a priori .the gamma density can either start from zero , from a constant or from infinity .these different shapes of the gamma arise in correspondence to different behaviors of the boundary in the origin .if the density is null in zero , the corresponding boundary is strictly positive : the density null in zero implies the absence of a probability mass in zero , i.e. no crossing happens for such times . on the contrary ,a positive mass in the origin requires a boundary starting from zero typically with infinite derivative . in shown that the only option to get a positive mass for arbitrary small times is to allow the boundary to start together with the process . in order not to have an immediate crossing of all possible trajectories, the boundary should have infinite derivative in zero .exponential distribution is often suggested to model isis without considering the implication of the choice of a density positive at time zero .the behavior of the threshold is interesting from a theoretical point of view , but biologically it carries a limited information as the model is hardly suitable in a close proximity of a previous spike .the algorithm at time always gives a positive and finite boundary .the values of as goes to zero may exhibit three different behaviors : in most instances , in other cases and in the last one the algorithm is valid if although the more is away from zero , the less accurate the estimation of the boundary through the algorithm becomes in a neighborhood of the origin .numerically we identify with .since the algorithm is adaptive , if the discretization step is small enough the error done in a neighborhood of the origin is negligible . in figure[ fig : input](b ) the boundaries corresponding to different gamma pdf are shown .since the first steps of the algorithm are not reliable due to the imprecise approximation of the integral in ( [ fortet ] ) in the algorithm , we skip the first interval $ ] . to improve the boundary estimation for small times, we refer to remark 5.4 in . in figure[ fig : input](c ) the input functions are plotted .they have been just derived from the boundaries in figure [ fig : input](b ) by applying ( [ mu ] ) and they strongly depend on . since the boundary close to zero can not be deduced reliably , its derivative may have a substantial bias here .the algorithm is self - adaptive , therefore the results become more precise with increasing time . in figure[ fig : input](b ) we also note that the thresholds in general decrease as increases .it may correspond to a weak facilitation of the spiking activity , avoiding the presence of very long isis .however , in extreme cases the threshold reaches negative values , which means going below the resting level , and it looks quite unrealistic .the explanation of this feature is straightforward if we take into account figure [ fig : input](a ) and simultaneously realize what is the input ( ) and the parameter of the underlying ou model ( , ). 
these values would correspond to a strongly subthreshold regimen if a constant threshold ( , like for variable ) is considered and thus the spiking activity would be poissonian .under such a scenario the threshold must go in the direction of the mean depolarization ( ) to get gamma distribution which looks almost gaussian ( and ) .in conclusion , the obtained result of a very negative threshold is an indicator of the unrealism of the hypothesized distribution in the case of employed ou parameters .one can not obtain in ou model almost regular firing for low signal unless the threshold is substantially modified .there is a common substantial increase of the threshold after spike generation in figure [ fig : input](b ) observed mainly for those thresholds which ultimately go to the negative values . herethe same arguments can be presented : while mathematically the model can be forced to produce any shape of gamma output , biologically it would require a speculative interpretation .note that a boundary becoming negative , despite it can be seen as biologically difficult to interpret , represents no formal problem .the reason is that there are still numerous trajectories of the ou process below zero and thus below the threshold .therefore the negative threshold gradually absorbs these trajectories ( hyperpolarized below zero ) and it creates the tail of the fpt density corresponding to the gamma distribution .we apply the method proposed in the previous section to the gamma pdfs of different shapes and show their effect on the shapes of the variable - input and the variable - threshold .the shape of the gamma distributions was varied by changing its cv while its mean was kept constant .the corresponding parameters can be deduced from equations ( [ et ] ) and ( [ cv ] ) .the pdfs of are plotted for different values of in figure 1(a ) and the related boundaries and the input functions , making use of ( [ ex ] ) , respectively , are plotted in figure 1(b ) and 1(c ) .the shape of the boundary corresponding to lower values of cv presents a maximum that tends to disappear as cv grows to higher values . hence , for small values of cv the growth of the boundary eliminates short isis . after a certain period of timethe thresholds decrease facilitating the attainment of the maxima of the pdf .the gamma density has no maxima for and the time - variable threshold tends to become flat . in all cases a complementary behavioris exhibited by the input . in the second examplewe investigate not only the behavior of the time - variable firing threshold but also the dynamics of the underlying ou neuronal model .the mean membrane potentials ( [ ex ] ) and the boundaries corresponding to the gamma distributed isis with mean isi equal to but different values of the mean input and for three different values of are shown in figure 2 .the shapes of the thresholds in figure [ fig : cv ] are concave and initially increasing . for low input ( when is small enough ) the curves exhibit a maximum after which the firing threshold starts to decrease . from a biological view pointsuch a decrease may be interpreted as reflecting an adaptation phenomenon .this adaptation is meant in a sense that if there is no spike for a period , then the system becomes more sensitive by decreasing the firing threshold .initially , the distance between the mean of the membrane potential and the threshold increases , however , after a certain period , in all the cases the mean membrane potential crosses the threshold . 
for fixed this always happens at the same time .this fact can be easily understood by noting that a change of determines the same shift both on the mean potential and on the boundary shape , see ( [ transf ] ) .this moment of crossing between the mean potential and the threshold increases with increasing .so , practically , for large the regimen is again subthreshold ( see figure [ fig : cv](c ) ) for all generated isis , whereas for low there is a change of the regimen shortly after the mean isi ( see figure [ fig : cv](a ) and [ fig : cv]b ) .this is entirely a new phenomenon if compared with the classical ou model .further , we can see that for low very short isis are rather improbable due to the presence of an higher threshold for a short time .in general , all the thresholds are initially increasing and that is in contrast with the previously employed time - varying thresholds in the ou neuronal model . ) and the time - variable boundaries corresponding to the gamma distributed isis of three different shapes , ( a ) , ( b ) and ( c ) , the probability densities are on the subplots .the parameters of the ou model are , , , the mean isi is equal to in all cases represented by a cross on the horizontal axes .different lines correspond to different values of the mean input : ( black ) , ( blue ) , ( cyan ) , ( magenta ) .thicker lines correspond to the time - variable boundaries .the vertical dotted lines give the isi distribution quantiles.,height=566 ]we presented a method how to modify the ou neuronal model , which is one of the most common models for description of spike generation , to achieve at its output the gamma renewal process of isis .the method is based on the inverse fpt problem and uses the time - variable firing threshold or time - variable input . while the time - variable input rather lacks a clear biological interpretation ,the time - varying firing threshold has been commonly accepted . however , the previous generalizations of the ou model based on introduction of the time - variable threshold always aimed to make the model more realistic _ a priori _ , whereas here it comes out as a result of a requirement to identify the output of the model with the observed or expected data . 
in some parameter cases , the ou process is incapable of generating gamma - distributed isis , unless unrealistic features of the model are employed .for example , the threshold getting below the resting level is an indicator of this situation .interestingly , in other cases , the threshold corresponding to gamma distributed isis may have a biologically interpretable shape .it decreases with time and this could be related with neuronal adaptability .however , this effect does not last over a single isi and thus can not be interpreted as decreasing the firing rate over a spike train .any further interpretation of the threshold behavior would be difficult and surely beyond the scope of this article .on the other hand , it is obvious that it at least partly changes so often applied concept of sub- and supra - threshold firing regimen .nevertheless , the results of this paper implies that gamma distributed isi generated by ou neuronal model with are noise driven in contrast to those with which are driven by both , the signal and the noise .chacron mj , pakdaman k and longtin a ( 2003 ) interspike interval correlations , memory , adaptation , and refractoriness in a leaky integrate - and - fire model with threshold fatigue .neural comput _ * 15 * : 253 - 278 inoue j , sato s and ricciardi lm ( 1995 ) on the parameter estimation for diffusion models of single neuron s activities . i. application to spontaneous activities of mesencephalic reticular formation cells in sleep and waking states ._ biol cybern _ * 73 * ( 3 ) : 209-221 iolov a , ditlevsen s and longtin a ( 2014 ) fokker - planck and fortet equation - based parameter estimation for a leaky integrate - and - fire model with sinusoidal and stochastic forcing ._ j. math .neurosci . _ * 4 * : 4 jolivet r , lewis tj and gerstner w ( 2004 ) generalized integrate - and - fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy _ j neurophysiol _ * 92 * : 959976 sacerdote l and giraudo mt ( 2013 ) stochastic integrate and fire models : a review on mathematical methods and their applications _ lect notes math , stochastic biomathematical models : with applications to neuronal modeling _ * 2058 * : 99148 smith pl ( 2010 ) from poisson shot noise to the integrated ornstein uhlenbeck process : neurally principled models of information accumulation in decision - making and response time ._ j math psychol _ * 54 * : 266-283
|
statistical properties of spike trains , as well as other neurophysiological data , suggest a number of mathematical models of neurons . these models range from entirely descriptive ones to those deduced from the properties of real neurons . one of them , the diffusion leaky integrate - and - fire neuronal model , based on the ornstein - uhlenbeck stochastic process restricted by an absorbing barrier , describes a wide range of neuronal activity in terms of its parameters , which are readily associated with known physiological mechanisms . the other model , the gamma renewal process , is purely descriptive : its parameters only reflect the observed experimental data or assumed theoretical properties . here we relate these two commonly used models and show under which conditions the gamma model arises as the output of the diffusion ornstein - uhlenbeck model . in some cases the gamma distribution turns out to be unrealistic to achieve for the employed parameters of the ornstein - uhlenbeck process .
|
the growing interest in security at the physical layer of wireless communications has sparked a resurgence of research in information - theoretic secrecy .physical layer security incorporates signal and code design to limit the information that can be extracted by an eavesdropper at the bit level , as a supplement to classical cryptographic security at the link or higher layers .wyner s landmark paper on secure communications in a point - to - point wiretap channel paved the way for characterizing the secrecy capacity of specific types of multiuser broadcast , interference , and multiple - access channels with single - antenna nodes , although their general secrecy capacity regions under fading remain mostly unknown .similarly , the secrecy capacity achievable in multiple - input multiple - output ( mimo ) multiuser networks is largely an open problem , with limited results available for mimo broadcast channels with two receivers . in the majority of the literature on confidential transmissions in multiuser networks , knowledge of the probability distribution of the eavesdropper s channelis assumed at the transmitter , which inherently provides information about the number of eavesdropper antennas as well .motivated by the above , this paper studies the effectiveness of simple beamforming strategies for maintaining confidentiality in a mimo downlink system with multiple legitimate multi - antenna receivers and a single passive eavesdropper with an unknown channel distribution .a portion of the transmit power is used to broadcast the information signal vector with just enough power to guarantee a certain signal - to - interference - plus - noise ratio ( sinr ) for the intended receivers , and the remainder of the power is used to broadcast artificial noise in order to mask the desired signal from a potential eavesdropper .the artificial interference is designed to be orthogonal to the information - bearing signals at the intended receivers , which ensures that only the eavesdropper suffers a sinr penalty .jamming potential eavesdroppers with artificial noise has been previously proposed for a point - to - point mimo wiretap channel in . for the mimo broadcast channel with independent signals , we compare the power efficiency and relative sinr of two different approaches : a zero - forcing beamforming design , and an iterative minimum - power joint transmit - receive beamformer design with a minimum sinr constraint per user .the zero - forcing solution allocates slightly lower power for artificial noise at low transmit power levels , but enjoys a significant advantage in terms of complexity . for the mimo multicast channel ,an iterative minimum - power optimal beamformer design is employed with a minimum mean square error ( mmse ) criterion for each user . in the next section ,the mathematical models for the mimo broadcast and multicast channels are presented .the known algorithms for zero - forcing beamforming and the optimal minimum - power beamformer design are outlined in section [ sec : design ] .the wiretapping strategies that a potential eavesdropper could employ are described in section [ sec : wiretapper ] .the resulting system performance is studied via simulation in section [ sec : sim ] , and concluding remarks are presented in section [ sec : concl ] . 
_notation _ : denotes expectation , the transpose , the hermitian transpose , is the trace operator , {p , q} ] denote the aggregate transmit beamforming matrix .the signal broadcast by the transmitter is then given by in a flat - fading scenario , the received signal at the legitimate receiver , , can be written as where is the corresponding channel matrix between the transmitter and user , and is the naturally occurring i.i.d additive white gaussian noise vector with covariance .analogous parameters can be defined for the eavesdropper , who receives the receiver uses a beamformer to recover its information , which leads to the decision variable in the case of multicast , a common information symbol with power is transmitted to all receivers .this necessitates the use of a common transmit beamformer , with .assuming the same power constraints and artificial noise properties as in section [ sec : broad ] , the transmitted signal is the received signals are now and the receiver employs a beamformer to obtain its decision variable as as will be discussed in the next section , our goal will be to minimize the total transmit power required to achieve a certain sinr for each receiver , and use all remaining power to jam the eavesdropper . to achieve this goal, we will choose the transmit and receive beamformers in such a way that the jamming signal does not impact the quality of the desired receiver s signal .assuming the transmitter is aware of the beamformers used by each of the receivers , an effective downlink channel to the receivers can be constructed for either the broadcast or multicast case as follows : ^t \label{eq : downlink}\ ] ] where or . when , the jamming signal can be chosen from the nullspace of in order to guarantee that it does not impact the desired receivers .if , the nullspace of the effective downlink channel does not exist in general , and the artificial noise could not be guaranteed to be orthogonal to the desired signals . although the artificial noise can still be constructed so as to minimize its impact at the receivers ( _ e.g. , _ by forcing it to lie in the right subspace of with smallest singular value ) , a scheduling strategy to ensure may be more appropriate in the context of this work .we note that user scheduling in wireless networks with secrecy considerations has received limited attention thus far .in either the broadcast or multicast case , the design of the transmit and receive beamformers is , in general , coupled ; that is , the choice of depends on the choice of respectively , and vice versa .one solution to this problem is to fix the beamformer on one end of the link and then optimize the other .an optimal approach would design the beamformers jointly , although at the expense of increased complexity - . in this paper , we consider both types of approaches . in the first , zero - forcing at the transmitter is used for design of the transmit beamformers in the broadcast case ; this eliminates multi - user interference at each receiver , and leads to a simple maximum - ratio combiner at each receiver . in the second approach ,we consider the optimal joint beamformer design problem for both the broadcast and multicast cases .the lack of eavesdropper csi preempts the development of beamforming designs tailored towards maximizing a particular secrecy metric ; hence we utilize existing precoding methods as discussed in the sequel . 
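a minimal numpy sketch of the artificial-noise construction just described: the effective downlink channel is stacked from the rows r_k^H H_k and the jamming vector is drawn from its right null space, so the intended receivers are untouched. the function and variable names, the power normalization and the toy dimensions (four transmit antennas, two users) are illustrative assumptions of ours.

import numpy as np

def artificial_noise(H_list, r_list, jam_power, rng=None):
    """Place the jamming signal in the null space of the effective downlink
    channel whose k-th row is r_k^H H_k (assumes K < Nt)."""
    rng = np.random.default_rng(rng)
    H_eff = np.vstack([r.conj() @ H for H, r in zip(H_list, r_list)])   # K x Nt
    _, _, Vh = np.linalg.svd(H_eff)
    null_basis = Vh[H_eff.shape[0]:].conj().T                           # Nt x (Nt - K)
    d = null_basis.shape[1]
    w = (rng.standard_normal(d) + 1j * rng.standard_normal(d)) / np.sqrt(2)
    return np.sqrt(jam_power / d) * (null_basis @ w)

rng = np.random.default_rng(1)
H_list = [rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4)) for _ in range(2)]
r_list = [np.ones(2, dtype=complex) / np.sqrt(2) for _ in range(2)]
z = artificial_noise(H_list, r_list, jam_power=1.0, rng=2)
# each intended receiver sees (r_k^H H_k) z = 0 up to numerical precision
print([abs(r.conj() @ H @ z) for H, r in zip(H_list, r_list)])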
in this section ,we adopt a modified version of the coordinated zero - forcing beamforming approach in .assume that , and for user , define as \label{eq : txbfstep1}\ ] ] the singular value decomposition ( svd ) of yields ^h , \label{eq : txbf2}\ ] ] where is the matrix of left singular vectors , is the diagonal matrix of singular values , is the right singular vector associated with the smallest ( zero ) singular value , and is the collection of right singular vectors corresponding to other singular values .evidently , is a logical choice for the transmit beamformer given the objective of nulling all multiuser interference .since receiver then sees only its desired signal in spatially white noise , the optimal receive beamformer is simply the maximum - ratio combiner : assuming that the jamming signal is orthogonal to , the sinr at user can then be written as for a target sinr , the required power allocation for user can be calculated as although relatively simple , the proposed zero - forcing algorithm in section [ sec : zf ] will not in general minimize . to minimize the transmit power necessary to achieve the desired sinr , it is necessary to jointly design the transmit and receive beamformers - .since the optimal beamformers will not be of the zero - forcing type , the downlink beamformer design problem is non - convex due to the interdependence of the problem variables .this issue can be overcome by exploiting the sinr duality of the downlink and uplink channels , which states that the minimum sum power required to achieve a set of sinr values on the downlink is equal to the minimum sum power required to achieve the same sinr vector on the dual uplink channel .therefore , as a benchmark we compute the optimum transmit / receive beamformers and power allocation that minimizes the sum transmit power while satisfying the sinr constraint per user based on .let , , and represent the transmit / receive beamformers and downlink / uplink power allocation for user at the iteration .define for as the signal and interference powers for each user : the sinr per stream on either link can then be written as where is the power allotted to user either on the downlink or the dual uplink channel .finally , define matrices and as {s , k } = \left\ { { \begin{array}{*{20}c } { g_{s , k } } & { { \text{if } } s \ne k } \\ 0 & { { \text{if } } s = k } \\\end{array } } \right.\ ] ] the iterative beamformer design that minimizes the transmit power can be summarized as follows . 1 .set the initial transmit beamformers to the zero - forcing solution , with an initial power allocation per user .compute the optimum set of receive beamformers as 2 . now consider the dual uplink channel where the transmit beamformers from step 1 serve as the receive beamformers , and vice versa .update the power allocation vector from then update the beamformer set using ( [ eq : txupdate ] ) : calculate the achieved sinr set on the uplink using ( [ eq : sinr ] ) , and go to step 3 .revert back to the downlink channel and recompute the power allocation vector using ( [ eq : power ] ) .calculate the downlink sinr set using ( [ eq : sinr ] ) and compare with the uplink sinr vector from step 2 for convergence .if the stopping criterion is not satisfied , set and return to step 2 , otherwise terminate the algorithm . 
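to make the zero-forcing step concrete, the sketch below implements one plausible reading of it in python: the transmit beamformer of user k is taken from the null space of the other users' effective rows r_j^H H_j, the receiver is a maximum-ratio combiner, the two are alternated for a few sweeps, and the per-user power is then set from the target sinr. since the exact stacking in the displayed equations is not reproduced here, this should be read as an illustration rather than the paper's precise algorithm; interference is nulled exactly only once the iteration has settled.

import numpy as np

def coordinated_zf(H_list, sinr_target, noise_var=1.0, n_sweeps=5):
    """Alternate null-space transmit beamformers and MRC receivers, then
    allocate per-user power for the requested SINR (no multiuser
    interference once the iteration has converged)."""
    K, Nt = len(H_list), H_list[0].shape[1]
    r_list = [np.ones(H.shape[0], dtype=complex) / np.sqrt(H.shape[0]) for H in H_list]
    t_list = [np.zeros(Nt, dtype=complex) for _ in range(K)]
    for _ in range(n_sweeps):
        for k in range(K):
            others = np.vstack([r_list[j].conj() @ H_list[j]
                                for j in range(K) if j != k])
            _, _, Vh = np.linalg.svd(others)
            t_list[k] = Vh[-1].conj()                 # null-space direction
            hk_t = H_list[k] @ t_list[k]
            r_list[k] = hk_t / np.linalg.norm(hk_t)   # maximum-ratio combiner
    powers = [sinr_target * noise_var /
              np.abs(r_list[k].conj() @ H_list[k] @ t_list[k])**2 for k in range(K)]
    return t_list, r_list, powers

the total information power is then the sum of the returned per-user powers, and whatever remains of the transmit budget can be handed to the artificial-noise generator sketched earlier.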
despite superficial similarities , the beamforming design problems with per - user sinr constraints for the broadcast and multicast channels are fundamentally different .many efficient numerical solutions exist for the former as cited in section [ sec : zf ] , whereas the multicast problem is known to be non - convex even for the miso case with single antennas at each receiver .a number of approximate solutions based on semidefinite relaxation techniques have been proposed for the miso multicast channel , e.g. . however , the mimo multicast beamforming problem was recently reformulated as a convex optimization by replacing the per - user sinr requirements with minimum mse constraints . in this case, the optimal receiver structure is known to be mmse , which allows an alternating iterative optimization of the transmit and receive beamformers .the minimum sum - power optimization problem can be expressed as a convex second - order cone program ( socp ) : where is the mmse constraint per user , , and . in brief , the algorithm is initialized with random receive beamformers , after which the corresponding optimal transmit beamformer is computed by solving ( [ eq : socp ] ) .the receive beamformers are updated using the mmse criterion as and the iterations continue until convergence . for consistency with our choice of sinr as the performance metric, the following equivalence relation between maximum sinr and mmse is useful : _ remark 1 _ : for both zero - forcing and joint minimum - power beamformer designs , the computations are carried out at the transmitter , which then needs to supply the receivers with information about the optimal beamformers .this can be done using a limited ( quantized ) feedforward scheme , as proposed in ._ remark 2 _ : the assumption of perfect csit of the intended receivers is admittedly a strong one . robust beamforming design for the mimo downlink with multi - antenna receivers is an ongoing research problem , with some recent results provided in .however , incorporating artificial noise into any such robust beamforming schemes is not a straightforward extension , since the lack of exact receiver csi at the transmitter would no longer allow any artificial noise to be perfectly orthogonal to the intended receivers .the authors have conducted a perturbation analysis to capture the performance degradation in single - user mimo wiretap channels , and an extension to the downlink case is in progress . 
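for the multicast case, the per-user mmse receiver and the mmse-sinr equivalence can be written down compactly; the python fragment below shows them for a single common stream with unit symbol power and artificial noise orthogonal to the receiver. the variable names are our own, and the socp for the transmit beamformer itself is not reproduced here.

import numpy as np

def mmse_receiver(Hk, b, noise_var=1.0):
    """MMSE receive beamformer for y_k = H_k b s + n_k with a unit-power
    common symbol s: w_k = (H_k b b^H H_k^H + noise_var I)^{-1} H_k b."""
    hb = Hk @ b
    C = np.outer(hb, hb.conj()) + noise_var * np.eye(Hk.shape[0])
    return np.linalg.solve(C, hb)

def mmse_and_sinr(Hk, b, noise_var=1.0):
    """With no multiuser interference the achieved SINR is |H_k b|^2 / noise_var
    and the minimum MSE satisfies mmse = 1 / (1 + sinr)."""
    sinr = np.real(np.vdot(Hk @ b, Hk @ b)) / noise_var
    return 1.0 / (1.0 + sinr), sinr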
_ remark 3 _ : as mentioned previously , the assumption that the number of receivers is less than the number of transmit antennas in section ii - c can be relaxed by implementing a user selection stage prior to transmission .the authors have shown in that a greedy algorithm which schedules the user set based on maximizing the transmit power available for artificial noise performs close to an optimal exhaustive search in terms of eavesdropper ber .we consider two types of eavesdropper strategies : ( 1 ) a simple linear receiver approach in which the eavesdropper attempts to maximize the sinr of the data stream she is interested in decoding , and ( 2 ) a multi - user decoding scheme in which the eavesdropper uses maximum likelihood detection to decode all information - bearing waveforms .we begin by illustrating approach ( 1 ) for the broadcast case , with the extension to the multicast channel being straightforward .assume that the eavesdropper seeks to recover the data stream of user from her received signal given in ( [ eq : ye_eve ] ) .the interference - plus - noise covariance matrix given that is the symbol of interest is the maximum sinr beamformer for the data stream of user is then given by the use of an optimal beamformer here presumes that , is somehow known at the eavesdropper . using this approach ,the sinr at the eavesdropper can be calculated to be an eavesdropper with more extensive computational resources will attack the mimo broadcast network using a more sophisticated approach . in general , the optimal decoder at the eavesdropper in terms of minimizing the symbol error rate would be the following maximum likelihood ( ml ) detector : where is the signal space from which is drawn , and is the interference - plus - noise covariance matrix perceived by the eavesdropper . in the next section , we present numerical examples that show the sinr and the ber that the eavesdropper experiences with the proposed jamming scheme .we investigate the performance of the eavesdropper and the desired receivers as a function of the target sinr at the receivers and the total available transmit power . in each simulation , we assume a transmitter with antennas , legitimate receivers with antennas each , and an eavesdropper with antennas .the channel matrices for all links are composed of independent gaussian random variables with zero mean and unit variance .the background noise power is assumed to be the same for all receivers and the eavesdropper : .the algorithm of is used to implement the optimal joint beamformer design , and the socp in ( [ eq : socp ] ) was solved using the matlab optimization toolbox .all of the displayed results are calculated based on an average of 5000 trials with independent channel and noise realizations . for users , antennas.,width=336 ]figure [ fig_rho ] displays the average fraction of the transmit power allocated to data and artificial noise by the zero - forcing and the optimal minimum transmit power beamforming algorithms with an sinr threshold of 5 db .the joint design requires roughly less transmit power at small to intermediate power levels , albeit with a significantly greater level of complexity .the curve labeled `` user 1,zf '' represents the fraction of the total power assigned to an arbitrary user ( referred to as user 1 ) among the three legitimate receivers . 
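the single-stream wiretapping strategy above amounts to an mvdr-like filter; a small python sketch is given below, with hypothetical argument names of ours: He the eavesdropper channel, T the matrix of transmit beamformers, powers the per-stream powers, z_cov the artificial-noise covariance. it computes w = Q_k^{-1} h_k and the resulting sinr h_k^H Q_k^{-1} h_k for the stream of interest.

import numpy as np

def eavesdropper_max_sinr(He, T, powers, z_cov, k, noise_var=1.0):
    """Max-SINR receive filter for stream k at the eavesdropper: Q_k collects
    the other streams, the artificial noise and the thermal noise."""
    h_k = np.sqrt(powers[k]) * (He @ T[:, k])
    Qk = noise_var * np.eye(He.shape[0], dtype=complex) + He @ z_cov @ He.conj().T
    for j in range(T.shape[1]):
        if j != k:
            hj = np.sqrt(powers[j]) * (He @ T[:, j])
            Qk += np.outer(hj, hj.conj())
    w = np.linalg.solve(Qk, h_k)
    sinr = np.real(h_k.conj() @ np.linalg.solve(Qk, h_k))
    return w, sinr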
for users , antennas.,width=336 ]figure [ fig_sinr ] shows the achieved sinr levels versus transmit power for user 1 and the average sinr averaged over all three streams for the eavesdropper when using single - user detection .the legitimate receiver almost always achieves the desired sinr target of 5 db , while the eavesdropper has a significantly lower sinr due to the artificial noise .the sinr of user 1 is slightly below 5 db for transmit powers of 10 and 15 db , since there were a few trials for which the 5 db sinr target could not be achieved . in such cases ,the transmitter devotes all power to the desired receivers and none to jamming , and the resulting sinr is averaged in with the other trials .note that there is not a significant difference in performance for the eavesdropper whether the zero - forcing or optimal broadcast beamformers are used . for users , antennas.,width=336 ]figure [ fig_ber ] compares the eavesdropper s ber with and without artificial noise when the eavesdropper employs mimo ml detection , assuming an uncoded bpsk - modulated information signal and zero - forcing transmit beamforming . for low target sinrs, we observe a significant degradation in the eavesdropper s interception capabilities , e.g. , by approximately 10.5 db at .a more modest gain of 2.5 db is achieved at for intermediate target sinrs . at high sinr thresholds ,the two curves converge since the transmitter is constrained to allocate progressively smaller fractions of the total power to jamming . for users , antennas.,width=336 ] figure [ fig_multicastrho ] displays the fraction of the transmit power allocated to data and artificial noise by the multicast minimum transmit - power beamforming algorithm of section [ sec : mmsebf ] , with sinr thresholds of db , 10 db .the lack of inter - user interference allows the transmitter to allocate more power for jamming compared to the broadcast case for the same transmit power . for users , antennas.,width=336 ]figure [ fig_multicastsinr ] shows the average achieved sinr levels at the intended receivers and the eavesdropper versus transmit power for user sinr thresholds of db , 10 db . as before ,the legitimate receivers achieve the desired sinr targets , while the eavesdropper s performance is degraded .however , the degradation in the eavesdropper s sinr is not as severe as in the broadcast case since there is no multiuser interference to compound the effect of the artificial noise .this paper has examined beamforming strategies combined with artificial noise for providing confidentiality at the physical layer in multiuser mimo wiretap channels . for the mimo broadcast channel with single - user detection at the eavesdropper, the zero - forcing beamformer design is shown to provide an acceptable level of performance in terms of relative sinr when compared to optimal joint transmit - receive beamforming algorithms .the use of artificial noise is meaningful even when the eavesdropper employs optimal ml joint detection for the information vector . for the mimo multicast channel ,the degradation in the eavesdropper s sinr as the transmit power increases is not as severe , but artificial noise is still seen to be effective .r. liu , i. maric , p. spasojevic , and r. d. yates , discrete memoryless interference and broadcast channels with confidential messages : secrecy rate regions , " _ ieee trans .inf . theory _ ,54 , no . 6 , pp . 2493 - 2507 , june 2008 . z. pan , k. wong , and t. 
ng , generalized multiuser orthogonal space - division multiplexing , " _ ieee trans . wireless commun_. , vol .3 , no . 6 , pp .1969 - 1973 , nov . 2004 .q. spencer , a. l. swindlehurst , and m. haardt , zero - forcing methods for downlink spatial multiplexing in multiuser mimo channels , " _ ieee trans .signal process .462 - 471 , feb . 2004 .d. p. palomar and m. a. lagunas , joint transmit - receive space - time equalization in spatially correlated mimo channels : a beamforming approach , " _ ieee j. sel .areas commun . , _ , vol .730 - 743 , june 2003 .q. spencer and a. swindlehurst , a hybrid approach to spatial multiplexing in multi - user mimo downlinks , " _ eurasip journ .wireless commun . and network_. , pp .236 - 247 , dec . 2004 .m. codreanu , a. tlli , m. juntti , and m. latva - aho , joint design of tx - rx beamformers in mimo downlink channel , " _ ieee trans . signal process .9 , pp . 4639 - 4655 , sep .sidiropoulos , t.n .davidson , and z. q. luo , transmit beamforming for physical layer multicasting " , _ ieee trans .signal process_. , vol .6 , part 1 , pp .2239 - 2251 , june 2006 .e. karipidis , n.d .sidiropoulos , z. q. luo , quality of service and max - min - fair transmit beamforming to multiple co - channel multicast groups " , _ ieee trans .signal process_. , vol .1268 - 1279 , mar .2008 . c. b. chae , d. mazzarese , t. inoue , and r. w. heath , coordinated beamforming for the multiuser mimo broadcast channel with limited feedforward , " _ ieee trans .signal process .6044 - 6056 , dec . 2008g. zheng , k. k. wong , and n. tung - sang , robust linear mimo in the downlink : a worst - case optimization with ellipsoidal uncertainty regions , " _ eurasip j. adv . signal process ._ , pp . 1 - 15 , 2008 .a. mukherjee and a. l. swindlehurst , `` robust beamforming for secrecy in mimo wiretap channels with imperfect csi , '' submitted to _ieee trans .signal process_. , july 2009 .a. mukherjee and a. l. swindlehurst , user selection in multiuser mimo systems with secrecy considerations , " submitted to _ asilomar conf . on signals , systems , and computers_ 2009 , pacific grove , ca .
|
this paper examines linear beamforming methods for secure communications in a multiuser wiretap channel with a single transmitter , multiple legitimate receivers , and a single eavesdropper , where all nodes are equipped with multiple antennas . no information regarding the eavesdropper is presumed at the transmitter , and we examine both the broadcast mimo downlink with independent information , and the multicast mimo downlink with common information for all legitimate receivers . in both cases the information signal is transmitted with just enough power to guarantee a certain sinr at the desired receivers , while the remainder of the power is used to broadcast artificial noise . the artificial interference selectively degrades the passive eavesdropper s signal while remaining orthogonal to the desired receivers . we analyze the confidentiality provided by zero - forcing and optimal minimum - power beamforming designs for the broadcast channel , and optimal minimum - mse beamformers for the multicast channel . numerical simulations for the relative sinr and ber performance of the eavesdropper demonstrate the effectiveness of the proposed physical - layer security schemes .
|
real life networks , whether made by nature , ( e.g. neural , metabolic and ecological networks ) , or made by human ( e.g. the world wide web , power grids , transport networks and social networks of relations between individuals or institutes ) , have special features which is a blend of those of regular networks on the one hand and completely random ones on the other hand . to study any process in these networks,(the spreading of an epidemic in human society , a virus in the internet , or an electrical power failure in a large city , to name only a few ) ,an understanding of their topological and connectivity properties is essential ( for a review see and references therein ) .recently obtained data from many real networks show that like random networks , they have low diameter , and like regular networks , they have high clustering . since the pioneering work of watts and strogatz , these networks have attracted a lot of attentions and have been studied from various directions .+ in contrast to most of the models studied so far , many real networks like the world wide web , neural , power grids , metabolic and ecological networks have directed one - way links .these types of networks may have significant differences in both their static and dynamic properties with the watts - strogatz ( ws ) model and its variations .the presence of directed links affects strongly many of the properties of a network .for example , for the same pattern of shortcuts , the average shortest path in an directed network is longer than that in an undirected one , due to the presence of bonds with the wrong directions ( blocks ) in many paths .so is the spreading time of any dynamic effect on the lattice .+ consider the quantity defined as the average number of sites which are visited at least once when we start a naive spreading process at a site and continue it for steps .note that we mean an average over an ensemble of networks and initially infected sites and by the naive spreading process we mean that at each step of the spreading process all the neighbors of an infected site are equally infected .the quantity may be taken as a crude approximation for the number of people who have been infected by a contiguous disease after time steps has elapsed since the first person has been infected .clearly this is a simplification of the real phenomena , since in real world a disease may not affect an immunized person or may not transmit with certainty in a contact .however as a first approximation , may give a sensible measure of the effect in the whole network .since in an directed network , an effect only spreads to those neighbors into which there are correctly directed links , there will be pronounced differences in this important quantity between an directed and an undirected network . as a concrete example consider a ring with sites , without any shortcuts , where to emphasize the absence of shortcuts , we denote by . if all the links have the same direction , we have , and if all of them are bidirectional , we have . in both casesthe whole lattice gets infected after a finite time .however if the links are randomly directed then may be much lower and furthermore , there is a finite probability that only a small fraction of the whole lattice gets infected .+ adding shortcuts to this ring of course has a positive effect on the spreading . 
in a sensewe have a chance to see the interplay of two different concepts of small worlds in these networks .the size of the world as a whole may be small due to the ease of communication with the remote points provided by long range connections , however the world accessible to an individual may be small due to the absence of properly directed links to connect it to the outside world .+ it is therefore natural to ask how the presence of directed links and ( or ) directed shortcuts affects quantitatively the small world properties of a network ? how we can make a simple model of a small world network with such random directions? a ws - type model for these networks may be as shown in figure ( [ fig-1 ] ) .however due to their complexity , these networks should usually be studied by numerical or simulation methods and they seldom amend themselves to exact analytical treatment .as we will show in this paper , with slight simplification one can introduce simpler models which although retain most of the small world features , are still amenable to analytical treatment .this is what we are trying to do in this paper . in this paperwe introduce one such model following our earlier work which was in turn inspired by the work of .the basic simplifying feature of these networks is that all the shortcuts are made via a central site , figure ( [ fig-2 ] ) .for such a network many of the small world quantities , can be calculated exactly . ) ., width=302 ] in particular , once defined above is calculated , many other quantities like the average shortest path between two sites can be obtained .an exact calculation of is however difficult for the case where both the shortcuts and the links have random directions .we therefore proceed in two steps to separate the effects of randomness in the two types of connections .first , in section 2 , we remove the shortcuts and calculate exactly for a ring with random links , figure ( [ fig-3 ] ) . ., width=302 ] to emphasize the absence of shortcuts we denote this quantity by .note that depends only on the structure of the underlying ring and its short - range connections . then in section 3, we consider only the effects of randomly directed shortcuts , that is we let directions of the links on the ring to be regular and fixed say clockwise , and calculate exactly , where again for emphasis on the shortcuts we denote this quantity by .+ we then argue , in section 4 that in the scaling limit where the number of sites goes to infinity with the number of shortcuts kept finite , most of the spreading takes place via the links and only from time to time it propagates to remote points via the shortcuts . in this limitit is plausible to suggest a form for which takes into account the effect of both the random links and the shortcuts in the form . this may not be an exact relation but as we will see it will give a fairly good approximation of , as shown by the agreement of our analytical results and the results of simulations .this then means that in more complicated networks , one can separate the effects of short and long range connections and superimpose their effect in a suitable way . 
we conclude the paper with a discussion .consider a regular ring of sites whose bonds are directed randomly .each link may be directed clockwise with probability , counterclockwise with probability , and bidirectional with probability .+ thus we have a problem similar to bond percolation in a small world network .suppose that at time site number is infected with a virus .we ask the following questions : + after seconds how many sites have been infected on the average ? what is the average speed of propagation of this decease in the network ?these questions have obvious answers for rings with regularly directed or bi - directional bonds , namely the number of infected sites are respectively and , with corresponding speeds of propagation being and . in the randomly directed network, the situation is different .for example if both neighbors of site are directed into this site , this site can not affect any other site of the network .such a site being effectively isolated has an _ accessible world _ of zero size ( figure[fig-3 ] ) . to proceed with exact calculation ,consider the right hand side of site .the probability that exactly extra sites to the right have been infected is , and the probability that exactly extra sites have been infected is . therefore the average number of extra sites infected to the right of the original site is .\end{aligned}\ ] ] going to the large limit where , we find the simple result the same type of reasoning gives the number of sites infected to the left and thus the total number of infected sites will be : where . what are the meaning of the scaled variables ?the parameter is the total number of sparse blocked sites in the way of propagation to the right , with a similar meaning for . is the fraction of infected sites up to time . in a bidirectional lattice , all the sitescould be infected after the passage of seconds , or at and if passes , some of the sites become doubly visited .therefore it is plausible for the sake of comparison to define a quantity in our ring , namely the average size of the _ accessible _ world as , which turns out to be : it is seen that the presence of only a small number of blocked bonds causes a significant drop in the average size of this accessible world . for example a value of leads to .the long range connections ( shortcuts ) make the world small with the ease of communication they provide , however blockades make the world small in this new sense . + the speed of propagation is found from in the symmetric case where , equation ( [ dd ] ) simplifies to : with note that at the early stages of spreading when , and the effects of blocked bonds has not yet been experienced , the infection propagates with speed equal to as in a regular network .the effect of blocking comes into play when becomes comparable to . + as a few number of shortcuts may enhance the speed of propagation , a few number of blocked bonds may have the opposing effect .first the blocks reduce the speed of propagation as is clear from ( [ dddd ] ) and second and more importantly they reduce the number of accessible sites , or the size of accessible world .it will thus be of interest to see how these two effects compete in a random network where there are both shortcuts and blocks .we will study this in the final section of this paper . 
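the quantity derived above is also easy to estimate numerically; the short python sketch below runs the naive spreading process on a ring whose links are clockwise, counterclockwise or bidirectional with the stated probabilities, and averages the number of infected sites over many realizations. the particular probabilities and sizes used are illustrative choices of ours.

import numpy as np

def ring_spread(N, t_max, p_cw=0.05, p_ccw=0.05, rng=None):
    """One realization of V(t): naive spreading from site 0 on a ring whose
    link i--(i+1) is clockwise (prob p_cw), counterclockwise (prob p_ccw)
    or bidirectional (prob 1 - p_cw - p_ccw)."""
    rng = np.random.default_rng(rng)
    link = rng.choice(3, size=N, p=[p_cw, p_ccw, 1.0 - p_cw - p_ccw])
    infected = np.zeros(N, dtype=bool)
    infected[0] = True
    V = []
    for _ in range(t_max):
        new = infected.copy()
        for i in np.flatnonzero(infected):
            if link[i] in (0, 2):             # i can pass clockwise to i+1
                new[(i + 1) % N] = True
            if link[(i - 1) % N] in (1, 2):   # i can pass counterclockwise to i-1
                new[(i - 1) % N] = True
        infected = new
        V.append(infected.sum())
    return np.array(V)

runs = np.array([ring_spread(1000, 200, rng=s) for s in range(200)])
print(runs.mean(axis=0)[-1])   # estimate of the average size of the accessible world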
to this endwe first study the effect of directed shortcuts in an otherwise regular ring with no blocks .in this section we are to consider only the effect of randomly directed shortcuts in the spreading process and obtain exactly the function for this network , figure ( [ fig-4 ] ) .note that this function has the same meaning as , except that for emphasis on the role of shortcuts in it we have adopted a new name for it .we fix a regular clockwise ring . between a site and the centerthere is a shortcut going into the center with probability and out of the center with probability .the site remains unconnected to the center with probability .the average number of connections into and out of the center are respectively and .+ consider sites and .we want to find the probability that the shortest path between these two sites be of length , a probability which we denote by . a typical shortest path of length connecting these two nodes is shown in figure ( [ fig-4 ] ) , where the first inward connection to the center occurs at site and the last outward connection from the center occurs at site .such a path occurs with probability .summing over all such configurations gives us the probability for the shortest path between sites and to be of length .for , we have : + ,\ ] ] and is determined from normalization : note that dose not depend on , a property which is true for standard small world networks .+ now consider a naive spreading process starting at site .the number of sites affected up to time , denoted by , builds up in two ways , via the links on the ring and via the shortcuts .the first way gives a contribution and the second way gives a contribution where is the number of sites beyond direct reach at time which has been multiplied by the probability of any of these sites being at a distance shorter than to site via a shortcut .putting this together we find : + \end{aligned}\ ] ] in the scaling limit where where and are kept fixed and , we find : in the symmetric case where this equation simplifies to : with the speed of propagation figure ( [ fig-5 ] ) shows the speed of propagation as a function of time for several values of .we now come to the problem of composing both the blocks and the shortcuts in a model of small world network .that is we consider the ring of figure ( [ fig-2 ] ) where randomly directed shortcuts are added to a ring with randomly directed links .we can not obtain exact expressions for this network from first principle probability considerations .however we can obtain expressions for in the scaling limit by a heuristic argument and compare our results with those of simulations .consider equation ( [ 18s ] ) .this equation shows how the presence of randomly directed shortcuts in a regular clockwise ring affects the spreading effect . 
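for the ring with shortcuts through a central site, the distribution of distances, and hence the spreading curve, can be checked by a breadth-first search; the python sketch below counts, for one realization, the number of sites within a given number of steps from a start site. here a passage through the hub costs two steps, one per shortcut, and the shortcut densities rho_in and rho_out as well as the sizes are illustrative assumptions of ours, not a reproduction of the paper's conventions.

import numpy as np
from collections import deque

def central_site_spread(N, t_max, rho_in=0.001, rho_out=0.001, rng=None):
    """Sites reached within t steps on a clockwise ring with a central hub:
    site i -> hub with prob rho_in, hub -> site i with prob rho_out."""
    rng = np.random.default_rng(rng)
    to_hub = rng.random(N) < rho_in
    from_hub = np.flatnonzero(rng.random(N) < rho_out)
    dist = np.full(N + 1, -1)       # index N is the hub
    dist[0] = 0
    q = deque([0])
    while q:
        u = q.popleft()
        nbrs = from_hub if u == N else [(u + 1) % N] + ([N] if to_hub[u] else [])
        for v in nbrs:
            if dist[v] < 0:
                dist[v] = dist[u] + 1
                q.append(v)
    d = dist[:N]
    return np.array([np.count_nonzero((d >= 0) & (d <= t)) for t in range(1, t_max + 1)])

R = central_site_spread(N=5000, t_max=300, rng=0)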
on the other handwe know that the number of sites infected up to time in the absence of shortcuts , has changed to .due to the rarity of shortcuts compared to the regular bonds , most of the spreading takes place via the local bonds , the role of shortcuts is just to make multiple spreading processes happen in different regions of the network .this role is the same whatever the underlying lattice is , and therefore for a general network , at least in the scaling regime , we can assume that equation ( [ 18s ] ) can be elevated to , i.e ; for a fully random network where randomly directed shortcuts are distributed on a ring with already random links , we assume that this relation holds true with taken from ( [ dd ] ) .this suggestion may not provide an exact solution for the network , however we think it provides a fairly good approximation . in factexact solution for the case where all the links on the ring are bidirectional is possible and it confirms the above ansatz , that is we obtain an exact expression only by setting in the above formula .moreover this separation of the effect of short and long range connections may be also useful in more complicated networks .whether this assumption is plausible or not can be checked by comparison with simulations .the results of simulations are compared with those of equations ( [ dd ] ) and([18 ] ) in figure ( [ fig-6 ] ) and ( [ fig-7 ] ) . for a fully random network in the case .analytic results(lines ) versus simulations(symbols ) which have been averaged over realizations of the network.,width=302 ] for a fully random network in the case .analytic results(lines ) versus simulations(symbols ) .,width=302 ]once the functions or are obtained , the static properties of the network i.e. , the average shortest path between two arbitrary sites and its probability distribution can be calculated directly .+ since by definition is the number of sites whose shortest distance to site is less than or equal to , we find the number of sites whose shortest distance is exactly to be . since site is an arbitrary site , we find the probability distribution of the shortest distance between two arbitrary sites which are accessible to each other as : , where is the average size of the accessible world .( there is of course a slight approximation here in that we are taking averages of the denominator and numerator separately . )+ for a regular ring with shortcuts , , since all the sites are accessible .we will discuss the case of random rings in the sequel . in the scaling regime the above formulas transform to : note that is normalized , i.e. . the average shortest path for the network of figure ( [ fig-4 ] ) when , turns out to be : this is in accord with the result of .this formula shows that the presence of a small number of shortcuts , causes a significant drop in the average shortest path from to very small values . in this sensethe world gets smaller by long range connections .+ we now study the static effects of random directed bonds on a ring without shortcuts .the presence of blocks makes the world small in a different sense , namely for each site the number of accessible sites gets smaller .in fact the average size of the world accessible to a site is not anymore but it is given by ( see the paragraph leading to equation ( [ ddd ] ) ) .hence the probability of shortest paths is given by , or in the scaling limit by this probability is normalized , i.e. 
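The static quantities discussed here, the distribution of shortest-path lengths among mutually accessible sites and the average size of the accessible world, can be estimated directly by running a breadth-first search from every site of a sampled network. The sketch below does this for the fully random network, i.e. randomly directed ring links combined with randomly directed shortcuts through the hub; all parameter values and names are illustrative assumptions rather than values from the text.

```python
import random
from collections import deque

def build_full_random(L, p, q, rho_in, rho_out, rng):
    """Directed ring links (clockwise w.p. p, counterclockwise w.p. q,
    otherwise bidirectional) plus randomly directed shortcuts via a hub."""
    HUB = L
    out = [[] for _ in range(L + 1)]
    for i in range(L):
        j = (i + 1) % L
        r = rng.random()
        if r < p:
            out[i].append(j)
        elif r < p + q:
            out[j].append(i)
        else:
            out[i].append(j); out[j].append(i)
        if rng.random() < rho_in:
            out[i].append(HUB)
        if rng.random() < rho_out:
            out[HUB].append(i)
    return out, HUB

def path_statistics(L=400, p=0.02, q=0.02, rho_in=0.02, rho_out=0.02, seed=3):
    rng = random.Random(seed)
    out, hub = build_full_random(L, p, q, rho_in, rho_out, rng)
    lengths, world_sizes = [], []
    for s in range(L):                         # BFS from every ring site
        dist = {s: 0}
        queue = deque([s])
        while queue:
            i = queue.popleft()
            for j in out[i]:
                if j not in dist:
                    dist[j] = dist[i] + 1
                    queue.append(j)
        reached = [d for site, d in dist.items() if site not in (hub, s)]
        world_sizes.append(len(reached))
        lengths.extend(reached)
    avg_l = sum(lengths) / len(lengths)
    avg_world = sum(world_sizes) / len(world_sizes)
    print(f"<l> = {avg_l:.2f}, <S> = {avg_world:.1f}, <l>/<S> = {avg_l / avg_world:.3f}")

if __name__ == "__main__":
    path_statistics()
```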
.we obtain from ( [ static4 ] ) however in order to assess the situation in this network , we should compare the average shortest path with the size of this small world itself , namely we should calculate . inserting equation ( [ dds ] ) into ( [ static5 ] )we find : figure ( [ fig-8 ] ) shows both the average size of the accessible world and the ratio of the average shortest path to the size of accessible world as a function of the number of blocks .it is seen that for , when there is no block , the size is and the average of the shortest path is as it should be . with a few number of blocks the size drops dramatically and the average of shortest path within the world increases . note that with increasing the average shortest path increases to its maximum value of . + for the fully random network , we use equations ( [ eee ] ) and ( [ static4 ] ) to obtain the average of shortest path .the result is shown in figure ( [ fig-9 ] ) for several values of the parameters .we have studied the effect of directed short and long range connections in a simple model of small world network . in our modelsall the shortcuts pass via a central site in the network .this makes possible an almost exact calculation of many of the properties of the network .we have calculated the function , defined as the number of sites affected up to time when a naive spreading process starts in the network .as opposed to shortcuts , the presence of un - favorable bonds has a negative effect on this quantity. hence the spreading process may be able to affect only a fraction of the total sites of the network .we have defined this fraction to be the average size of the accessible world in our model and have calculated it exactly for our model .we have studied also the interplay of shortcuts , and un - favorable bonds on the small world properties like the size of accessible world , the speed of propagation of a spreading process , and the average shortest path between two arbitrary sites .our results show that one can separately take into account the effect of randomness in the directions of shortcuts and the short - range connections in the underlying lattice and at the end super - impose the two effects in a suitable way .we expect that this will hold also in more complicated lattices of small world networks .
|
We investigate the effect of directed short- and long-range connections in a simple model of a small-world network. The model is constructed so that many quantities of interest can be determined by an exact analytical method. We calculate the number of sites affected up to time t when a naive spreading process starts in the network. As opposed to shortcuts, the presence of unfavorable bonds has a negative effect on this quantity, so the spreading process may not be able to reach the whole network. We define and calculate the average size of the accessible world in our model. The interplay of shortcuts and unfavorable bonds on small-world properties is studied.
|
sensor networks consist of a large number of small , inexpensive sensor nodes .these nodes have small batteries with limited power and also have limited computational power and storage space .when the battery of a node is exhausted , it is not replaced and the node dies . when sufficient number of nodes die , the network may not be able to perform its designated task .thus the life time of a network is an important characteristic of a sensor network and it is tied up with the life time of a node .various studies have been conducted to increase the life time of the battery of a node by reducing the energy intensive tasks , e.g. , reducing the number of bits to transmit ( , ) , making a node to go into power saving modes : ( sleep / listen ) periodically ( ) , using energy efficient routing ( , ) and mac ( ) .studies that estimate the life time of a sensor network include .a general survey on sensor networks is which provides many more references on these issues . in this paperwe focus on increasing the life time of the battery itself by energy harvesting techniques ( , ) .common energy harvesting devices are solar cells , wind turbines and piezo - electric cells , which extract energy from the environment . among these, solar harvesting energy through photo - voltaic effect seems to have emerged as a technology of choice for many sensor nodes ( , ) . unlike for a battery operated sensor node , now there is potentially an _ infinite _ amount of energy available to the node .hence energy conservation need not be the dominant theme .rather , the issues involved in a node with an energy harvesting source can be quite different .the source of energy and the energy harvesting device may be such that the energy can not be generated at all times ( e.g. , a solar cell ) .however one may want to use the sensor nodes at such times also .furthermore the rate of generation of energy can be limited .thus one may want to match the energy generation profile of the harvesting source with the energy consumption profile of the sensor node .it should be done in such a way that the node can perform satisfactorily for a long time , i.e. , at least energy starvation should not be the reason for the node to die . furthermore , in a sensor network , the mac protocol , routing and relaying of data through the network may need to be suitably modified to match the energy generation profiles of different nodes , which may vary with the nodes . in the following we survey the literature on sensor networks with energy harvesting nodes .early papers on energy harvesting in sensor networks are and .a practical solar energy harvesting sensor node prototype is described in .a good recent contribution is .it provides various deterministic theoretical models for energy generation and energy consumption profiles ( based on traffic models and provides conditions for _ energy neutral operation _ ,, when the node can operate indefinitely . in a sensor node is considered which is sensing certain interesting events .the authors study optimal sleep - wake cycles such that the event detection probability is maximized .a recent survey is which also provides an optimal sleep - wake cycle for solar cells so as to obtain qos for a sensor node .mac protocols for sensor networks are studied in , and .a general survey is available in and .throughput optimal opportunistic mac protocols are discussed in . 
in this paperwe summarize our recent results ( ) , on sensor networks with energy harvesting nodes and based on them propose new schemes for scheduling a mac for such networks .the motivating application is estimation of a random field which is one of the canonical applications of sensor networks .the above mentioned theoretical studies are motivated by other applications of sensor networks . in our application ,the sensor nodes sense the random field periodically . after sensing ,a node generates a packet ( possibly after efficient compression ) .this packet needs to be transmitted to a central node , possibly via other sensor nodes . in an energy harvesting node , sometimes there may not be sufficient energy to transmit the generated packets ( or even sense ) at regular intervals and then the node may need to store the packets till they are transmitted .the energy generated can be stored ( possibly in a finite storage ) for later use .initially we will assume that most of the energy is consumed in transmission only .we will relax this assumption later on .we find conditions for energy neutral operation of a node , i.e. , when the node can work forever and its data queue is stable .we will obtain policies which can support maximum possible data rate .we also obtain energy management ( power control ) policies for transmission which minimize the mean delay of the packets in the queue .we use the above energy mangement policies to develop channel sharing policies at a mac ( multiple access channel ) used by energy harvesting sensor nodes .we are currently investigating appropriate routing algorithms for a network of energy harvesting sensor nodes .the paper is organized as follows .section [ model ] describes the model for a single node and provides the assumptions made for data and energy generation .section [ stability ] provides conditions for energy neutral operation of a node .we obtain stable , power control policies which are throughput optimal .section [ opt ] obtains the power control policies which minimize the mean delay via markov decision theory .a greedy policy is shown to be throughput optimal and provides minimum mean delays for linear transmission .section [ general ] provides a throughput optimal policy when the energy consumed in sensing and processing is non - negligible . a sensor node with a fading channel is also considered .section [ simulation ] provides simulation results to confirm our theoretical findings and compares various energy management policies .section [ tree ] introduces a multiple access channel ( mac ) for energy harvesting nodes .section [ orthogonal ] provides efficient energy management schemes for orthogonal mac protocols .sections [ opportunistic ] and [ csma ] consider macs with fading channels ( orthogonal and csma respectively ) .section [ msimulation ] compares the different mac policies via simulations .section [ conclude ] concludes the paper .in this section we present our model for a single energy harvesting sensor node .we consider a sensor node ( fig . [ fig1 ] ) which is sensing a random field and generating packets to be transmitted to a central node via a network of sensor nodes . the system is slotted . during slot ( defined as time interval ] let -\epsilon ) \label{disp}\ ] ] where is an appropriately chosen small constant ( see statement of theorem 1 ) . we show that it is a throughput optimal policy , i.e. 
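A minimal discrete-time simulation of a single node makes the comparison between policies concrete. The sketch below assumes Bernoulli packet arrivals, exponentially distributed energy arrivals, queue and energy recursions of the form q_{k+1} = (q_k - g(T_k))^+ + X_k and E_{k+1} = E_k - T_k + Y_k, and a concave rate function g(t) = 0.5 log(1+t); these modelling choices and parameter values are assumptions made for illustration, not a reproduction of the exact setting in the text.

```python
import math, random

def simulate_node(policy, g, n_slots=200000, lam=0.25, e_mean=1.0, seed=0):
    """Single energy-harvesting node.  Bernoulli packet arrivals (rate lam),
    exponential energy arrivals (mean e_mean); q is the data queue and e the
    stored energy.  `policy(q, e)` returns the energy spent in the slot."""
    rng = random.Random(seed)
    q = e = 0.0
    total_q = 0.0
    for _ in range(n_slots):
        t = min(e, policy(q, e))               # cannot spend more than is stored
        q = max(q - g(t), 0.0) + (1 if rng.random() < lam else 0)
        e = e - t + rng.expovariate(1.0 / e_mean)
        total_q += q
    return total_q / n_slots

if __name__ == "__main__":
    g = lambda t: 0.5 * math.log(1.0 + t)                  # concave rate function
    E_Y, eps = 1.0, 0.05
    to_policy = lambda q, e: E_Y - eps                     # throughput-optimal (TO)
    greedy = lambda q, e: math.expm1(min(2.0 * q, 50.0))   # g^{-1}(q), capped
    print("mean queue, TO    :", round(simulate_node(to_policy, g), 2))
    print("mean queue, greedy:", round(simulate_node(greedy, g), 2))
```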
, using this with satisfying the assumptions in lemma 1 , is asymptotically stationary and ergodic .* theorem 1 * if are stationary , ergodic and is continuous , nondecreasing , concave then if < g(e[y]) ] ) , i.e. , it has a unique , stationary , ergodic distribution and starting from any initial distribution , converges in total variation to the stationary distribution .henceforth we denote the policy ( [ disp ] ) by to . from results on gi / gi/1 queues ( ) , if are _ iid _ , < g(e[y ] ) , t_k = min(e_k , e[y ] - \epsilon) ] for some then the stationary solution of ( [ eqn1 ] ) satisfies < \infty ] .if is linear then this coincides with the necessary condition .if is strictly concave then < g(e[y ] ) ] . thus provides a strictly smaller stability region .we will be forced to use this policy if there is no buffer to store the energy harvested .this shows that storing energy allows us to have a larger stability region .we will see in section [ simulation ] that storing energy can also provide lower mean delays . although to is a throughput optimal policy , if is small , we may be wasting some energy .thus , it appears that this policy does not minimize mean delay .it is useful to look for policies which minimize mean delay .based on our experience in , the greedy policy where , looks promising . in theorem 2, we will show that the stability condition for this policy is < e[g(y ) ] ] close to ] .therfore , we can improve upon it by saving the energy -\epsilon - g^{-1}(q_k)) ] .however for a log function , using a large amount of energy is also wasteful even when .taking into account these facts we improve over the to policy as + 0.001 ( e_k - cq_k)^+ ) ) \label{onestar}\ ] ] where is a positive constant .the improvement over the to also comes from the fact that if is large , we allow ] .one advantage of ( [ disp ] ) over ( [ eqn5 ] ) and ( [ onestar ] ) is that while using ( [ disp ] ) , after some time -\epsilon ] . thus one can use this policy even if exact information about is not available ( measuring may be difficult in practice ) .in fact , ( [ disp ] ) does not need even while ( [ eqn5 ] ) either uses up all the energy or uses and hence needs only exactly .now we show that under the greedy policy ( [ eqn5 ] ) the queueing process is stable when < e[g(y)] ] then under the greedy policy ( [ eqn5 ] ) , has an ergodic set . the above result will ensure that the markov chain is ergodic and hence has a unique stationary distribution if is irreducible .a sufficient condition for this is < 1 ] because then the state can be reached from any state with a positive probability . in general , can have multiple ergodic sets .then , depending on the initial state , will converge to one of the ergodic sets and the limiting distribution depends on the initial conditions .in this section we choose at time as a function of and such that \ ] ] is minimized where is a suitable constant .the minimizing policy is called -discount optimal . when , we minimize .\ ] ] this optimizing policy is called average cost optimal . by little s law ( )an average cost optimal policy also minimizes mean delay . if for a given , the optimal policy does not depend on the past values , and is time invariant , it is called a stationary markov policy. if and are markov chains then these optimization problems are markov decision problems ( mdp ) . for simplicity , in the following we consider these problems when and are _iid_. 
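The gap between the two stability conditions, E[X] < g(E[Y]) for the TO policy and E[X] < E[g(Y)] for the greedy (and the unbuffered) policy with concave g, is easy to see numerically. The sketch below estimates both thresholds by Monte Carlo for an exponential energy-arrival distribution and g(t) = 0.5 log(1+t), which are assumptions made only for illustration.

```python
import math, random

def stability_thresholds(e_mean=1.0, samples=200000, seed=4):
    """Compare g(E[Y]) with E[g(Y)] for concave g and exponential Y.
    By Jensen's inequality E[g(Y)] <= g(E[Y]), so for strictly concave g
    the greedy policy supports a strictly smaller arrival rate than TO."""
    rng = random.Random(seed)
    g = lambda t: 0.5 * math.log(1.0 + t)
    ys = [rng.expovariate(1.0 / e_mean) for _ in range(samples)]
    print("g(E[Y]) =", round(g(sum(ys) / samples), 4), " (TO stability threshold)")
    print("E[g(Y)] =", round(sum(g(y) for y in ys) / samples, 4), " (greedy threshold)")

if __name__ == "__main__":
    stability_thresholds()
```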
we obtain the existence of optimal -discount and average cost stationary markov policies .* theorem 3 * if is continuous and the energy buffer is finite , i.e. , then there exists an optimal -discounted markov stationary policy .if in addition < g(e[y]) ] , then there exists an average cost optimal stationary markov policy .the optimal cost does not depend on the initial state . also , then the optimal -discount policies tend to an optimal average cost policy as .furthermore , if is the optimal -discount cost for the initial state then in section [ stability ] we identified a throughput optimal policy when is nondecreasing , concave .theorem 3 guarantees the existence of an optimal mean delay policy .it is of interest to identify one such policy also . in generalone can compute an optimal policy numerically via value iteration or policy iteration but that can be computationally intensive ( especially for large data and energy buffer sizes ) . also it does not provide any insight and requires traffic and energy profile statistics . in section [ stability ] we also provided a greedy policy ( [ eqn5 ] ) which is very intuitive , and is throughput optimal for linear .however for concave ( including the cost function it is _ not _ throughput optimal and provides low mean delays only for low load .next we show that it provides minimum mean delay for linear .* theorem 4 * the greedy policy ( [ eqn5 ] ) is -discount optimal for when for some .it is also average cost optimal . the fact that greedy is -discount optimal as well as average cost optimal implies that it is good not only for long term average delay but also for transient mean delays .in this section we consider two generalizations .first we will extend the results to the case of fading channels and then to the case where the sensing and the processing energy at a sensor node are non - negligible with respect to the transmission energy . in case of fading channels , we assume flat fading during a slot . in slot the channel gain is .the sequence is assumed stationary , ergodic , independent of the traffic sequence and the energy generation sequence .then if energy is spent in transmission in slot , the process evolves as if the channel state information ( csi ) is not known to the sensor node , then will depend only on .one can then consider the policies used above .for example we could use -\epsilon) ] .we will call this policy unfaded to .if we use greedy ( [ eqn5 ] ) , then the data queue is stable if < e[g(hy)] ] and otherwise . thus if can take an arbitrarily large value with positive probability , then = \infty ] , is throughput optimal . this is because it maximizes ] . then if < e[y]-\delta ] in the above paragraph .in this section , we compare the different policies we have studied via simulations .the function is taken as linear or as .the sequences and are _iid_. ( we have also done limited simulations when and are autoregressive and found that conclusions drawn in this section continue to hold ) .we consider the cases when and have truncated poisson , exponential , erlang or hyperexponential distributions .the policies considered are : greedy , to , , mto ( with ) and the mean delay optimal . at the end , we will also consider channels with fading . for fading channels we compare unfaded to and mto against fading to and fading mto . for the linear , we already know that the greedy policy is throughput optimal as well as mean delay optimal . 
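As noted above, an optimal policy can be computed numerically by value iteration once the data queue, the energy buffer and the arrival distributions are discretized. The sketch below does this for a linear rate function g(t) = t with small finite buffers, and then compares the computed actions against the greedy rule T_k = min(q_k, E_k) that Theorem 4 identifies as optimal in the linear case. The arrival distributions, buffer sizes and discount factor are assumptions made for illustration, and buffer truncation can in principle introduce ties at the boundary.

```python
import itertools

def value_iteration(Qmax=15, Emax=15, beta=0.95, n_iter=300):
    """Discounted MDP for a single node with linear rate g(t) = t.
    State (q, e); action t in {0, ..., e}; per-slot cost = q (queue length)."""
    px = {0: 0.6, 1: 0.3, 2: 0.1}            # packet arrivals (assumed)
    py = {0: 0.3, 1: 0.4, 2: 0.3}            # energy arrivals (assumed)
    V = [[0.0] * (Emax + 1) for _ in range(Qmax + 1)]
    policy = [[0] * (Emax + 1) for _ in range(Qmax + 1)]
    for _ in range(n_iter):
        Vn = [[0.0] * (Emax + 1) for _ in range(Qmax + 1)]
        for q, e in itertools.product(range(Qmax + 1), range(Emax + 1)):
            best, best_t = float("inf"), 0
            for t in range(e + 1):                       # energy spent
                exp_val = 0.0
                for (x, p1), (y, p2) in itertools.product(px.items(), py.items()):
                    q2 = min(Qmax, max(q - t, 0) + x)    # g(t) = t packets served
                    e2 = min(Emax, e - t + y)
                    exp_val += p1 * p2 * V[q2][e2]
                val = q + beta * exp_val
                if val < best:
                    best, best_t = val, t
            Vn[q][e], policy[q][e] = best, best_t
        V = Vn
    return V, policy

if __name__ == "__main__":
    V, pol = value_iteration()
    # for linear g the computed action should match the greedy rule min(q, e)
    mismatches = sum(pol[q][e] != min(q, e)
                     for q in range(len(pol)) for e in range(len(pol[0])))
    print("states where computed policy differs from greedy:", mismatches)
```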
;finite , quantized data and energy buffers ; : poisson truncated at 5 ; =1,e[g(y)]=0.92,g(e[y])=1 ] ] ; : exponential ; =10,e[g(y)]=2.01,g(e[y])=2.4 ] ] ; : erlang(5 ) ; =1,e[g(hy)]=0.62,e[g(he[y])]=0.64 ] ] the mean queue lengths for the different cases are plotted in figs .[ plot1]-[plot8 ] . in fig .[ plot1 ] , we compare greedy , to and mean - delay optimal ( op ) policies for nonlinear . the op was computed via policy iteration . for numerical computations ,all quantities need to be finite .so we took data and energy buffer sizes to be and used quantized versions of and .the distribution of and is poisson truncated at .these changes were made only for this example .now ) = 1 ] .we see that the mean queue length of the three policies are negligible till =0.8 ] till close to , mean queue length of to is approximately double of op ) . at low loads , greedy has less mean queue length than to .[ plot2 ] considers the case when and are exponential and is linear .now = 1 ] now all the policies considered are throughput optimal but their delay performances _differ_. we observe that the policy ( henceforth called unbuffered ) has the worst performance .next is the to .[ plot4 ] provides the above results for nonlinear , when and are exponential .now , as before , is the worst .the greedy performs better than the other policies for low values of ] while the throughput optimal policies become unstable at ) = 2.40 ] , the modified to performs the best and is close to greedy at low ] and /4.9 ] , =10 ] .the stability region of fading to is <e[g(\bar{h}y ) ] = 22.0 ] .however , mean queue length of fading to is larger from the beginning till almost 10 .this is because in fading to , we transmit only when which has a small probability ( ) .[ plot8 ] considers nonlinear with erlang distributed .also , =1 , e[g(hy)]=0.62 , e[g(he[y ] ) ] = 0.64 ] .mto and mwf provide improvements in mean queue lengths over to and wf .in a sensor network , all the nodes need to transmit their data to a fusion node .thus , for this a natural network to consider is a tree ( ) . in the present scenario of nodes with energy harvesting sources , selection of a treecan depend on the energy profiles of different nodes .this will be subject of another study .here we assume that an appropriate tree has been formed and will concentrate on the link layer .an important building block for this network is a multiple access channel . in sensor networks contention based ( e.g. , csma ) and contention - free ( tdma / cdma / fdma ) mac protocolsare considered suitable ( , ) .in fact for estimation of a random field , contention - free protocols are more appropriate .we consider the case where nodes with data queues are sharing a wireless channel .each queue generates its traffic , stores in a queue and transmits as in section [ model ] . also , each node has its own energy harvesting mechanism .the traffic generated at different queues and their energy mechanisms are assumed independent of each other .let and be the sequences corresponding to node . for simplicity we will assume and to be _ iid _ although these assumptions can be weakened as for a single queue .as mentioned at the end of section [ general ] , the energy consumption can be taken care of if we simply replace ] in our algorithms . 
(The figure captions interleaved in the preceding paragraph list, for each plot, the arrival and energy distributions together with the corresponding values of E[Y], E[g(Y)] and g(E[Y]); only these parameter values are needed to interpret the stability boundaries discussed there.)
in the followingwe do that and write it as ] whenever it transmits .for better delay performance , the slots allocated to different queues should be uniformly spaced .we can improve on the mean delay by using ( [ onestar ] ) .it is possible that more than one set of satisfy the stability condition ( [ meqn6 ] ). then one should select the values which minimize a cost function , ( say ) weighted sum of mean delays .now we discuss the mac with fading .let be the channel gain process for .it is assumed stationary , ergodic and independent of the fading process for .we discuss opportunistic scheduling for the contention free mac .we will study the csma based algorithms in the next section .if we assume that each of has infinite data backlog , then the policy that maximizes the sum of throughputs for and for symmetric statistics ( i.e. , each has same statistics and all ] . here is the fraction of time slots assigned to .however now we do not know and this may be estimated ( see the end of this section ) .if is replaced with the true value , then stability of the queues in the mac follows from if the fading states take values in a finite set and the system satisfies the following condition .let there exist a function which picks one of the queues as a function of where - \epsilon}{\alpha_i } \right) ] and is the stationary distribution of . thenif < \sum r_i 1 \ { f(r_1 , ... , r_n)=i \ } \pi ( r_1 , ... , r_n) ] for all .but it can be made throughput optimal ( as ( [ meqn11 ] ) ) while still retaining ( partially ) its mean delay performance as follows .choose an appropriately large positive constant .if none of is greater than , use ( [ meqn12 ] ) ; otherwise , on the set , use ( [ meqn11 ] ) .we call this modified greedy policy. the mean delay of the above policies can be further improved , if instead of - \epsilon ) / \alpha(i) ] , we use waterfilling for in ( [ allert3 ] ) .of course we reduce transmit power as in ( [ onestar ] ) if there is not enough data to transmit .now not only the mean delays reduce but the stability region also enlarges . the policies ( [ meqn11 ] ) , ( [ meqn12 ] ) and modified greedy provide good performance , require minimal information ( only ] where is the maximum allowed back - off time in slots .if is the channel gain in a slot then in the back - off time is taken to be . in our setup , to use opportunistic scheduling in csma , we use the above mentioned monotonic function on each of the sensor nodes contending for the channel .the uses the back off time of -\epsilon)}{\alpha(i ) } \right ) \right).\ ] ] when a node gets the channel , it will transmit a complete packet and use energy per slot as -\epsilon ] / \alpha(i^*_k).\ ] ] now we are making the usual assumption that the channel gains stay constant during the transmission of a packet .we can use ( [ onestar ] ) to improve performance . using the ideas in the last section we can develop better algorithms than ( [ meqn14])-([meqn15 ] ) . 
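The benefit of channel-aware scheduling over a fixed TDMA schedule can be illustrated with a small simulation of several symmetric energy-harvesting nodes sharing one slot per time instant. The sketch below is a heavily simplified version of the model in this section: Bernoulli arrivals, exponential fading and energy arrivals, a logarithmic rate function, and a scheduler that either rotates through the nodes or picks the node with the best instantaneous channel gain. All parameter values and the per-transmission energy rule are illustrative assumptions, not the exact policies of the text.

```python
import math, random

def simulate_mac(scheduler, n_nodes=4, n_slots=100000, lam=0.15,
                 e_mean=1.0, eps=0.05, seed=5):
    """N energy-harvesting nodes sharing a channel; one node transmits per
    slot, spending roughly n_nodes*(E[Y]-eps) energy (it transmits only a
    1/n_nodes fraction of the time) and serving g(h*T) bits, g(x)=log(1+x)."""
    rng = random.Random(seed)
    g = lambda x: math.log(1.0 + x)
    q = [0.0] * n_nodes
    e = [0.0] * n_nodes
    total_q = 0.0
    for k in range(n_slots):
        h = [rng.expovariate(1.0) for _ in range(n_nodes)]   # fading gains
        i = scheduler(k, q, e, h)
        t = min(e[i], n_nodes * (e_mean - eps))
        q[i] = max(q[i] - g(h[i] * t), 0.0)
        e[i] -= t
        for j in range(n_nodes):
            q[j] += 1 if rng.random() < lam else 0           # Bernoulli arrivals
            e[j] += rng.expovariate(1.0 / e_mean)
        total_q += sum(q)
    return total_q / n_slots

if __name__ == "__main__":
    tdma = lambda k, q, e, h: k % len(q)                      # fixed round robin
    opportunistic = lambda k, q, e, h: max(range(len(q)), key=lambda i: h[i])
    print("mean total queue, TDMA         :", round(simulate_mac(tdma), 2))
    print("mean total queue, opportunistic:", round(simulate_mac(opportunistic), 2))
```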
indeed , with ( [ meqn14 ] ) , instead of ( [ meqn15 ] ), we can use waterfilling ( for in ( [ allert3 ] ) ) .we can also improve over ( [ meqn14 ] ) by using , for back - off time of node , - \epsilon}{\alpha(i ) } \right ) \right)\ ] ] which takes care of the traffic requirements of different nodes .we can also use the ( modified ) greedy in ( [ meqn12 ] ) .the in the above algorithms will be computed via lms in ( [ meqn13 ] ) .we will compare the performance of these algorithms via simulations in section [ msimulation ] .an advantage of above algorithms over the algorithms in section [ opportunistic ] are that these are completely decentralized : each node uses only its own queue length , channel state and ] .water - filling improves the stability region of tdma as well as greedy . for csma , figs .[ delay ] and [ pl ] show mean delays and packet loss probabilities under symmetric conditions with 10 queues and with normal exponential backoff , zhao - tong , our policy ( [ meqn16 ] ) and our policy with water - filling ( with and at assumes values 0.1,0.5,1.0,2.2 for time fractions 0.1,0.3,0.4,0.2 ) .we simulated the 10 queues in continuous time .also , =1 $ ] and the data packets of unit size arrive at each queue as poisson streams .we see that opportunistic policies improve mean delays substantially .we have considered sensor nodes with energy harvesting sources , deployed for random field estimation .throughput optimal and mean delay optimal energy management policies for single nodes are identified which can make them work in energy neutral operation .next these results are extended to fading channels and when energy at the sensor node is also consumed in sensing and data processing .similarly we can include leakage / wastage of energy when it is stored in the energy buffer and when it is extracted .finally these policies are used to develop efficient mac protocols for such nodes . in particular versions of tdma , opportunistic macs for fading channels and csmaare developed .their performance is compared via simulations .it is shown that opportunistic policies can substantially improve the performance .i. f. akyildiz , w. su , y. sankara subramaniam and e. cayirei , a survey on sensor networks " , _ ieee communications magazine _ , vol 40 , 2002 , pp .102 - 114 .s. asmussen , applied probability and queues " , _ john wiley and sons_,n.y . ,s. j. baek , g. veciana and x. su , minimizing energy consumption in large - scale sensor networks through distributed data compression and hierarchical aggregation " , _ ieee jsac _ ,vol 22 , 2004 , pp .1130 - 1140 .d. niyato , e. hossain , m. m. rashid and v. k. bhargava , wireless sensor networks with energy harvesting technologies : a game - theoretic approach to optimal energy management " , _ ieee wireless communications _ , aug .2007 , pp .90 - 96 .s. ratnaraj , s. jagannathan and v. rao , optimal energy - delay subnetwork routing protocol for wireless sensor networks " , _ proc .of ieee conf . on networking , sensing and control _ , april 2006 , pp .787 - 792 .q. zhao and l. tong , opportunistic carrier sensing for energy - efficient information retrieval in sensor networks " , _ eurasip journal on wireless communications and networking _ , vol . 2 , 2005 , pp .231 - 241 .
|
We study sensor networks with energy harvesting nodes. The energy generated at a node can be stored in a buffer. A sensor node periodically senses a random field and generates a packet. These packets are stored in a queue and transmitted using the energy available at the node at that time. For such networks we develop efficient energy management policies. First, for a single node, we obtain policies that are throughput optimal, i.e., the data queue stays stable for the largest possible data rate. Next we obtain energy management policies which minimize the mean delay in the queue. We also compare the performance of several easily implementable sub-optimal policies. A greedy policy is identified which, in the low SNR regime, is throughput optimal and also minimizes mean delay. Finally, using the results for a single node, we develop efficient MAC policies. *Keywords:* optimal energy management policies, energy harvesting, sensor networks, MAC protocols.
|
monte carlo simulations of polymer models have played a significant role in the development of monte carlo methods for more than fifty years .we present here results of simulations performed with a powerful new algorithm , flatperm , which combines a stochastic growth algorithm , perm , with umbrella sampling techniques .this leads to a flat histogram in a chosen parameterization of configuration space .the stochastic growth algorithm used is the pruned and enriched rosenbluth method ( perm ) , which is an enhancement of the rosenbluth and rosenbluth algorithm , an algorithm that dates back to the early days of monte carlo simulations .while perm already is a powerful algorithm for simulating polymers , the addition of flat - histogram techniques provides a significant enhancement , as has already been exploited in , where it has been combined with multicanonical sampling .before we describe the algorithm in detail and present results of the simulations , we give a brief motivating introduction to the lattice models considered here .if one wants to understand the critical behavior of long linear polymers in solution , one is naturally led to a course - grained picture of polymers as beads of monomers on a chain .there are two main physical ingredients leading to this picture .first , one needs an `` _ _ excluded volume _ _ '' effect , which takes into account the fact that different monomers can not simultaneously occupy the same region in space .second , the quality of the solvent can be modeled by an effective monomer - monomer interaction .monomers in a good solvent are surrounded by solvent molecules and hence experience an effective monomer - monomer repulsion .similarly , a bad solvent leads to an effective monomer - monomer attraction .consequently , polymers in a good solvent form swollen `` _ _ coils _ _ '' , whereas polymers in a bad solvent form collapsed `` _ _ globules _ _ '' and also clump together with each other ( see fig .[ figure1 ] ) . in order to study the transition between these two states , it is advantageous to go to the limit of an infinitely dilute solution , in which one considers precisely one polymer in an infinitely extended solvent . ,width=264 ] as we are interested in critical behavior , it is also possible to further simplify the model by discretizing it . due to universality, the critical behavior is expected to be unchanged by doing so .we therefore consider random walks on a regular lattice , _ eg _ the simple cubic lattice for a three - dimensional model .one can think of each lattice site corresponding to a monomer and the steps as monomer - monomer bonds .we model excluded volume effects by considering _ self - avoiding _ random walks which are not allowed to visit a lattice site more than once .the quality of the solvent is modeled by an attractive short - range interaction between non - consecutive monomers which occupy nearest - neighbor sites on the lattice . at this pointwe may add more structure to our polymer model by considering monomer - specific interactions .specific properties of monomers and on the chain lead to an interaction depending on and . 
in this paper, we will consider three examples in detail .first , we consider as pedagogical example , the problem of simulating polymers in a two - dimensional strip .the interaction energy is simply , however , the introduction of boundaries makes simulations difficult .our second example is the hp model which is a toy model of proteins .it consists of a self - avoiding walk with two types of monomers along the sites visited by the walk hydrophobic ( type h ) and polar ( type p ) .one considers monomer - specific interactions , mimicking the interaction with a polar solvent such as water .the interaction strengths are chosen so that hh - contacts are favored , _eg _ and .the central question is to determine the density of states ( and to find the ground state with lowest energy ) for a given sequence of monomers .an example of a conjectured ground state is given in fig .[ figure2 ] for a particular sequence of 85 monomers on the square lattice ( the sequence is taken from ) ., width=264 ] our third example is the interacting self - avoiding walk ( isaw ) model of ( homo)-polymer collapse ; it is obtained by setting independent of the individual monomers . here, one is interested in the critical behavior in the thermodynamic limit , _ ie _ the limit of large chain lengths .an example of an -step interacting self - avoiding walk with interactions is shown in fig .[ figure3 ] ., width=188 ] the partition function of -step interacting self - avoiding walks can be written as where is the energy of an -step walk , .note that the second sum is over the number of interactions , and is the number of configurations of -step self - avoiding walks with precisely interactions .while the motivation for simulations of the various models is different , the central problems turn out to be similar . for interacting self - avoiding walks ,the collapse transition is in principle understood .one has a tri - critical phase transition with upper critical dimension , so that one can derive the critical behavior from mean - field theory for , whereas for one obtains results from conformal invariance .however , even though this transition is in principle understood , there are surprising observations above the upper critical dimension .most importantly , there is no good understanding of the collapsed regime , which is also notoriously difficult to simulate .similarly , in the hp model one is interested in low - temperature problems , _ ie _ deep inside the collapsed phase .in particular , one wishes to understand the design problem , which deals with the mapping of sequences along the polymer chain to specific ground state structures .again , the most important open question is in the collapsed regime .it is therefore imperative , to find algorithms which work well at low temperatures . 
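For short chains the density of states entering the ISAW partition function can be obtained exactly by backtracking enumeration, which is a useful check on any stochastic estimate discussed below. The following sketch enumerates square-lattice self-avoiding walks and counts nearest-neighbour contacts between non-consecutive monomers; the chain length is kept small since the number of walks grows exponentially.

```python
from collections import defaultdict

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def enumerate_isaw(n_max=8):
    """Exact density of states C[n][m]: number of n-step self-avoiding walks
    on the square lattice with m nearest-neighbour contacts."""
    C = defaultdict(lambda: defaultdict(int))

    def grow(walk, occupied, contacts):
        n = len(walk) - 1
        C[n][contacts] += 1
        if n == n_max:
            return
        x, y = walk[-1]
        for dx, dy in STEPS:
            nxt = (x + dx, y + dy)
            if nxt in occupied:
                continue
            # new contacts: occupied neighbours of nxt, minus the walk bond
            new = sum((nxt[0] + ex, nxt[1] + ey) in occupied
                      for ex, ey in STEPS) - 1
            occupied.add(nxt)
            walk.append(nxt)
            grow(walk, occupied, contacts + new)
            walk.pop()
            occupied.remove(nxt)

    grow([(0, 0)], {(0, 0)}, 0)
    return C

if __name__ == "__main__":
    C = enumerate_isaw(8)
    for m in sorted(C[8]):
        print(f"n=8, m={m}: {C[8][m]}")
    print("total 8-step SAWs:", sum(C[8].values()))   # known value: c_8 = 5916
```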
in the following section, we present just such an algorithm .this section describes our algorithm , as proposed in .the basis of the algorithm is the rosenbluth and rosenbluth algorithm , a stochastic growth algorithm in which each configuration sampled is grown from scratch .the growth is kinetic , which is to say that each growth step is selected at random from all possible growth steps .thus , if there are possible ways to add a step then one selects one of them with probability .for example , for a self - avoiding walk on the square lattice there may be between one and three possible ways of continuing , but it is also possible that there are no continuing steps , in which case we say that the walk is _ trapped _ ( see fig .[ figure4 ] ) . ]as the number of possible continuations generally changes during the growth process , different configurations are generated with different probabilities and so one needs to reweight configurations to counter this . if one views this algorithm as `` __ approximate counting _ _ ''then the required weights of the configurations serve as estimates of the total number of configurations . to understand this point of view ,imagine that we were to perform a _complete _ enumeration of the configuration space .doing this requires that at each growth step we would have to choose _ all _ the possible continuations and count them each with equal weight . if we now select _ fewer _ configurations , then we have to change the weight of these included configurations in order to correct for those that we have missed .thus , if we choose one growth step out of possible , then we must replace configurations of equal weight by one `` representative '' configuration with -fold weight .in this way , the weight of each grown configuration is a direct estimate of the total number of configurations .let the _ atmosphere _ , , be the number of distinct ways in which a configuration of size can be extended by one step .then , the weight associated with a configuration of size is the product of all the atmospheres encountered during its growth , _ ie _ after having started growth chains , an estimator for the total number of configurations is given by the mean over all generated samples , , of size with respective weights , _ ie _ here , the mean is taken with respect to the total number of growth chains , and _ not _ the number of configurations which actually reach size .configurations which get trapped before they reach size appear in this sum with weight zero .the rosenbluth and rosenbluth algorithm suffers from two problems .first , the weights can vary widely in magnitude , so that the mean may become dominated by very few samples with very large weight .second , regularly occurring trapping events , _ ie _ generation of configurations with zero atmosphere can lead to exponential `` _ _ attrition _ _ '' , _ ie _ exponentially strong suppression of configurations of large sizes . to overcome these problems , enrichment and pruning steps have been added to this algorithm , leading to the pruned and enriched rosenbluth method ( perm ) .the basic idea is that one wishes to suppress large fluctuations in the weights , as these should on average be equal to .if the weight of a configuration is too large one `` _ _ enriches _ _ '' by making copies of the configuration and reducing the weights by an appropriate factor . 
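A minimal Rosenbluth sampler for square-lattice self-avoiding walks illustrates the "approximate counting" point of view developed below: at each step the atmosphere is the number of unoccupied neighbours, one continuation is chosen uniformly, and the running product of atmospheres is the weight; averaging the weights over all started chains, with trapped walks contributing zero, estimates the number of configurations c_n. The lattice, chain length and sample size are assumptions made for illustration.

```python
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def rosenbluth_estimate(n_max=20, n_chains=200000, seed=6):
    """Estimate c_n for square-lattice SAWs by the Rosenbluth method:
    c_n ~ mean over started chains of W_n = a_0 * a_1 * ... * a_{n-1}."""
    rng = random.Random(seed)
    sum_w = [0.0] * (n_max + 1)            # accumulated weights at each length
    for _ in range(n_chains):
        pos = (0, 0)
        occupied = {pos}
        w = 1.0
        sum_w[0] += w
        for n in range(1, n_max + 1):
            nbrs = [(pos[0] + dx, pos[1] + dy) for dx, dy in STEPS]
            free = [p for p in nbrs if p not in occupied]
            if not free:                   # trapped: contributes zero onwards
                break
            w *= len(free)                 # atmosphere a_{n-1}
            pos = rng.choice(free)
            occupied.add(pos)
            sum_w[n] += w
    return [s / n_chains for s in sum_w]

if __name__ == "__main__":
    est = rosenbluth_estimate()
    # exact values for comparison: c_1 = 4, c_2 = 12, c_3 = 36, c_4 = 100
    for n in (1, 2, 3, 4, 10, 20):
        print(f"n={n:2d}  estimated c_n = {est[n]:.3e}")
```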
on the other hand ,if the weight is too small , one throws away or `` _ _ prunes _ _ '' the configuration with a certain probability and otherwise continues growing the configuration with a weight increased by an appropriate factor . note that neither nor the expression ( [ est ] ) for the estimate , , are changed by either enriching or pruning steps .a simple but significant improvement of perm was added in , where it was observed that it would be advantageous to force each of the copies of an enriched configuration to grow in _ distinct _ ways .this increases the diversity of the sample population and it is this version of perm that we consider below .we still need to specify enrichment and pruning criteria as well as the actual enrichment and pruning processes . while the idea of perm itself is straightforward , there is now a lot of freedom in the precise choice of the pruning and the enrichment steps .the `` art '' of making perm perform efficiently is based to a large extent on a suitable choice of these steps this can be far from trivial !distilling our own experience with perm , we present here what we believe to be an efficient , and , most importantly , _parameter free _ version .in contrast to other expositions of perm ( _ eg _ ) , we propose to prune and enrich constantly ; this enables greater exploration of the configuration space .define the _ threshold ratio _, , as the ratio of weight and estimated number of configurations , . ideally we want to be close to to keep weight fluctuations small .hence if the weight is too large and so we enrich . similarly if then the weight is too small and so we prune .moreover , the details of the pruning and enrichment steps are chosen such that the new weights are as close as possible to : * * enrichment step * : + make distinct copies of the configuration , each with weight ( as in nperm ) . * * pruning step * : + continue growing with probability and weight ( _ ie _ prune with probability ) . note that we perform pruning and enrichment _ after _ the configuration has been included in the calculation of .the new values are used during the _ next _ growth step .initially , the estimates can of course be grossly wrong , as the algorithm knows nothing about the system it is simulating .however , even if initially `` wrong '' estimates are used for pruning and enrichment the algorithm can be seen to converge to the true values in all applications we have considered . in a sense , it is self - tuning .we also note here , that the number of samples generated for each size is roughly constant .ideally , in order to effectively sample configuration space , the algorithm traces an unbiased random walk in configuration size .this means that perm is , in some sense , already a flat histogram algorithm .we shall return to this central observation below .it is now straight - forward to change perm to a thermal ensemble with inverse temperature and energy ( defined by some property of the configuration , such as the number of contacts ) by multiplying the weight with the appropriate boltzmann factor , which leads to an estimate of the partition function , , of the form the pruning and enrichment procedures are changed accordingly , replacing by and by .this gives threshold ratio .this is the setting in which perm is usually described . 
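The constant pruning and enrichment described above can be sketched in a few lines. The version below is deliberately simplified: enriched copies are grown as independent continuations rather than forced to be distinct (the nPERM refinement is omitted for brevity), the number of copies is capped, and the running estimate of c_n built from the weights already collected is used as the pruning/enrichment threshold, as in the text. It should be read as an illustration of the mechanism rather than a production implementation.

```python
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def perm_estimate(n_max=50, n_tours=20000, seed=7):
    """Simplified PERM estimate of c_n with threshold ratio r = W / c_est."""
    rng = random.Random(seed)
    sum_w = [0.0] * (n_max + 1)
    tours_started = 0

    def grow(walk, occupied, w):
        n = len(walk) - 1
        sum_w[n] += w                              # count, then prune/enrich
        if n == n_max:
            return
        c_est = sum_w[n] / tours_started           # running estimate of c_n
        r = w / c_est if c_est > 0 else 1.0
        if r > 1.0:                                # enrich
            copies = min(int(r), 4)
            w_new = w / copies
        elif rng.random() < r:                     # prune: keep with prob. r
            copies, w_new = 1, w / r
        else:
            return
        for _ in range(copies):
            x, y = walk[-1]
            free = [(x + dx, y + dy) for dx, dy in STEPS
                    if (x + dx, y + dy) not in occupied]
            if not free:
                continue
            nxt = rng.choice(free)
            occupied.add(nxt)
            walk.append(nxt)
            grow(walk, occupied, w_new * len(free))
            walk.pop()
            occupied.remove(nxt)

    for _ in range(n_tours):
        tours_started += 1
        grow([(0, 0)], {(0, 0)}, 1.0)
    return [s / n_tours for s in sum_w]

if __name__ == "__main__":
    est = perm_estimate()
    for n in (1, 5, 10, 25, 50):
        print(f"n={n:2d}  estimated c_n = {est[n]:.3e}")
```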
alternatively , however , it is also possible to consider microcanonical estimators for the total number of configurations of size with energy ( _ ie _ the `` density of states '' ) .an appropriate estimator is then given by the mean over all generated samples of size and energy with respective weights , _ ie _ again , the mean is taken with respect to the total number of growth chains , and _ not _ the number of configurations which actually reach a configuration of size and energy .the pruning and enrichment procedures are also changed accordingly , replacing by and using .as observed above , the pruning and enrichment criterion for perm leads to a flat histogram in length , _ ie _ a roughly constant number of samples being generated at each size for perm .in fact , one can motivate the given pruning and enrichment criteria by stipulating that one wishes to have a roughly constant number of samples , as this leads to the algorithm performing an unbiased random walk in the configuration size .similarly , in the microcanonical version described above , the algorithm performs an unbiased random walk in both size and energy of the configurations , and we obtain a roughly constant number of samples at each size and energy . it is because of the fact that the number of samples is roughly constant in each histogram entry , that this algorithm can be seen as a `` flat histogram '' algorithm , which we consequently call flat histogram perm , or flatperm . in hindsight inbecomes clear that perm itself can be seen as a flat histogram algorithm , at it creates a roughly flat histogram in size .recognizing this led us to the formulation of this algorithm in the first place .we have seen that by casting perm as an approximate counting method , the generalization from perm to flat histogram perm is straight - forward and ( nearly ) trivial .one can now add various refinements to this method if needed .for examples we refer the reader to .we close this section with a summary of the central steps to convert simple perm to flatperm by comparing the respective estimators and threshold ratios , : 1 . * athermal perm * : estimate the number of configurations * * 2 . * thermal perm * : estimate the partition function * * 3 .* flat histogram perm * : estimate the density of states * * one can similarly generalize the above to several microcanonical parameters , , to produce estimates of .once the simulations have been performed the average of an observable , , defined on the set of configurations can be obtained from weighted sums : these can then be used for subsequent evaluations . for instance , the expectation value of in the canonical ensemble at a given temperature can now be computed via _ ie _ only a single simulation is required to compute expectations at _ any _ temperature . 
for many problems we are interested in their behavior at low temperatures where averages of observables are dominated by configurations with high energy .such configurations are normally very difficult to obtain in simulations .the flatperm algorithm is able to effectively sample such configurations because it obtains a roughly constant number of samples at all sizes and energies ( due to the constant pruning and enrichment ) .this means that it is possible to study models even at very low temperatures .examples of this are given in the next section .a good way of showing how flatperm works is to simulate two - dimensional polymers in a strip .this kind of simulation has previously been performed with perm using markovian anticipation techniques which are quite complicated . with flatperm one simply chooses the vertical position of the endpoint of the walk in the strip as an `` energy '' for the algorithm to flatten against .we have found that this produces very good results .[ figure6 ] shows the results of our simulations of -step self - avoiding walks in a strip of width .the left - hand figure is the probability density of the endpoint coordinate shown as a function of walk length .the right - hand figure shows the actual number of samples generated for each length and end point position .one sees that the histogram of samples is indeed nearly completely flat .one can now extract several quantities from such simulations ( see ) , but we restrict ourselves here to the scaled end - point density shown in fig .[ figure7 ] . ):density of states versus energy ( above ) and specific heat versus temperature ( below ) .[ figure8 ] , title="fig:",width=313 ] + ): density of states versus energy ( above ) and specific heat versus temperature ( below ) .[ figure8 ] , title="fig:",width=302 ] ): density of states versus energy ( above ) and specific heat versus temperature ( below ) .[ figure9 ] , title="fig:",width=302 ] + ): density of states versus energy ( above ) and specific heat versus temperature ( below ) .[ figure9 ] , title="fig:",width=287 ] next we show results from simulations of the hp - model . here, we have obtained the whole density of states for small model proteins with fixed sequences .the first sequence considered ( taken from ) is small enough to enable comparison with exact enumeration data .it has moreover been designed to possess a unique ground state ( up to lattice symmetries ) .[ figure8 ] shows our results .we find ( near ) perfect agreement with exact enumeration even though the density of states varies over a range of eight orders of magnitude !the derived specific heat data clearly shows a collapse transition around and a sharper transition into the ground state around .the next sequence ( taken from ) is the one for which fig .[ figure2 ] shows a state with the lowest found energy .[ figure9 ] shows our results for the density of states and specific heat .we find the same lowest energy as ( though this is not proof of it being the ground state ) .the density of states varies now over a range of 30 orders of magnitude !the derived specific heat data clearly shows a much more complicated structure than the previous example . 
for several other sequences taken from the literaturewe have confirmed previous density of states calculations and obtained identical ground state energies .the sequences we considered had steps ( dimensions , ) and steps ( dimensions , ) from , and steps ( dimensions , ) from .we studied also a particularly difficult sequence with steps ( dimensions , ) from , but the lowest energy we obtained was . while we have not been able to obtain the ground state , neither has any other perm implementation ( see ) .+ + we now turn to the simulation of interacting self - avoiding walks ( isaw ) on the square and simple cubic lattices . in both cases have we simulated walks up to length . here, we encounter a small additional difficulty ; when perm is initially started it is effectively blind and may produce poor estimates of and this may in turn lead to overflow problems .it is therefore necessary to stabilize the algorithm by delaying the growth of large configurations . for this, it suffices to restrict the size of the walks by only allowing them to grow to size once tours ( the number of times the algorithm returns to an object of zero size ) has been reached .we found a value of sufficient .[ figure10 ] shows the equilibration of the algorithm due to the delay .snapshots are taken after , , and generated samples .while the sample histogram looks relatively rough ( even on a logarithmic scale ) the density of states is already rather well behaved . in the plots one clearly sees the effect of large correlated tours in which large number of enrichments produce many samples with the same initial walk segment .( left ) and number of generated samples ( right ) versus internal energy and length for isaw on the square lattice ( above ) and simple cubic lattice ( below ) [ figure11 ] , title="fig:",width=207 ] ( left ) and number of generated samples ( right ) versus internal energy and length for isaw on the square lattice ( above ) and simple cubic lattice ( below ) [ figure11 ] , title="fig:",width=207 ] + ( left ) and number of generated samples ( right ) versus internal energy and length for isaw on the square lattice ( above ) and simple cubic lattice ( below ) [ figure11 ] , title="fig:",width=207 ] ( left ) and number of generated samples ( right ) versus internal energy and length for isaw on the square lattice ( above ) and simple cubic lattice ( below ) [ figure11 ] , title="fig:",width=207 ] the final result of our simulations for interacting self - avoiding walks in two and three dimensions is shown in fig .[ figure11 ] .it clearly shows the strength of flatperm : with one single simulation can one obtain a density of states which ranges over more than 300 orders of magnitude !versus inverse temperature for isaw on the square lattice ( above ) and the simple cubic lattice ( below ) at lengths 64 , 128 , 256 , 512 , and 1024 .the curves for larger lengths are more highly peaked .the vertical lines denote the expected transition temperature at infinite length .[ figure12 ] , title="fig:",width=207 ] versus inverse temperature for isaw on the square lattice ( above ) and the simple cubic lattice ( below ) at lengths 64 , 128 , 256 , 512 , and 1024 .the curves for larger lengths are more highly peaked .the vertical lines denote the expected transition temperature at infinite length .[ figure12 ] , title="fig:",width=207 ] from these data one can now , for example , compute the specific heat curves .the results for both systems are shown in fig .[ figure12 ] .we see that the data is well behaved 
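Once the density of states is known, canonical quantities such as the specific heat follow from straightforward reweighting. Because the densities span many orders of magnitude, it is convenient to work with logarithms and shift by the largest Boltzmann exponent, as in the sketch below; the toy density-of-states numbers in the example are invented purely for illustration and do not correspond to any of the sequences discussed here.

```python
import math

def specific_heat(log_g, temps):
    """Canonical specific heat from a density of states given as
    {energy: log(number of states)}, using a log-sum-exp shift."""
    out = []
    for T in temps:
        beta = 1.0 / T
        logw = {E: lg - beta * E for E, lg in log_g.items()}
        m = max(logw.values())
        Z = sum(math.exp(lw - m) for lw in logw.values())
        e1 = sum(E * math.exp(logw[E] - m) for E in logw) / Z
        e2 = sum(E * E * math.exp(logw[E] - m) for E in logw) / Z
        out.append((T, (e2 - e1 * e1) * beta * beta))
    return out

if __name__ == "__main__":
    # toy density of states (assumed numbers, purely for illustration)
    log_g = {0: 0.0, -1: 2.0, -2: 3.5, -3: 4.2, -4: 4.0, -5: 2.0, -6: 0.0}
    for T, c in specific_heat(log_g, [0.2 * k for k in range(1, 11)]):
        print(f"T = {T:.1f}   C = {c:.3f}")
```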
well into the collapsed low - temperature regime .we have reviewed stochastic growth algorithms for polymers . describingthe rosenbluth and rosenbluth method as an approximate counting method has enabled us to present a straight - forward extension of simple perm to flat histogram perm . using this algorithmone can obtain the complete density of states ( even over several hundred orders of magnitude ) from one single simulation .we demonstrated the strength of the algorithm by simulating self - avoiding walks in a strip , the hp - model of proteins , and interacting self - avoiding walks in two and three dimensions as a model of polymer collapse .99 g. w. king , in _monte carlo method _ ,volume 12 of _ applied mathematics series _ , national bureau of standards , 1951 t. prellberg and j. krawczyk , phys ., in print p. grassberger , phys .rev e * 56 * 3682 ( 1997 ) g. m. torrie and j. p. valleau , j. comput .* 23 * 187 ( 1977 ) m. n. rosenbluth and a. w. rosenbluth , j. chem. phys . * 23 * 356 ( 1955 ) f. wang and d. p. landau , phys .* 86 * 2050 ( 2001 ) m. bachmann and w. janke , phys .* 91 * 208105 ( 2003 ) b. a. berg and t. neuhaus , phys .b * 267 * 249 ( 1991 ) h .- p .hsu and p. grassberger , eur .j. b * 36 * 209 ( 2003 ) k. a. dill , biochemistry * 24 * 1501 ( 1985 ) h .- p .hsu and v. mehra and w. nadler and p. grassberger , j. chem .phys . * 118 * 444 ( 2003 ) i. d. lawrie and s. sarbach , in _ phase transitions and critical phenomena _ , edited by c. domb and j. l. lebowitz , volume 9 , academic , london , 1984 j. l. cardy , in _ phase transitions and critical phenomena _ , edited by c. domb and j. l. lebowitz , volume 11 , academic press , new york , 1987 t. prellberg and a. l. owczarek , phys .e. * 62 * , 3780 ( 2000 ) h .- p . hsu and v. mehra and w. nadler and p. grassberger , physe * 68 * 21113 ( 2003 ) h. frauenkron and u. bastolla and e. gerstner and p. grassberger and w. nadler , phys . rev. lett . * 80 * 3149 ( 1998 ) t.c .beutler and k.a .dill , protein sci .* 5 * 2037 ( 1996 )
|
We present Monte Carlo simulations of lattice models of polymers. These simulations are intended to demonstrate the strengths of a powerful new flat histogram algorithm which is obtained by adding microcanonical reweighting techniques to the pruned and enriched Rosenbluth method (PERM).
|
one fundamental difference between classical and quantum mechanics is the unavoidable back - action of quantum measurement . on the one hand ,this back - action is generally thought to be detrimental for the implementation of effective quantum control . on the other hand, it also provides us one possibility to use the change caused by the measurement as a new route to manipulate the state of the system .a basic problem in quantum physics and engineering is how to drive a quantum system to a desired target state .there have been studies on the preparation of a given target state from a given initial state using sequential ( projective or non - projective ) measurements in the last few years .a quantum measurement is described by a collection of measurement operators where is an index set for measurement outcomes and the measurement operators satisfy suppose we perform the quantum measurement on density operator , the probability of obtaining result is , and when occurs , the post - measurement state of the quantum system becomes if we are unaware of the measurement result , the unconditional state of the quantum system after the measurement can be expressed as if are orthogonal projectors , i.e. , the are hermitian and , is a projective measurement .the idea of quantum state manipulation using sequential measurements is as follows . by consecutively performing the measurements , the unconditional state for quantum system with initial state be expressed as it has been shown , analytically or numerically , how to select the measurements so that can asymptotically tend to a desired target state . making use of feedback information for quantum measurement and detection actually has a long history , which can be viewed as the dual problem of state manipulation .the dolinar s receiver " proposes a feedback strategy for discriminating two possible quantum states with prior distribution with minimum probability of error .the problem is known as the quantum detection problem and helstrom s bound characterizes the minimum probability of error for discriminating any two non - orthonormal states .quantum detection is to identify uncertain quantum states via projective measurements ; while the considered quantum state projection is to manipulate a certain quantum state to a certain target , again via projective measurements .the dolinar s scheme follows a similar structure that measurement is selected based on previous measurement results on different segments of the pulse , and was recently realized experimentally .see for a survey for the extensive studies in feedback ( adaptive ) design in quantum tomography . in this paper, we propose a feedback design for quantum state manipulation via sequential measurements . for a given set of measurements and a given time horizon ,we show that finding the policy of measurement selections that maximizes the probability of successful state manipulation can be solved by dynamical programming. 
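The measurement update rules quoted above translate directly into code. The sketch below applies a general measurement {M_m} to a density matrix, returning the outcome probabilities, the conditional post-measurement states, and the unconditional state when the outcome is discarded; the use of NumPy and the specific qubit example (a projector tilted by an angle from |0>) are assumptions made for illustration.

```python
import numpy as np

def measure(rho, ops):
    """Apply a measurement {M_m} to the density matrix rho.
    Returns (outcome probabilities, conditional post-measurement states)."""
    probs, posts = [], []
    for M in ops:
        p = np.trace(M @ rho @ M.conj().T).real
        probs.append(p)
        posts.append(M @ rho @ M.conj().T / p if p > 1e-12 else None)
    return probs, posts

def unconditional(rho, ops):
    """State after the measurement when the outcome is not recorded."""
    return sum(M @ rho @ M.conj().T for M in ops)

if __name__ == "__main__":
    # projective measurement along an axis tilted by angle a from |0>
    a = np.pi / 6
    phi = np.array([np.cos(a), np.sin(a)])
    phi_perp = np.array([-np.sin(a), np.cos(a)])
    ops = [np.outer(v, v.conj()) for v in (phi, phi_perp)]
    rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])        # initial state |0><0|
    p, post = measure(rho0, ops)
    print("outcome probabilities:", np.round(p, 4))
    print("unconditional state:\n", np.round(unconditional(rho0, ops), 4))
```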
such derivation of the optimal policy falls to belavkin s quantum feedback theory .numerical examples are given which indicate that the proposed feedback policy significantly improves the success probability compared to classical policy by consecutive projections without taking feedback .in particular , the probability of reaching the target state via feedback policy reaches using merely measurements from initial state .other optimality criteria are also discussed such as the maximal expected fidelity and the minimal arrival time , and some connections and differences among the the different criteria are also discussed .the remainder of the paper is organised as follows . in the first part of section [ sec2 ], we revisit a simple example of reaching from using sequential projective measurements , and show how feedback policies work under which even a little bit of feedback can make a nontrivial improvement .the rest of section [ sec2 ] devotes to a rigorous treatment for the problem definition and for finding the optimal feedback policy from classical quantum feedback theory .numerical examples are given there .section [ sec3 ] investigates some other optimality criteria and finally section [ sec4 ] concludes the paper .consider now a qubit system , i.e. , a two - dimensional hilbert space .the initial state of the quantum system is , and the target state is .given projective measurements from the set where with and note that the choice of follows the optimal selection given in .the strategy in is simply to perform the measurements in turn from to .we call it a _ naive _ policy. the probability of successfully driving the state from to in steps under this naive strategy is denoted by .we can easily calculate that and .let . we next show that even only a bit of measurement feedback can improve the performance of the strategy significantly . _after the first measurement has been made , perform if the outcome is for the second step , and follow the naive policy for all other actions ._ following this scheme , it turns out that the probability of arriving at in three steps becomes around , in contrast with under the naive scheme .the improvement in the probability of success comes from the fact that a feedback decision is made based on the information of the outcome of so that in _ s1 _ a better selection of measurement is obtained between and .we now present the solution to the optimal policy for the considered quantum state manipulation in light of the classical work of quantum feedback control theory derived by belavkin ( also see and for a thorough treatment ) . consider a quantum system whose state is described by density operators over the qubit space .let be a given finite set of measurements serving as all feasible control actions .for each , we write where is a finite index set of measurement outputs and is the measurement operator corresponding to outcome .time is slotted with a horizon .the initial state of the quantum system is , and the target state is assumed to be , for the ease of presentation , . for ,we denote by the measurement performed at time and the post - measurement state after has been performed is denoted as .let be the outcome of . 
the measurement sequence is selected by a _ policy _ , where each takes value in the set such that can depend on all previous selected measurements and their outcomes for all .here for convenience we have denoted .we can now express the closed - loop evolution of as where .the distribution of is given by where .clearly defines a markov chain .we define must be a measurement in the set for to be a non - trivial function if all measurements in are projective . ] as the probability of successfully manipulating the quantum state to the target density matrix , where is the probability measure equipped with .we also define the cost - to - go function for . following standard theories for controlled markovian process , the following conclusion holds .the cost - to - go function satisfies the following recursion where , with boundary condition if , and otherwise . the maximum arrival probability is given by .the optimal policy is markovian , and is given by for .we now compare the performance of the policies with and without feedback .again we consider driving a two - level quantum system from state to .the available measurements are in the set as given in eq.([measurementset ] ) .first of all , we take .the naive policy in turn takes projections from to , denoted .we solve the optimal feedback policy using eq .( [ 1 ] ) .it is clear that is deterministic with , while is markovian with depending on .correspondingly , their arrival probability in steps are given by and , respectively .in figure [ success ] , we plot and for .as shown clearly in the figure , the probability of success is improved significantly . actually for , we already have . from the initial state using naive policy and optimal feedback policy , respectively ., width=377 ] moreover , as an illustration of the different actions between the naive and feedback strategies , we plot their policies for in tables i and ii , respectively .we now investigate how the size of the available measurement set influences the successful arrival probability in steps under optimal feedback . in this case , the optimal arrival probability is also a function of , and we therefore rewrite . in figure[ enlarge ] , we plot , for , respectively .the numerical results show that as increases , the quickly tends to a limiting curve , suggesting the existence of some fundamental upper bound on the arrival probability in steps using sequential projections from an arbitrarily large measurement set . from the initial state using different sizes of measurement set by feedback strategy ., width=377 ]in this section , we discuss two other useful optimality criteria , to maximize the expected fidelity with the target state , or to minimize the expected time it takes to arrive at the target state . given two density operators and , their _ fidelity _ is defined by fidelity measures the closeness of two quantum states . now that our target state is a pure state , we have alternatively , we can consider the following objective functional ,\ ] ] and the goal is to find a policy that maximizes .for the two objective functionals and , we denote their corresponding optimal policy as and , respectively , where the time horizon is also indicated .let be the policy that follows for and takes value for .let be the unconditional density operator at step for .the following equations hold : \nonumber\\ & = \tr \big ( \rho_{_{n-1}}^{\rm u } |1\rangle\langle 1| \big)\nonumber\\ & = \mathbb{p}_{\pi'}\big(\rho_{_n}=|1\rangle\langle 1| \big),\end{aligned}\ ] ] for any , where with . 
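a toy version of this comparison can be written down directly (python with numpy, for illustration only; the equally spaced projector directions used below, both for the naive sequence and for the feedback control set, are assumptions standing in for the optimal selections cited in the text). for real-amplitude projectors the conditional state stays pure and is tracked by a single angle: the naive policy ignores the outcomes, so its success probability follows from the outcome-blind distribution over pure states, while the feedback value is a cost-to-go recursion of the same form as the one solved via eq. ( [ 1 ] ), evaluated here by exhaustive search over the small control set.

import numpy as np

def collapse(phi, theta):
    # measuring the projector pair at direction theta on the pure state at
    # direction phi: land on theta with prob cos^2(theta - phi), otherwise
    # on the orthogonal direction theta + pi/2
    p = np.cos(theta - phi) ** 2
    return [(p, theta), (1.0 - p, theta + np.pi / 2)]

def naive_success(n):
    # measure directions k*pi/(2n), k = 1..n, in turn from |0>; success is
    # the probability that the final post-measurement state is |1>
    dist = {0.0: 1.0}
    for k in range(1, n + 1):
        new = {}
        for phi, w in dist.items():
            for p, nxt in collapse(phi, k * np.pi / (2 * n)):
                new[nxt] = new.get(nxt, 0.0) + w * p
        dist = new
    return sum(w for phi, w in dist.items()
               if np.isclose(np.cos(phi - np.pi / 2) ** 2, 1.0))

def feedback_success(n, controls):
    # optimal feedback value: maximise, for every reachable pure state and
    # remaining horizon, the expected probability of ending exactly in |1>
    def value(phi, steps_left):
        if steps_left == 0:
            return 1.0 if np.isclose(np.cos(phi - np.pi / 2) ** 2, 1.0) else 0.0
        return max(sum(p * value(nxt, steps_left - 1)
                       for p, nxt in collapse(phi, theta))
                   for theta in controls)
    return value(0.0, n)

controls = [k * np.pi / 12 for k in range(1, 7)]   # assumed control set
for n in (2, 3, 4):
    print(n, round(naive_success(n), 4), round(feedback_success(n, controls), 4))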
as a result, the following relation holds between the optimal policies under the two objectives and .[ prop2 ] it holds that .in fact , with .the intuition behind proposition [ prop2 ] is that one would expect to get as closely as possible to the target state at step , if one tends to successfully project onto the target state at step .we also know from proposition [ prop2 ] that we can solve the maximal expected fidelity problem in steps by the solutions of maximizing the arrival probability in steps .similarly , we can also find the optimal policy for the objective using dynamical programming .define the cost - to - go function for as \end{aligned}\ ] ] for .then satisfies the following recursive equation for , with terminal condition the optimal policy can be obtained by solving for .the maximal expected fidelity . in previous discussions the deadline plays an important role in the objective functionals as well as in their solutions .we now consider the case when the deadline is flexible , and we aim to minimize the average number of steps it takes to arrive at the target state .now the control policy is denoted as , where selects a measurement from the set . associated with ,we define note that defines a stopping time ( cf . , )associated with the random processes , and we assume that is _ proper _ in the sense that we continue to introduce \ ] ] as the objective functional , which is the expected time it takes for the quantum state to reach the target following policy .minimizing is a _stochastic shortest path problem _ .we introduce and .\ ] ] the markovian property of leads to that the optimal policy is _ stationary _ in the sense that for all .the following conclusion holds applying directly the results of .the cost - to - go function satisfies the following recursion for all , with boundary condition .the optimal policy is given by the optimal is given by .technically it can not be guaranteed that for any given measurement set , there always exists at least one policy under which admits a finite number .however , some straightforward calculations indicate that for the set of projective measurements given in eq .( [ measurementset ] ) , finite can always be achieved for a class of policies .again , consider projective measurements from the set in figure [ optimalstopping ] , we plot as a function of , for .numerical calculations show that the minimized average number of steps of driving to does not depend too much on the size of control set , it oscillates around for control sets of reasonable size . also for measurement set with , we show the optimal policy in table [ optimalstoppingtable ] . from the initial state employing control set of size ., width=377 ]we have proposed feedback designs for manipulating a quantum state to a target state by performing sequential measurements . 
making use of belavkin s quantum feedback control theory, we showed that finding the measurement selection policy that maximizes the probability of successful state manipulation is an optimal control problem which can be solved by dynamical programming for any given set of measurements and a given time horizon .numerical examples indicate that making use of feedback information significantly improves the success probability compared to classical scheme without taking feedback .it was shown that the probability of reaching the target state via feedback policy reaches using merely steps , while classical results suggested that naive strategy via consecutive measurements in turn reaches success probability one when the number of steps tends to infinity . maximizing the expected fidelity to the target state and minimizing the expected arrival timewere also considered , and some connections and differences among these objectives were also discussed . * acknowledgments * we gratefully acknowledge support by the australian research council centre of excellence for quantum computation and communication technology ( project number ce110001027 ) , and afosr grant fa2386 - 12 - 1 - 4075 ) .10 h. m. wiseman , d. w. berry , s. d. bartlett , b. l. higgins , and g. j. pryde , adaptive measurements in the optical quantum information laboratory , _ ieee journal of selected topics in quantum electronics _15 , no . 6 , pp .16611672 , 2009 .
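the minimal expected arrival time criterion of section [ sec3 ] can be sketched in the same toy setting (python; discretising the projector directions into multiples of pi/12 and the choice of control set are assumptions of the example, not the paper's). the loop iterates the stochastic-shortest-path recursion with unit cost per measurement and zero terminal cost at the target.

import numpy as np

# pure-state directions in units of pi/12 on the real great circle of the
# bloch sphere; direction 0 is |0>, direction 6 is the target |1>
states = range(12)
controls = range(1, 7)        # assumed set of projector directions
target = 6

def outcomes(s, u):
    # measuring the projector pair at direction u from the state at direction s
    p = np.cos((u - s) * np.pi / 12) ** 2
    return [(p, u), (1.0 - p, (u + 6) % 12)]

# value iteration for the expected number of measurements needed to reach |1>
j = {s: 0.0 for s in states}
for _ in range(200):
    j = {s: 0.0 if s == target else
            1.0 + min(sum(p * j[t] for p, t in outcomes(s, u)) for u in controls)
         for s in states}
print("minimal expected arrival time from |0>:", round(j[0], 3))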
|
in this paper , we propose feedback designs for manipulating a quantum state to a target state by performing sequential measurements . in light of belavkin s quantum feedback control theory , for a given set of ( projective or non - projective ) measurements and a given time horizon , we show that finding the measurement selection policy that maximizes the probability of successful state manipulation is an optimal control problem for a controlled markovian process . the optimal policy is markovian and can be computed by dynamical programming . numerical examples indicate that making use of feedback information significantly improves the success probability compared to a classical scheme that does not use feedback . we also consider other objective functionals including maximizing the expected fidelity to the target state as well as minimizing the expected arrival time . the connections and differences among these objectives are also discussed . _ keywords _ : feedback policy , quantum state - manipulation , quantum measurement
|
currently , applying constraint technology to a large , complex problem requires significant manual tuning by an expert .such experts are rare .the central aim of this project is to improve the scalability of constraint technology , while simultaneously removing its reliance on manual tuning by an expert .we propose a novel , elegant means to achieve this a _ constraint solver synthesiser _ , which generates a constraint solver specialised to a given problem .constraints research has mostly focused on the incremental improvement of general - purpose solvers so far .the closest point of comparison is currently the g12 project , which aims to combine existing general constraint solvers and solvers from related fields into a hybrid .there are previous efforts at generating specialised constraint solvers in the literature , e.g. ; we aim to use state - of - the - art constraint solver technology employing a broad range of different techniques . synthesising a constraint solver has two key benefits .first , it will enable a fine - grained optimisation not possible for a general solver , allowing the solving of much larger , more difficult problems .second , it will open up many new research possibilities .there are many techniques in the literature that , although effective in a limited number of cases , are not suitable for general use .hence , they are omitted from current general solvers and remain relatively undeveloped . among theseare for example conflict recording , backjumping , singleton arc consistency , and neighbourhood inverse consistency .the synthesiser will select such techniques as they are appropriate for an input problem .additionally , it can also vary basic design decisions , which can have a significant impact on performance .the system we are proposing in this paper , dominion , implements a design that is capable of achieving said goals effectively and efficiently .the design decisions we have made are based on our experience with minion and other constraint programming systems .the remainder of this paper is structured as follows . in the next section ,we describe the design of dominion and which challenges it addresses in particular .we then present the current partial implementation of the proposed system and give experimental results obtained with it .we conclude by proposing directions for future work .the design of dominion distinguishes two main parts .the _ analyser _ analyses the problem model and produces a solver specification that describes what components the specialised solver needs to have and which algorithms and data structures to use .the _ generator _ takes the solver specification and generates a solver that conforms to it .the flow of information is illustrated in figure [ design ] . ] both the analyser and the generator optimise the solver .while the analyser performs the high - level optimisations that depend on the structure of the problem model , the generator performs low - level optimisations which depend on the implementation of the solver .those two parts are independent and linked by the solver specification , which is completely agnostic of the format of the problem model and the implementation of the specialised solver .there can be different front ends for both the analyser and the generator to handle problems specified in a variety of formats and specialise solvers in a number of different ways , e.g. 
based on existing building blocks or synthesised from scratch .the analyser operates on the model of a constraint problem class or instance .it determines the constraints , variables , and associated domains required to solve the problem and reasons about the algorithms and data structures the specialised solver should use .it makes high - level design decisions , such as whether to use trailing or copying for backtracking memory .it also decides what propagation algorithms to use for specific constraints and what level of consistency to enforce .the output of the analyser is a solver specification that describes all the design decisions made .it does not necessarily fix all design decisions it may use default values if the analyser is unable to specialise a particular part of the solver for a particular problem model .in general terms , the requirements for the solver specification are that it describes a solver which is able to find solutions to the analysed problem model and describes optimisations which will make this solver perform better than a general solver .the notion of better performance includes run time as well as other resources such as memory .it is furthermore possible to optimise with respect to a particular resource ; for example a solver which uses less memory at the expense of run time for embedded systems with little memory can be specified .the solver specification may include a representation of the original problem model such that a specialised solver which encodes the problem can be produced the generated solver does not require any input when run or only values for the parameters of a problem class .it may furthermore modify the original model in a limited way ; for example split variables which were defined as one type into several new types .it does not , however , optimise it like for example tailor . the analyser may read a partial solver specification along with the model of the problem to be analysed to still allow fine - tuning by human experts while not requiring it .this also allows for running the analyser incrementally , refining the solver specification based on analysis and decisions made in earlier steps .the analyser creates a constraint optimisation model of the problem of specialising a constraint solver .the decision variables are the design decisions to be made and the values in their domains are the options which are available for their implementation .the constraints encode which parts are required to solve the problem and how they interact . 
for example , the constraints could require the presence of an integer variable type and an equals constraint which is able to handle integer variables .a solution to this constraint problem is a solver specification that describes a solver which is able to solve the problem described in the original model .the weight attached to each solution describes the performance of the specialised solver and could be based on static measures of performance as well as dynamic ones ; e.g.predefined numbers describing the performance of a specific algorithm and experimental results from probing a specific implementation .this metamodel enables the use of constraint programming techniques for generating the specialised solver and ensures that a solver specification can be created efficiently even for large metamodels .the result of running the analyser phase of the system is a solver specification which specifies a solver tailored to the analysed problem model .the generator reads the solver specification produced by the analyser and constructs a specialised constraint solver accordingly .it may modify an existing solver , or synthesise one from scratch .the generated solver has to conform to the solver specification , but beyond that , no restrictions are imposed . in particular, the generator does not guarantee that the generated specialised solver will have better performance than a general solver , or indeed be able to solve constraint problems at all this is encoded in the solver specification .in addition to the high - level design decisions fixed in the solver specification , the generator can perform low - level optimisations which are specific to the implementation of the specialised solver .it could for example decide to represent domains with a data type of smaller range than the default one to save space .the scope of the generator is not limited to generating the source code which implements the specialised solver , but also includes the system to build it .the result of running the generator phase of the system is a specialised solver which conforms to the solver specification .we have started implementing the design proposed above in a system which operates on top of minion .the analyser reads minion input files and writes a solver specification which describes the constraints and the variable types which are required to solve the problem .it does not currently create a metamodel of the problem .the generator modifies minion to support only those constraints and variable types .it furthermore does some additional low - level optimisations by removing infrastructure code which is not required for the specialised solver .the current implementation of dominion sits between the existing tailor and minion projects it takes minion problem files , which may have been generated by tailor , as input , and generates a specialised minion solver .the generated solver is specialised for models of problem instances from the problem class the analysed instance belongs to .the models have to be the same with respect to the constraints and variable types used .experimental results for models from four different problem classes are shown in figure [ results ] .the graph only compares the cpu time minion and the specialised solver took to solve the problem ; it does not take into account the overhead of running dominion analysing the problem model , generating the solver , and compiling it , which was in the order of a few minutes for all of the benchmarks .axis shows the time standard minion took to solve 
the respective instance .the labels of the data points show the parameters of the problem instance , which are given in parentheses in the legend .the times were obtained using a development version of minion which corresponds to release 0.8.1 and dominion - generated specialised solvers based on the same version of minion .symbols below the solid line designate problem instances where the dominion - generated solver was faster than minion .the points above the line are not statistically significant ; they are random noise .the dashed line designates the median for all problem instances.[results ] ] the problem classes balanced incomplete block design , golomb ruler , -queens , and social golfers were chosen because they use a range of different constraints and variable types .hence the optimisations dominion can perform are different for each of these problem classes .this is reflected in the experimental results by different performance improvements for different classes .figure [ results ] illustrates two key points .the first point is that even a quite basic implementation of dominion which does only a few optimisations can yield significant performance improvements over standard minion .the second point is that the performance improvement does not only depend on the problem class , but also on the instance , even if no additional optimisations beyond the class level were performed . forboth the balanced incomplete block design and the social golfers problem classes the largest instances yield significantly higher improvements than smaller ones . at this stage of the implementation ,our aim is to show that a specialised solver can perform better than a general one .we believe that figure [ results ] conclusively shows that . as the problem models become larger and take longer to solve , the improvement in terms of absolute run time difference becomes larger as well .hence the more or less constant overhead of running dominion is amortised for larger and more difficult problem models , which are our main focus . generating a specialised solver for problem classes and instances is always going to entail a certain overhead , making the approach infeasible for small and quick - to - solve problems .we have described the design of dominion , a solver generator , and demonstrated its feasibility by providing a preliminary implementation .we have furthermore demonstrated the feasibility and effectiveness of the general approach of generating specialised constraint solvers for problem models by running experiments with minion and dominion - generated solvers and obtaining results which show significant performance improvements .these results do not take the overhead of running dominion into account , but we are confident that for large problem models there will be an overall performance improvement despite the overhead . 
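the first analysis pass of the current implementation described above can be illustrated with a small sketch (python, illustrative only; the simplified minion-style keywords and the dictionary output are assumptions and not dominion's actual formats). it scans an instance and records the variable types and constraint kinds a specialised solver must provide, together with a default value for one high-level design decision; a full analyser would in addition build the constraint metamodel over design decisions described earlier.

import re
from collections import Counter

def analyse(model_text):
    spec = {"variable_types": set(),
            "constraints": Counter(),
            "backtracking": "copying"}           # default high-level decision
    for line in model_text.splitlines():
        line = line.strip()
        if line.startswith(("BOOL", "BOUND", "DISCRETE", "SPARSEBOUND")):
            spec["variable_types"].add(line.split()[0])
        else:
            m = re.match(r"([a-z][a-z0-9_]*)\(", line)
            if m:
                spec["constraints"][m.group(1)] += 1
    return spec

model = """DISCRETE q[8] {1..8}
alldiff(q)
weightedsumgeq([1,1],[q[0],q[1]],3)
"""
print(analyse(model))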
based on our experiences with dominion , we propose that the next step should be the generation of specialised variable types for the model of a problem instance .dominion will extend minion and create variable types of the sort `` integer domain ranging from 10 to 22 '' .this not only allows us to choose different representations for variables based on the domain , but also to simplify and speed up services provided by the variable , such as checking the bounds of the domain or checking whether a particular value is in the domain .the implementation of specialised variable types requires generating solvers for models of problem instances because the analysed problem model is essentially rewritten .the instance the solver was specialised for will be encoded in it and no further input will be required to solve the problem .we expect this optimisation to provide an additional improvement in performance which is more consistent across different problem classes , i.e. we expect significant improvements for all problem models and not just some .we are also planning on continuing to specify the details of dominion and implementing it .the authors thank chris jefferson for extensive help with the internals of minion and the anonymous reviewers for their feedback .lars kotthoff is supported by a sicsa studentship .stuckey , p.j ., de la banda , m.j.g . , maher , m.j ., marriott , k. , slaney , j.k . , somogyi , z. , wallace , m. , walsh , t. : the g12 project : mapping solver independent models to efficient solutions . in : iclp 2005 .
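the specialised variable types proposed above as the next step can be illustrated in the same spirit (a python sketch, not generated dominion code; the range 10 to 22 is simply the example quoted in the text). hard-wiring the bounds lets membership and bound queries become constant-time operations on a single bitmask instead of calls into a generic domain representation.

class IntVar10to22:
    # illustrative specialised variable type with its domain fixed to 10..22
    LOW, HIGH = 10, 22

    def __init__(self):
        self.mask = (1 << (self.HIGH - self.LOW + 1)) - 1   # all values present

    def contains(self, v):
        return self.LOW <= v <= self.HIGH and bool((self.mask >> (v - self.LOW)) & 1)

    def remove(self, v):
        if self.contains(v):
            self.mask &= ~(1 << (v - self.LOW))

    def lower_bound(self):
        return self.LOW + (self.mask & -self.mask).bit_length() - 1

    def upper_bound(self):
        return self.LOW + self.mask.bit_length() - 1

x = IntVar10to22()
x.remove(10)
x.remove(22)
print(x.lower_bound(), x.upper_bound(), x.contains(15))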
|
this paper proposes a design for a system to generate constraint solvers that are specialised for specific problem models . it describes the design in detail and gives preliminary experimental results showing the feasibility and effectiveness of the approach .
|
smart meters are a critical part of modern power distribution systems because they provide fine - grained power consumption measurements to utility providers .these fine - grained measurements improve the efficiency of the power grid by enabling services such as time - of - use pricing and demand response . however , this promise of improved efficiency is accompanied by a risk of privacy loss .it is possible for the utility provider or an eavesdropper to infer private information including load taxonomy from the fine - grained measurements provided by smart meters .such private information could be exploited by third parties for the purpose of targeted advertisement or surveillance .traditional techniques in which an intermediary anonymizes the data are also prone privacy loss to an eavesdropper .one possible solution is to partially obscure the load profile by using a rechargeable battery . as the cost of rechargeable batteries decreases ( for example , due to proliferation of electric vehicles ) , using them for improving privacy is becoming economically viable .in a smart metering system with a rechargeable battery , the energy consumed from the power grid may either be less than the user s demand the rest being supplied by the battery ; or may be more than the user s demand the excess being stored in the battery .a rechargeable battery provides privacy because the power consumed from the grid ( rather than the user s demand ) gets reported to the electricity utility ( and potentially observed by an eavesdropper ) . in this paper , we focus on the mutual information between the user s demand and consumption ( i.e. , the information leakage ) as the privacy metric .mutual information is a widely used metric in the literature on information theoretic security , as it is often analytically tractable and provides a fundamental bound on the probability of detecting the true load sequence from the observation .our objective is to identify a battery management policy ( which determine how much energy to store or discharge from the battery ) to minimize the information leakage rate .we briefly review the relevant literature .the use of a rechargeable battery for providing user privacy has been studied in several recent works , e.g. , .most of the existing literature has focused on evaluating the information leakage rate of specific battery management policies .these include the `` best - effort '' policy , which tries to maintain a constant consumption level , whenever possible ; and battery conditioned stochastic charging policies , in which the conditional distribution on the current consumption depends only on the current battery state ( or on the current battery state and the current demand ) . 
in , the information leakage rate was estimated using monte - carlo simulations ; in , it was calculated using the bcjr algorithm .the methodology of was extended by to include models with energy harvesting and allowing for a certain amount of energy waste .bounds on the performance of the best - effort policy and hide - and - store policy for models with energy harvesting and infinite battery capacity were obtained in .the performance of the best effort algorithm for an individual privacy metric was considered in .none of these papers address the question of choosing the optimal battery management policy .rate - distortion type approaches have also been used to study privacy - utility trade - off .these models allow the user to report a distorted version of the load to the utility provider , subject to a certain average distortion constraint .our setup differs from these works as we impose a constraint on the _ instantaneous _ energy stored in the battery due to its limited capacity .both our techniques and the qualitative nature of the results are different from these papers .our contributions are two - fold .first , when the demand is markov , we show that the minimum information leakage rate and optimal battery management policies can be obtained by solving an appropriate dynamic program .these results are similar in spirit to the dynamic programs obtained to compute capacity of channels with memory ; however , the specific details are different due to the constraint on the battery state .second , when the demand is i.i.d ., we obtain a single letter characterization of the minimum information leakage rate ; this expression also gives the optimal battery management policy .we prove the single letter expression in two steps . on the achievability sidewe propose a class of policies with a specific structure that enables a considerable simplification of the leakage - rate expression .we find a policy that minimizes the leakage - rate within this restricted class . on the converse side ,we obtain lower bounds on the minimal leakage rate and show that these lower bound match the performance of the best structured policy .we provide two proofs .one is based on the dynamic program and the other is based purely on information theoretic arguments . after the present work was completed , we became aware of , where a similar dynamic programming framework is presented for the infinite horizon case .however , no explicit solutions of the dynamic program are derived in . to the best of our knowledge ,the present paper is the first work that provides an explicit characterization of the optimal leakage rate and the associated policy for i.i.d. demand .random variables are denoted by uppercase letters ( , , etc . ) , their realization by corresponding lowercase letters ( , , etc . ) , and their state space by corresponding script letters ( , , etc . ) . denotes the space of probability distributions on ; denotes the space of stochastic kernels from to . is a short hand for and .for a set , denotes the indicator function of the set that equals if and zero otherwise .if is a singleton set , we use instead of .given random variables with joint distribution , and denote the entropy of , and denote conditional entropy of given and and denote the mutual information between and .consider a smart metering system as shown in fig .[ fig : system ] . at each time, the energy consumed from the power grid must equal the user s demand plus the additional energy that is either stored in or drawn from the battery . 
let , , denote the user s demand ; , , denote the energy drawn from the grid ; and , , denote the energy stored in the battery .all alphabets are finite . for convenience , we assume , , and .we note that such a restriction is for simplicity of presentation ; the results generalize even when and are not necessarily contiguous intervals or integer valued . to guarantee that user s demand is always satisfied , we assume or that holds more generally .the demand is a first - order time - homogeneous markov chain with transition probability .we assume that is irreducible and aperiodic .the initial state is distributed according to probability mass function .the initial charge of the battery is independent of and distributed according to probability mass function .the battery is assumed to be ideal and has no conversion losses or other inefficiencies .therefore , the following conservation equation must be satisfied at all times : given the history of demand , battery charge , and consumption , a randomized _ battery charging policy _ determines the energy consumed from the grid . in particular , given the histories , of demand , battery charge , and consumption at time , the probability that current consumption equals is . for a randomized charging policy to be feasible , it must satisfy the conservation equation .so , given the current power demand and battery charge , the feasible values of grid consumption are defined by thus , we require that the set of all such feasible policies is denoted by to denote the battery policy for both the infinite and finite - horizon problems ] . note that while the charging policy can be a function of the entire history , the support of only depends on the present value of and through the difference .this is emphasized in the definition in .the quality of a charging policy depends on the amount of information leaked under that policy .there are different notions of privacy ; in this paper , we use mutual information as a measure of privacy . intuitively speaking ,given random variables , the mutual information measures the decrease in the uncertainty about given by ( or vice - versa ) .therefore , given a policy , the information about leaked to the utility provider or eavesdropper is captured by , where the mutual information is evaluated according to the joint probability distribution on induced by the distribution as follows : .\end{gathered}\ ] ] we use information leakage _ rate _ as a measure of the quality of a charging policy . for a finite planning horizon ,the information leakage rate of a policy is given by while for an infinite horizon , the worst - case information leakage rate of a policy is given by we are interested in the following optimization problems : [ prob : original ] given the alphabet of the demand , the initial distribution and the transistion matrix of the demand process , the alphabet of the battery , the initial distribution of the battery state , and the alphabet of the consumption : 1 .for a finite planning horizon , find a battery charging policy that minimizes the leakage rate given by .2 . for an infinite planning horizon ,find a battery charging policy that minimizes the leakage rate given by .the above optimization problem is difficult because we have to optimize a multi - letter mutual information expression over the class of history dependent probability distributions . 
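the conservation constraint and a feasible randomized charging policy are easy to sketch for binary alphabets (python, illustration only; the binary values, the i.i.d. demand and the sign convention s_{t+1} = s_t + y_t - x_t for the conservation equation are assumptions made for the example). the policy below simply picks uniformly among the consumptions that keep the battery level within its capacity.

import random

def feasible(x, s, s_max=1):
    # consumptions y that keep the battery level s + y - x inside [0, s_max]
    return [y for y in (0, 1) if 0 <= s + y - x <= s_max]

def uniform_policy(x, s):
    # choose uniformly among the feasible consumptions
    return random.choice(feasible(x, s))

def simulate(n, p_demand=0.5):
    s = random.randint(0, 1)                     # initial battery charge
    xs, ys = [], []
    for _ in range(n):
        x = int(random.random() < p_demand)      # i.i.d. binary demand
        y = uniform_policy(x, s)
        s = s + y - x                            # battery conservation
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = simulate(10)
print(list(zip(xs, ys)))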
in the spirit of results for feedback capacity of channels with memory , we show that the above optimization problem can be reformulated as a markov decision process where the state and action spaces are conditional probability distributions .thus , the optimal policy and the optimal leakage rate can be computed by solving an appropriate dynamic program .we then provide an explicit solution of the dynamic program for the case of i.i.d.demand .we illustrate the special case when in fig .[ fig : binary ] . the input , output , as well as the state , are all binary valued .when the battery is in state , there are three possible transitions . if the input then we must select and the state changes to . if instead , then there are two possibilities .we can select and have or we can select and have . in a similar fashionthere are three possible transitions from the state as shown in fig .[ fig : binary ] .we will assume that the demand ( input ) sequence is sampled i.i.d .from an equiprobable distribution . or .the set of feasible transitions from each state are shown in the figure . ]consider a simple policy that sets and ignores the battery state .it is clear that such a policy will lead to maximum leakage .another feasible policy is to set .thus whenever we will set regardless of the value of and likewise will result in .it turns out that the leakage rate for this policy also approaches . to see this note that the eavesdropper having access to also in turn knows . using the battery update equation the sequence thus revealed to the eavesdropper , resulting in a leakage rate of at least . in reference a probabilistic battery charging policy is introduced that only depends on the current state and input i.e. , .furthermore the policy makes equiprobable decisions between the feasible transitions i.e. , and otherwise .the leakage rage for this policy was numerically evaluated in using the bcjr algorithm and it was shown numerically that .such numerical techniques seem necessary in general even for the class of memoryless policies and i.i.d .inputs , as the presence of the battery adds memory into the system . as a consequence of our main resultit follows that the above policy admits a single - letter expression for the leakage rate is an equiprobable binary valued random variable , independent of . ] , thus circumventing the need for numerical techniques . furthermore it also follows that this leakage rate is indeed the minimum possible one among the class of all feasible policies .thus it is not necessary for the battery system to use more complex policies that take into account the entire history .we note that a similar result was shown in for the case of finite horizon policies .however the proof in is specific to the binary model . in the present paperwe provide a complete single - letter solution to the case of general i.i.d .demand , and a dynamic programming method for the case of first - order markovian demands , as discussed next .we identify two structional simplifications for the battery charging policies .first , we show ( see proposition [ prop : a - b ] in section [ sec : a - b ] ) that there is no loss of optimality in restricting attention to charging strategies of the form the intuition is that under such a policy , observing gives partial information only about rather than about the whole history .next , we identify a sufficient statistic for is the charging strategies of the form . 
for that matter ,given a policy and any realization of , define the belief state as follows : then , we show ( see theorem [ thm : dp ] below ) that there is no loss of optimality in restricting attention to charging strategies of the form such a charging policy is markovian in the belief state and the optimal policies of such form can be searched using a dynamic program . to describe such a dynamic program , we assume that there is a decision maker that observes ( or equivalently ) and chooses `` actions '' using some decision rule .we then identify a dynamic program to choose the optimal decision rules .note that the actions take vales in a subset of given by to succiently write the dynamic program , for any , we define the bellman operator \to [ { \mathcal}p_{x , s } \to { \mathds{r}}] ] .then , by numerically solving , we get that the optimal leakage rate is is and the optimal battery charge distribution is 2. consider ] .hence , and are equivalent .implies that is a controlled markov process with control action . in the next section ,we obtain a dynamic programming decomposition for this problem . for the purpose of writing the dynamic program , it is more convenient to write the policy as note that these two representations are equivalent .any policy of the form is also a policy of the form ( that simply ignores ) ; any policy of the form can be written as a policy of the form by recursively substituting in terms of . since the two forms are equivalent , in the next section we assume that the policy is of the form .the model described in section [ sec : equiv ] above is similar to a pomdp ( partially observable markov decision process ) : the system state is partially observed by a decision maker who chooses action . however , in contrast to the standard cost model used in pomdps , the per - step cost depends on the observation history and _ past policy_. nonetheless , if we consider the belief state as the information state , the problem can be formulated as a standard mdp . 
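the two ingredients of this dynamic program, the belief-state update and the per-step mutual-information cost, can be sketched for the binary model as follows (python with numpy; the binary instantiation, the equiprobable demand matrix and the conservation convention s_{t+1} = s_t + y_t - x_t are assumptions of the example, and the routines are only hand-written renderings of the update and cost expressions developed in the next section).

import numpy as np

def belief_update(pi, y, a, p_x):
    # pi[x, s] is the belief over demand and battery charge; keep the pairs
    # consistent with the observed consumption y, weigh them by the charging
    # kernel a[y, x, s], propagate the demand with p_x and renormalise
    new = np.zeros_like(pi)
    for x2 in range(2):
        for s2 in range(2):
            for x in range(2):
                for s in range(2):
                    if s2 == s + y - x:          # assumed conservation law
                        new[x2, s2] += p_x[x, x2] * a[y, x, s] * pi[x, s]
    total = new.sum()
    return new / total if total > 0 else new

def step_cost(pi, a):
    # per-step leakage: mutual information between (x, s) and y when
    # (x, s) is distributed according to pi and y is drawn from a
    cost = 0.0
    for y in range(2):
        p_y = sum(pi[x, s] * a[y, x, s] for x in range(2) for s in range(2))
        for x in range(2):
            for s in range(2):
                p = pi[x, s] * a[y, x, s]
                if p > 0:
                    cost += p * np.log2(a[y, x, s] / p_y)
    return cost

p_x = np.full((2, 2), 0.5)                       # i.i.d. equiprobable demand
a = np.zeros((2, 2, 2))                          # uniform over feasible consumptions
for x in range(2):
    for s in range(2):
        feas = [y for y in range(2) if 0 <= s + y - x <= 1]
        for y in feas:
            a[y, x, s] = 1.0 / len(feas)

pi = np.full((2, 2), 0.25)
print("one-step leakage:", round(step_cost(pi, a), 4))
print("updated belief after observing y = 1:")
print(belief_update(pi, 1, a, p_x))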
for that matter , for any realization of past observations and any choice of past actions , define the belief state as follows : for and , if and are random variables , then the belief state is a -valued random variable .the belief state evolves in a state - like manner as follows .[ lem : pi ] for any realization of and of , is given by where is given by for ease of notation , we use to denote .similar interpretations hold for other expressions as well .consider now , consider the numerator of the right hand side .{\sum _ { ( x_t , s_t ) \in { \mathcal}x \times { \mathcal}s } } { \mathds{p}}(x_{t+1 } | x_t ) { \mathds{1 } } _ { s_{t+1 } } ( s_t + x_t - y_t ) \notag\\ & \qquad \times a_t(y_t | x_t , s_t ) { \mathds{1 } } _ { a_t } ( f_t(\pi_t ) ) \pi_t(x_t , s_t ) \label{eq : pi-2 } \end{aligned}\ ] ] substituting in ( and observing that the denominator of the right hand side of is the marginal of the numerator over ) , we get that can be written in terms of , and .note that if the term is , it cancels from both the numerator and the denominator ; if it is , we are conditioning on a null event in , so we can assign any valid distribution to the conditional probability .note that an immediate implication of the above result is that depends only on and not on the policy .this is the main reason that we are working with a policy of the form rather than .[ lem : cost ] the cost in can be written as \ ] ] where does not depend on the policy and is computed according to the standard formula \pi_t(x , s ) a_t(y \mid x , s ) \\\times \log \frac { a_t(y|x , s ) } { \strut \smashoperator { \sum\limits_{(\tilde x,\tilde s ) \in { \mathcal}x \times { \mathcal}s } } \pi_t(\tilde x,\tildes ) a_t(y \mid \tilde x,\tilde s ) } .\end{lgathered}\ ] ] by the law of iterated expectations , we have \bigg]\ ] ] now , from , each summand may be written as \\ & \quad = \sum_{\substack { x \in { \mathcal}x , s \in { \mathcal}s , \\ y \in { \mathcal}y } } \begin{lgathered}[t ] \pi_t(x , s ) a_t(y \mid x , s ) \\ \times \log \frac { a_t(y|x , s ) } { \strut \smashoperator { \sum\limits_{(\tilde x,\tilde s ) \in { \mathcal}x \times { \mathcal}s } } \pi_t(\tilde x,\tilde s ) a_t(y \mid \tilde x,\tilde s ) } \end{lgathered}\\ & \qquad = i(a_t ; \pi_t ) .\end{aligned}\ ] ] lemma [ lem : pi ] implies that is a controlled markov process with control action .in addition , lemma [ lem : cost ] implies that the objective function can be expressed in terms of the _ state _ and the action .consequently , in the equivalent optimization problem described in section [ sec : equiv ] , there is no loss of optimality to restrict attention to markovian policies of the form ; an optimal policy of this form is given by the dynamic programs of theorem [ thm : dp ] .proposition [ prop : equiv ] implies that this dynamic program also solves problem [ prob : original ] .note that the markov chain is irreducible and aperiodic and the per - step cost is bounded .therefore , the existence of the solution of the infinite horizon dynamic program can be shown by following the proof argument of .the dynamic program of theorem [ thm : dp ] , both state and action spaces are distribution valued ( and , therefore , subsets of euclidean space ) .although , an exact solution of the dynamic program is not possible , there are two approaches to obtain an approximate solution .the first is to treat it as a dynamic program of an mdp with continuous state and action spaces and use approximate dynamic programming .the second is to treat it as a dynamic 
program for a podmp and use point - based methods .the point - based methods rely on concavity of the value function , which we establish below .[ prop : concave ] the value functions defined in theorem [ thm : dp ] are concave .see appendix [ app : concave ] for proof .under ( a ) , the belief state can be decomposed into product form , where thus , in principle , we can simplify the dynamic program of theorem [ thm : dp ] by using as an information state .however , for reasons that will become apparent , we provide an alternative simplification that uses an information state .recall that which takes values in . for any realization of past observations and actions ,define as follows : for any , if and are random variables , then is a -valued random variable . as was the case for , it can be shown that does not depend on the choice of the policy .[ lem : equiv ] under ( a ) , and are related as follows : 1 .2 . . for part 1 ) : for part 2 ) : since , lemma [ lem : equiv ] suggests that one could simplify the dynamic program of theorem [ thm : dp ] by using as the information state instead of .for such a simplification to work , we would have to use charging policies of the form .we establish that restricting attention to such policies is without loss of optimality . for that matter ,define as follows : [ lem : iid ] given and , define the following : * as * as follows : for all * as follows : for all then under ( a ) , we have 1 .invariant transitions : for any , .lower cost : .therefore , in the sequential problem of sec .[ sec : equiv ] , there is no loss of optimality in restricting attention to actions . 1 .suppose and , , .we will compare when with when .given and , where ( a ) uses that for all , .marginalizing over , we get that . since , eq. also implies .therefore , .2 . let and . then .therefore , we have where the last inequality is the data - processing inequality . under , ,therefore , + now , by construction , the joint distribution of is the same under , , and .thus , note that can also be written as .the result follows by combining all the above relations .once attention is restricted to actions , the update of may be expressed in terms of as follows : [ lem : xi ] for any realization of and of , is given by where is given by the proof is similar to the proof of lemma [ lem : xi ] .for any and , let us define the bellman operator \to [ { \mathcal}p_w \to { \mathds{r}}] ] , which implies .part 4 ) follows from part 3 ) by setting .lemma [ prop : convex ] implies that defined in theorem [ thm : iid ] lies in .then , by lemma [ lem : converge ] , the constant - distribution policy ( where is given by theorem [ thm : iid ] ) , achieves the leakage rate . by lemma [ lem : mi ], is same as defined in theorem [ thm : iid ] .thus , is achievable starting from any initial state .we provide two converses .one is based on the dynamic program of theorem [ thm : iid - dp ] , which is presented in this section ; the other is based purely on information theoretic arguments , which is presented in the next section . in the dynamic programming converse ,we show that for given in theorem [ thm : iid ] , , and any , ({\xi } ) } , \quad \forall \xi \in { \mathcal}p_w,\ ] ] thus , is a lower bound of the optimal leakage rate ( see ) . 
to prove , pick any and .suppose , , , is independent of and and .then , ({\xi } ) } & = i(b;\xi ) + \smashoperator{\sum_{(w_1 , y_1 ) \in { \mathcal}w \times { \mathcal}y } } \xi(w_1 ) b(y_1|w_1 ) v^*(\hat \varphi(\xi , y_1 , b ) ) \notag \\ & = i(w_1;y_1 ) + h(w_2|y_1 ) \label{eq : dpc-1}\end{aligned}\ ] ] where the second equality is due to the definition of conditional entropy .consequently , ({\xi } ) } - v^*(\xi ) = h(w_2|y_1 ) - h(w_1|y_1 ) \notag \\ & = h(w_2 | y_1 ) - h(w_1 + y_1 | y_1 ) \notag \\ & \stackrel{\text{(a)}}= h(s_2 - x_2 | y_1 ) - h(s_2 | y_1)\notag \\ & \stackrel{\text{(b)}}\ge \min_{\theta_2 \in { \mathcal}p_s } \big [ h(\tilde s_2 - x_2 ) - h(\tilde s_2 ) \big ] , \qquad \tilde s_2 \sim \theta_2 \notag \\ & = j^ * \label{eq : dpc-2}\end{aligned}\ ] ] where ( a ) uses and ; ( b ) uses the fact that ] is a concave function of .we show the same for the second term .note that if a function is concave , then it s perspective is concave in the domain . the second term in the definition of the bellman operator v(\varphi(\pi , y , a ) ) \end{aligned}\ ] ]has this form because the numerator of is linear in and the denominator is ( and corresponds to in the definition of perspective ) .thus , for each , the summand is concave in , and the sum of concave functions is concave .hence , the second term of the bellman operator is concave in .thus we conclude that concavity is preserved under .the proof of the convergence of relies on a result on the convergence of partially observed markov chains due to kaijser that we restate below .[ thm : kaijser ] let , , be a finite state markov chain with transition matrix .the initial state is distributed according to probability mass function . given a finite set and an observation function ,define the following : * the process , , given by * the process , , given by * a square matrix , , given by {i , j } = \begin{cases } p^u_{ij } &\text { if } g(j ) = z\\ 0 & \text { otherwise } \end{cases } \qquad i , j \in { \mathcal}u.\ ] ] next , let and consider we will show that this satisfies the subrectangularity condition of theorem [ thm : kaijser ] .the basic idea is the following .consider any initial state and any final state .we will show that eqs . andshow that given the observation sequence , for _ any _ initial state there is a positive probability of observing _ any _ final state ., the final state must be of the form . ] hence , the matrix is subrectangular .consequently , by theorem [ thm : kaijser ] , the process converges in distribution to a limit that is independent of the initial distribution .let and denote the limit of and .suppose the initial condition is .since is an invariant policy , for all .therefore , the limits . given the initial state , define , and consider the sequence under this sequence of demands , consider the sequence of consumption , which is feasible because the state of the battery increases by for the first steps ( at which time it reaches ) and then remains constant for the remaining steps .therefore , since the sequnce of demands has a positive probability , therefore , which completes the proof. the proof is similar to the proof of . 
given the final state , define and consider the sequence under this sequnce of demains and the sequence of consumption given by , the state of the battery decreases by for the first steps ( at which time it reaches ) and then remains constant for the remaining steps .since has positive probability , we can complete the proof by following an argument similar to that in the proof of .a. predunzi , `` a neuron nets based procedure for identifying domestic appliances pattern - of - use from energy recordings at meter panel , '' in _ proc .ieee power eng .society winter meeting , new york _, jan . 2002 .h. y. lam , g. s. k. fung , and w. k. lee , `` a novel method to construct taxonomy of electrical appliances based on load signatures , '' ieee trans .consumer electronics , vol .653 - 660 , may 2007 .g. kalogridis , c. efthymiou , s. z. denic , t. a. lewis , and r. cepeda , `` privacy for smart meters : towards undetectable appliance load signatures , '' in proc .ieee smart grid commun .gaithersburg , maryland , 2010 .d. varodayan and a. khisti , `` smart meter privacy using a rechargeable battery : minimizing the rate of information leakage , '' in _ proc .speech sig .prague , czech republic _ , may 2011 .o. tan , d. gunduz , and h. v. poor , increasing smart meter privacy through energy harvesting and storage devices " , ieee journal on selected areas in communications , vol .31 , pp .1331 - 1341 , 2012 chicago l. sankar , s.r .rajagopalan , and h.v .poor , `` an information - theoretic approach to privacy , '' in proc .48th annual allerton conf . on commun . , control , and computing , monticello , il , sep .2010 , pp .1220 - 1227 .a. arapostathis , v. borkar , e. fernandez - gaucherand , m. k. gosh , and s. i. marcus , `` discrete - time controlled markov processes with average cost criterion : a survey , '' siam j. contr .282 - 344 , mar .1993 .s. li , a. khisti , and a. mahajan , `` structure of optimal privacy - preserving policies in smart - metered systems with a rechargeable battery , '' proceedings of the ieee spawc , pp. 1 - 5 , stockholm , sweden , june 28-july 1 , 2015 .
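as a closing illustration, the quantity appearing in step (b) of the dynamic programming converse, the minimum over battery-charge distributions of h(s - x) - h(s), can be evaluated numerically for the binary case (python; equiprobable binary demand is assumed). a grid search recovers the minimising charge distribution and the corresponding single-letter leakage value.

import numpy as np

def entropy(probs):
    p = np.array([q for q in probs if q > 0])
    return float(-(p * np.log2(p)).sum())

def bound(q, px=(0.5, 0.5)):
    # h(s - x) - h(s) with s ~ bernoulli(q) independent of the demand x
    ps = (1 - q, q)
    diff = {}
    for s, p_s in enumerate(ps):
        for x, p_x in enumerate(px):
            diff[s - x] = diff.get(s - x, 0.0) + p_s * p_x
    return entropy(diff.values()) - entropy(ps)

grid = np.linspace(0.0, 1.0, 1001)
values = [bound(q) for q in grid]
k = int(np.argmin(values))
print("minimising charge distribution: p(s = 1) = %.3f, bound = %.4f bits"
      % (grid[k], values[k]))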
|
smart - metering systems report electricity usage of a user to the utility provider on an almost real - time basis . this could leak private information about the user to the utility provider . in this work we investigate the use of a rechargeable battery in order to provide privacy to the user . we assume that the user load sequence is a first - order markov process , the battery satisfies ideal charge conservation , and that privacy is measured using normalized mutual information ( leakage rate ) between the user load and the battery output . we consider battery charging policies in this setup that satisfy the feasibility constraints . we propose a series of reductions of the original problem and ultimately recast it as a markov decision process ( mdp ) that can be solved using a dynamic program . in the special case of i.i.d . demand , we explicitly characterize the optimal policy and show that the associated leakage rate can be expressed as a single - letter mutual information expression . in this case we show that the optimal charging policy admits an intuitive interpretation of preserving a certain invariance property of the state . interestingly , an alternative proof of optimality can be provided that does not rely on the mdp approach , but is based on purely information theoretic reductions .
|
narratives have been a focus on study from several perspectives , most prominently from the viewpoint of language , literature , and computational linguistics ; see for instance , discourse analysis and computational narratology . from the viewpoint of commonsense reasoning , and closely related to the computational models of narrative perspective ,is the position of researchers in logics of _ action and change _ ; here , narratives are interpreted as `` _ _ a sequence of events about which we may have incomplete , conflicting or incorrect information _ _ '' . as per ,`` _ _ a narrative tells what happened , but any narrative can only tell a certain amount. a narrative will usually give facts about the future of a situation that are not just consequences of projection from an initial situation _the interpretation of narrative knowledge in this paper is based on these characterisations , especially in regard to the commonsense representation and reasoning tasks that accrue whilst modelling and reasoning about the perceptually grounded , narrativised epistemic state of an autonomous agent pertaining to _ space , actions , events , and change _ . in particular , this encompasses a range of inference patterns such as : ( a ) spatio - temporal abduction for scenario and narrative completion ; ( b ) integrated inductive - abductive reasoning with narrative knowledge ; ( c ) narrative - based postdiction for abnormality detection and planning .* perceptual narratives*.these are declarative models of visual , auditory , haptic and other observations in the real world that are obtained via artificial sensors and / or human input .declarative models of perceptual narratives can be used for interpretation and control tasks in the course of assistive technologies in everyday life and work scenarios , e.g. , behaviour interpretation , robotic plan generation , semantic model generation from video , ambient intelligence and smart environments ( e.g. , see narrative based models in ) . *high - level cognitive interpretation and control*.our research is especially concerned with large - scale cognitive interaction systems where high - level perceptual sense - making , planning , and control constitutes one of many ai sub - components guiding other low - level control and attention tasks . as an example , consider the _smart meeting cinematography _domain in listing 1 . in this domain , _ perceptual narratives _ as in fig . 1 are generated based on perceived spatial change interpreted as interactions of humans in the environment .such narratives explaining the ongoing activities are needed to anticipate changes in the environment , as well as to appropriately influence the real - time control of the camera system . to convey the meaning of the presentation and the speakers interactions with a projection , the camera has to capture the scene including the speakers gestures , slides , and the audience .e.g. , in fig .1 , when the speaker explains the slides , the camera has to capture the speaker and the corresponding information on the slides . to this end ,the camera records an overview shot capturing the speaker and the projection , and zooms on the particular element when the speaker explains it in detail , to allow the viewer to follow the presentation and to get the necessary information .when the speaker continues the talk , the camera focuses on the speaker to omit unnecessary and distracting information . to capture reactions of the audience , e.g. 
comments , questions or applause, the camera records an overview of the attending people or close - up shots of the commenting or asking person .systems that monitor and interact with an environment populated by humans and other artefacts require a formal means for representing and reasoning about spatio - temporal , event and action based phenomena that are grounded to real public and private scenarios ( e.g. , logistical processes , activities of everyday living ) of the environment being modelled .a fundamental requirement within such application domains is the representation of _ dynamic _ knowledge pertaining to the spatial aspects of the environment within which an agent , system , or robot is functional .this translates to the need to explicitly represent and reason about dynamic spatial configurations or scenes and , for real world problems , integrated reasoning about perceptual narratives of _ space , actions , and change _ . with these modelling primitives ,the ability to perform _ predictive _ and _ explanatory _ analyses on the basis of sensory data is crucial for creating a useful intelligent function within such environments . to understand the nature of perceptual narratives ( of space , and motion ) , consider the aforediscussed work - in - progress domain of _ smart meeting cinematography _( listing 1 ) .the particular infrastructural setup for the example presented herein consists of pan - tilt - zoom ( ptz ) capable cameras , depth sensors ( kinect ) , and a low - level vision module for people tracking ( whole body , hand gesture , movement ) customised on the basis of open - source algorithms and software . with respect to this setup , declaratively grounded perceptual narratives capturing the information in fig .[ fig : rotunde - activities ] is developed on the basis of a commonsense theory of _ qualitative space _( listing 2 ) , and interpretation of motion as qualitative spatial change .in particular , the overall model as depicted in fig .[ pn ] consists of : _ space and motion _ : a theory to declaratively reason about qualitative spatial relations ( e.g. , topology , orientation ) , and qualitative motion perceived in the environment and interpret changes as domain dependent observations in the context of everyday activities involving humans and artefacts. _ explanation of ( spatial ) change _ : hypothesising real - world ( inter)actions of individuals explaining the observations by integrating the qualitative theory with a learning method ( e.g. , bayesian and markov based ( logic ) learning ) to incorporate uncertainty in the interpretation of observation sequences ._ semantic characterisation _ : as a result of the aforementioned , real - time generation of declarative narratives of perceptual data ( e.g. , rgb - d ) obtained directly from people / object tracking algorithms .hypothesised object relations can be seen as building blocks to form complex interactions that are semantically interpreted as activities in the context of the domain . as an example consider the sequence of observations in the meeting environment depicted in fig .[ pn ] . 
_ region p elongates vertically , region p approaches region q from the right , region p partially overlaps with region q while p being further away from the observer than q , region p moves left , region p recedes from region q at the left , region p gets disconnected from region q , region p disappears at the left border of the field of view . _ to explain these observations in the ` context ' of the meeting situation we make hypotheses about possible interactions in the real world : _ person p stands up , passes behind person q while moving towards the exit and leaves the room . _ the * semantic interpretation of activities * from video , depth ( e.g.
, time - of - flight devices such as kinect ) , and other forms of sensory input requires the representational and inferential mediation of qualitative abstractions of space , action , and change . such relational abstractions serve as a bridge between high - level domain - specific conceptual or activity theoretic knowledge , and low - level statistically acquired features and sensory grammar learning techniques . generation of perceptual narratives , and their access via the declarative interface of logic programming facilitates the integration of the overall framework in bigger projects concerned with cognitive vision , robotics , hybrid - intelligent systems etc . in the smart meeting cinematography domain the generated narratives are used to explain and understand the observations in the environment and anticipate interactions in it to allow for intelligent coordination and control of the involved ptz - cameras . the smart meeting cinematography scenario presented in this paper serves as a challenging benchmark to investigate narrative based high - level cognitive interpretation of everyday interactions . work is in progress to release certain aspects ( pertaining to space , motion , real - time high - level control ) emanating from the narrative model via the interface of constraint logic programming ( e.g. , as a prolog based library of depth we also plan to release general tools to perform management and visualisation of activity interpretation data .
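to make the preceding description concrete , the following is a minimal python sketch of how such a perceptual narrative could be represented and queried : observations are stored as timestamped qualitative facts , and a hand - written rule abduces the high - level interaction hypothesised above . the relation names , the rule and the data structure are illustrative assumptions ; the actual framework is based on constraint logic programming rather than python .

```python
# a minimal sketch (not the authors' CLP/Prolog implementation): a perceptual
# narrative as timestamped qualitative facts, plus one hand-written rule that
# hypothesises a high-level interaction from the observation sequence.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Observation:
    time: int                 # discrete time step of the observation
    subject: str              # tracked region / person
    relation: str             # qualitative spatio-temporal relation
    referent: Optional[str]   # second region, or None for unary relations

# the observation sequence of the meeting example, written as timestamped facts
narrative = [
    Observation(1, "p", "approaches", "q"),
    Observation(2, "p", "partially_overlaps", "q"),
    Observation(3, "p", "recedes_from", "q"),
    Observation(4, "p", "disconnected_from", "q"),
    Observation(5, "p", "disappears_at_border", None),
]

def hypothesise_leaving(narrative, person, other):
    """abduce 'person passes behind other and leaves the room' when overlap is
    followed by disconnection and disappearance at the border of the view."""
    first_time = {}
    for obs in narrative:
        if obs.subject == person:
            first_time.setdefault(obs.relation, obs.time)
    needed = ["partially_overlaps", "disconnected_from", "disappears_at_border"]
    if all(r in first_time for r in needed) and \
       first_time[needed[0]] < first_time[needed[1]] < first_time[needed[2]]:
        return f"{person} passes behind {other} and leaves the room"
    return None

print(hypothesise_leaving(narrative, "p", "q"))
```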
|
we position a narrative - centred computational model for high - level knowledge representation and reasoning in the context of a range of assistive technologies concerned with _ visuo - spatial perception and cognition _ tasks . our proposed narrative model encompasses aspects such as _ space , events , actions , change , and interaction _ from the viewpoint of commonsense reasoning and learning in large - scale cognitive systems . the broad focus of this paper is on the domain of _ human - activity interpretation _ in smart environments , ambient intelligence etc . in the backdrop of a _ smart meeting cinematography _ domain , we position the proposed narrative model , preliminary work on perceptual narrativisation , and the immediate outlook on constructing general - purpose open - source tools for perceptual narrativisation .
|
video compressive sensing ( cs ) refers to the problem of recovering an unknown spatio - temporal volume from limited samples .it comes in two incarnations , namely , spatial cs and temporal cs .spatial video cs architectures stem from the well - known single - pixel - camera , which performs spatial multiplexing per measurement , and enable video recovery by expediting the capturing process .they either employ fast readout circuitry to capture information at video rates or parallelize the single - pixel architecture using multiple sensors , each one responsible for sampling a separate spatial area of the scene .in this work , we focus on temporal cs where multiplexing occurs across the time dimension .figure [ fig : measurementmodel ] depicts this process , where a spatio - temporal volume of size is modulated by binary random masks during the exposure time of a single capture , giving rise to a coded frame of size .we denote the vectorized versions of the unknown signal and the captured frame as and , respectively .each vectorized sampling mask is expressed as giving rise to the measurement model where : m_{f } \times n_{f} ] , $ ] and is the same for all blocks .the linear mapping we are after can be calculated as where is of size . intuitively , such an approach would not necessarily be expected to even provide a solution due to ill - posedness .however , it turns out that , if is sufficiently large and the matrix has at least one nonzero in each row ( i.e. , sampling each spatial location at least once over time ) , the estimation of s by the s provides surprisingly good performance .specifically , we obtain measurements from a test video sequence applying the same per video block and then reconstruct all blocks using the learnt .figure [ fig : wcomparison ] depicts the average peak signal - to - noise ratio ( psnr ) and structural similarity metric ( ssim ) for the reconstruction of video sequences using different realizations of the random binary matrix for varying percentages of nonzero elements .the empty bars for and of nonzeros in realizations and , respectively , refer to cases when there was no solution due to the lack of nonzeros at some spatial location . in these experiments selected as simulating the reconstruction of frames by a single captured frame and .based on the performance in figure [ fig : wcomparison ] , investigating the extension of the linear mapping in to a nonlinear mapping using deep networks seemed increasingly promising . in order for such an approach to be practical , though , reconstruction has to be performed on blocks and each block must be sampled with the same measurement matrix .furthermore , such a measurement matrix should be realizable in hardware .hence we propose constructing a which consists of repeated identical building blocks of size , as presented in figure [ fig : maskconstruction ] .such a matrix can be straightforwardly implemented on existing systems employing dmds , slms or lcos . at the same time , in systems utilizing translating masks , a repeated mask can be printed and shifted appropriately to produce the same effect . 
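the following is a minimal sketch of the linear - mapping baseline just described , assuming an illustrative block size , number of frames and mask density , and using random data as a stand - in for blocks extracted from training videos ; the mapping from coded patches to video blocks is fitted by ordinary least squares .

```python
# sketch of the linear mapping learnt between coded patches and video blocks;
# block size, frame count and mask density are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
wp, t = 8, 16                    # spatial block size and frames per block (assumed)
d_x, d_y = wp * wp * t, wp * wp  # video-block and coded-patch dimensions

# random binary sampling masks, one per frame, with ~50% nonzeros (assumed);
# force every spatial location to be sampled at least once over time
phi = (rng.random((t, wp * wp)) < 0.5).astype(float)
phi[rng.integers(t, size=wp * wp), np.arange(wp * wp)] = 1.0

def measure(block):
    """temporal compressive measurement: sum of masked frames (one coded patch)."""
    return np.einsum('tk,tk->k', phi, block.reshape(t, wp * wp))

# synthetic training pairs (stand-ins for blocks extracted from training videos)
n_train = 20000
X = rng.standard_normal((n_train, d_x))
Y = np.stack([measure(x) for x in X])

# least-squares fit of the mapping from coded patches to video blocks
M, *_ = np.linalg.lstsq(Y, X, rcond=None)   # shape (d_y, d_x)

# apply the learnt mapping to a held-out coded patch
x_test = rng.standard_normal(d_x)
x_hat = measure(x_test) @ M
print("relative error:", np.linalg.norm(x_hat - x_test) / np.linalg.norm(x_test))
```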
in the remainder of this paper, we select a building block of size as a random binary matrix containing of nonzero elements and set , such that and .therefore , the compression ratio is .in addition , for the proposed matrix , each block is the same allowing reconstruction for overlapping blocks of size with spatial overlap of .such overlap can usually aid at improving reconstruction quality .the selection of of nonzeros was just a random choice since the results of figure [ fig : wcomparison ] did not suggest that a specific percentage is particularly beneficial in terms of reconstruction quality . in this section ,we extend the linear formulation to mlps and investigate the performance in deeper structures .we consider an mlp architecture to learn a nonlinear function that maps a measured frame patch via several hidden layers to a video block , as illustrated in figure [ fig : network ] .each hidden layer , is defined as where is the bias vector and is the output weight matrix , containing linear filters . connects to the first hidden layer , while for the remaining hidden layers , .the last hidden layer is connected to the output layer via and without nonlinearity .the non - linear function is the rectified linear unit ( relu ) defined as , .the mlp architecture was chosen due to the following considerations ; there is a need for at least one fully - connected layer ( the first one ) that would provide a 3d signal from the compressed 2d measurements . following that , one could argue that the subsequent layers could be 3d convolutional layers .although that would sound reasonable , in practice , the small size of blocks used in this paper ( ) do not allow for convolutions to be effective .such small block sizes have provided good reconstruction quality in dictionary learning approaches used for cs video reconstruction .thus , mlps were considered more reasonable in our work and we found that when applied in blocks they capture the motion and spatial details of videos adequately .besides , increasing the size of blocks would dramatically increase the network complexity in 3d volumes such as in videos .to train the proposed mlp , we learn all the weights and biases of the model .the set of parameters is denoted as and is updated by the backpropagation algorithm minimizing the quadratic error between the set of training mapped measurements and the corresponding video blocks .the loss function is the mean squared error ( mse ) which is given by the mse was used in this work since our goal is to optimize the psnr which is directly related to the mse .we compare our proposed deep architecture with state - of - the - art approaches both quantitatively and qualitatively .the proposed approaches are evaluated assuming noiseless measurements or under the presence of measurement noise .finally , we investigate the performance of our methods under different network parameters ( e.g. , number of layers ) and size of training samples .the metrics used for evaluation were the psnr and ssim . 
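a sketch of the fully - connected architecture described above is given below , written with pytorch ; the hidden - layer width , the number of hidden layers and the block dimensions are illustrative assumptions rather than the exact values used in the experiments .

```python
# sketch of the MLP mapping a vectorized coded patch to a vectorized video block;
# widths and depths here are assumptions for illustration.
import torch
import torch.nn as nn

class VideoCSMLP(nn.Module):
    def __init__(self, wp=8, t=16, hidden=2048, n_hidden_layers=7):
        super().__init__()
        in_dim, out_dim = wp * wp, wp * wp * t
        layers = [nn.Linear(in_dim, hidden), nn.ReLU()]    # lifts the 2-d coded patch
        for _ in range(n_hidden_layers - 1):
            layers += [nn.Linear(hidden, hidden), nn.ReLU()]
        layers += [nn.Linear(hidden, out_dim)]             # output layer, no nonlinearity
        self.net = nn.Sequential(*layers)

    def forward(self, y):
        # y: batch of vectorized coded patches -> batch of vectorized video blocks
        return self.net(y)

model = VideoCSMLP()
criterion = nn.MSELoss()                # MSE loss, directly related to the PSNR
y = torch.randn(32, 8 * 8)              # a batch of coded patches
x = torch.randn(32, 8 * 8 * 16)         # the corresponding video blocks
print(criterion(model(y), x).item())
```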
for deep neural networks , increasingthe number of training samples is usually synonymous to improved performance .we collected a diverse set of training samples using high - definition videos from youtube , depicting natural scenes .the video sequences contain more than frames which were converted to grayscale .all videos are unrelated to the test set .we randomly extracted million video blocks of size while keeping the amount of blocks extracted per video proportional to its duration .this data was used as output while the corresponding input was obtained by multiplying each sample with the measurement matrix ( see subsection [ subsec : mask_construction ] for details ) .example frames from the video sequences used for training are shown in figure [ fig : trainingframes ] .our networks were trained for up to iterations using a mini - batch size of .we normalized the input per - feature to zero mean and standard deviation one .the weights of each layer were initialized to random values uniformly distributed in , where is the size of the previous layer .we used stochastic gradient descent ( sgd ) with a starting learning rate of , which was divided by after iterations .the momentum was set to 0.9 and we further used norm gradient clipping to keep the gradients in a certain range .gradient clipping is a widely used technique in recurrent neural networks to avoid exploding gradients .the threshold of gradient clipping was set to .we compare our method with the state - of - the - art video compressive sensing methods : * gmm - tp , a gaussian mixture model ( gmm)-based algorithm . *mmle - gmm , a maximum marginal likelihood estimator ( mmle ) , that maximizes the likelihood of the gmm of the underlying signals given only their linear compressive measurements .for temporal cs reconstruction , data driven models usually perform better than standard sparsity - based schemes .indeed , both gmm - tp and mmle - gmm have demonstrated superior performance compared to existing approaches in the literature such as total - variation ( tv ) or dictionary learning , hence we did not include experiments with the latter methods . in gmm - tp we followed the settings proposed by the authors and used our training data ( randomly selecting samples ) to train the underlying gmm parameters .we found that our training data provided better performance compared to the data used by the authors . in our experimentswe denote this method by gmm- to denote reconstruction of overlapping blocks with spatial overlap of pixels , as discussed in subsection [ subsec : mask_construction ] .mmle is a self - training method but it is sensitive to initialization . a satisfactory performance is obtained only when mmle is combined with a good starting point . in ,the gmm - tp with full overlapping patches ( denoted in our experiments as gmm-1 ) was used to initialize the mmle .we denote the combined method as gmm-1+mmle . for fairness, we also conducted experiments in the case where our method is used as a starting point for the mmle . 
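the following sketch illustrates the training procedure outlined earlier in this section ( sgd with momentum 0.9 , a step learning - rate schedule and l2 - norm gradient clipping ) ; the remaining hyper - parameter values and the data iterator ` train_loader ` are assumptions made for illustration .

```python
# sketch of the training loop; learning rate, schedule, clipping threshold and
# iteration counts are assumed values, and train_loader is a hypothetical
# iterator yielding (coded patch, video block) mini-batches.
import torch
import torch.nn as nn

def train(model, train_loader, iterations=100_000, lr=0.01, lr_drop_every=30_000,
          clip_norm=5.0, momentum=0.9):
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=lr_drop_every, gamma=0.1)
    it = 0
    while it < iterations:
        for y, x in train_loader:               # coded patches and target video blocks
            optimizer.zero_grad()
            loss = criterion(model(y), x)
            loss.backward()
            # norm gradient clipping keeps the gradients in a certain range
            torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
            optimizer.step()
            scheduler.step()
            it += 1
            if it >= iterations:
                break
    return model
```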
in our methods , a collection of overlapping patches of size extracted by each coded measurement of size and subsequently reconstructed into video blocks of size .overlapping areas of the recovered video blocks are then averaged to obtain the final video reconstruction results , as depicted in figure [ fig : network ] .the step of the overlapping patches was set to due to the special construction of the utilized measurement matrix , as discussed in subsection [ subsec : mask_construction ] .we consider six different architectures : * w-10 m , a simple linear mapping ( equation ) trained on samples .* fc4 - 1 m , a mlp trained on samples ( randomly selected from our samples ) .* fc4 - 10 m , a mlp trained on samples .* fc7 - 1 m , a mlp trained on samples ( randomly selected from our samples ) .* fc7 - 10 m , a mlp trained on samples .* fc7 - 10m+mmle , a mlp trained on samples which is used as an initialization to the mmle method .note that the subset of randomly selected million samples used for training fc4 - 1 m and fc7 - 1 m was the same . our test set consists of video sequences .they involve a set of videos that were used for dictionary training in , provided by the authors , as well as the `` basketball '' video sequence used by .all video sequences are unrelated to the training set ( see subsection [ subsec : training_data ] for details ) . for fair comparisons ,the same measurement mask was used in all methods , according to subsection [ subsec : mask_construction ] .all code implementations are publicly available provided by the authors .quantitative reconstruction results for all video sequences with all tested algorithms are illustrated in table [ tb : psnr_table ] and average performance is summarized in figure [ fig : allalgorithmcomparison ] .the presented metrics refer to average performance for the reconstruction of the first frames of each video sequence , using consequtive captured coded frames through the video cs measurement model of equation . in both , table [ tb : psnr_table ] and figure [ fig : allalgorithmcomparison ] , results are divided in two parts .the first part lists reconstruction performance of the tested approaches without the mmle step , while the second compares the performance of the best candidate proposed and previous methods with a subsequent mmle step . in table[ tb : psnr_table ] the best performing algorithms are highlighted for each part while the bottom row presents average reconstruction time requirements for the recovery of video frames using captured coded frame . .] video sequences between the proposed method fc7 - 10 m and the previous method ggm-4 . ]our fc7 - 10 m and fc7 - 10m+mmle yield the highest psnr and ssim values for all video sequences .specifically , the average psnr improvement of fc7 - 10 m over the gmm-1 is db . when these two methods are used to initialize the mmle algorithm , the average psnr gain of fc7 - 10m+mmle over the gmm-1+mmle is db . 
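the block - wise reconstruction with overlap averaging described at the beginning of this section can be sketched as follows ; the patch size , the step and the block - reconstruction function are illustrative assumptions .

```python
# sketch of overlap-averaged reconstruction from a single coded frame; the
# reconstruct_block argument stands in for the trained network.
import numpy as np

def reconstruct_frame(coded_frame, reconstruct_block, wp=8, t=16, step=4):
    """coded_frame: 2-d coded measurement; reconstruct_block: maps a (wp*wp,)
    patch to a (t*wp*wp,) video block (e.g. the trained MLP)."""
    H, W = coded_frame.shape
    video = np.zeros((t, H, W))
    weight = np.zeros((H, W))
    for i in range(0, H - wp + 1, step):
        for j in range(0, W - wp + 1, step):
            patch = coded_frame[i:i + wp, j:j + wp].reshape(-1)
            block = reconstruct_block(patch).reshape(t, wp, wp)
            video[:, i:i + wp, j:j + wp] += block
            weight[i:i + wp, j:j + wp] += 1.0
    return video / np.maximum(weight, 1.0)      # average overlapping estimates

# example use with a dummy block reconstructor (stands in for the trained network)
dummy = lambda patch: np.tile(patch, 16)
video = reconstruct_frame(np.random.rand(64, 64), dummy)
print(video.shape)
```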
notice also that the fc7 - 10 m achieves db higher than the combined gmm-1+mmle .the highest psnr and ssim values are reported in the fc7 - 10m+mmle method with db average psnr over all test sequences .however , the average reconstruction time for the reconstruction of frames using this method is almost two hours while for the second best , the fc7 - 10 m , is about seconds , with average psnr db .we conclude that , when time is critical , fc7 - 10 m should be the preferred reconstruction method .qualitative results of selected video frames are shown in figure [ fig : framesandinsets ] .the proposed mlp architectures , including the linear regression model , favorably recover motion while the additional hidden layers emphasize on improving the spatial resolution of the scene .one can clearly observe the sharper edges and high frequency details produced by the fc7 - 10 m and fc7 - 10m+mmle methods compared to previously proposed algorithms . due to the extremely long reconstruction times of previous methods ,the results presented in table [ tb : psnr_table ] and figure [ fig : allalgorithmcomparison ] refer to only the first frames of each video sequence , as mentioned above .figure [ fig : fullpsnrcomparison ] compares the psnr for all the frames of video sequences using our fc7 - 10 m algorithm and the fastest previous method gmm-4 , while figure [ fig : framesandinsetsfullvideo ] depicts representative snapshots for some of them . the varying psnr performance across the frames of a frame block is consistent for both algorithms and is reminiscent of the reconstruction tendency observed in other video cs papers in the literature .previously , we evaluated the proposed algorithms assuming noiseless measurements . in this subsection , we investigate the performance of the presented deep architectures under the presence of measurement noise .specifically , the measurement model of equation is now modified to where is the additive measurement noise vector .we employ our best architecture utilizing hidden layers and follow two different training schemes . in the first one , the network is trained on the samples , as discussed in subsection [ subsec : comparison ] ( i.e. , the same fc7 - 10 m network as before ) while in the second , the network is trained using the same data pairs after adding random gaussian noise to each vector .each vector was corrupted with a level of noise such that signal - to - noise ratio ( snr ) is uniformly selected in the range between db giving rise to a set of noisy samples for training .we denote the network trained on the noisy dataset as fc7n-10 m .we now compare the performance of the two proposed architectures with the previous methods gmm-4 and gmm-1 using measurement noise .we did not include experiments with the mmle counterparts of the algorithms since , as we observed earlier , the performance improvement is always related to the starting point of the mmle algorithm .figure [ fig : algorithmcomparisonnoise ] shows the average performance comparison for the reconstruction of the first frames of each tested video sequence under different levels of measurement noise while figure [ fig : algorithmcomparisonnoiseinsets ] depicts example reconstructed frames . .] as we can observe , the network trained on noiseless data ( fc7 - 10 m ) provides good performance for low measurement noise ( e.g. , db ) and reaches similar performance to gmm-1 for more severe noise levels ( e.g. 
, db ) .the network trained on noisy data ( fc7n-10 m ) , proves more robust to noise severity achieving better performance than gmm-1 under all tested noise levels . despite proving more robust to noise ,our algorithms in general recover motion favorably but , for high noise levels , there is additive noise throughout the reconstructed scene ( observe results for db noise level in figure [ fig : algorithmcomparisonnoiseinsets ] ) .such degradation could be combated by cascading our architecture with a denoising deep architecture ( e.g. , ) or denoising algorithm to remove the noise artifacts .ideally , for a specific camera system , data would be collected using this system and trained such that the deep architecture incorporates the noise characteristics of the underlying sensor .run time comparisons for several methods are illustrated at the bottom row of table [ tb : psnr_table ] .all previous approaches are implemented in matlab .our deep learning methods are implemented in caffe package and all algorithms were executed by the same machine .we observe that the deep learning approaches significantly outperform the previous approaches in order of several magnitudes .note that a direct comparison between the methods is not trivial due to the different implementations .nevertheless , previous methods solve an optimization problem during reconstruction while our mlp is a feed - forward network that requires only few matrix - vector multiplications . from figure[ fig : allalgorithmcomparison ] we observe that as the number of training samples increase the performance consistently improves .however , the improvement achieved by increasing the number of layers ( from to ) for architectures trained on small datasets ( e.g. , 1 m ) is not significant .this is perhaps expected as one may argue that in order to achieve higher performance with extra layers ( thus , more parameters to train ) more training data would be required . intuitively , adding hidden layersenables the network to learn more complex functions .indeed , reconstruction performance in our 10 million dataset is higher in fc7 - 10 m than in fc4 - 10 m . increasing the number of hidden layers furtherdid not help in our experiments as we did not observe any additional performance improvement .to the best of our knowledge , this work constitutes the first deep learning architecture for temporal video compressive sensing reconstruction .we demonstrated superior performance compared to existing algorithms while reducing reconstruction time to a few seconds . at the same time , we focused on the applicability of our framework on existing compressive camera architectures suggesting that their commercial use could be viable .we believe that this work can be extended in three directions : 1 ) exploring the performance of variant architectures such as rnns , 2 ) investigate the training of deeper architectures and 3 ) finally , examine the reconstruction performance in real video sequences acquired by a temporal compressive sensing camera .
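for completeness , the noisy - training data generation described above can be sketched as follows ; the snr range and vector shapes used here are assumptions for illustration .

```python
# sketch of corrupting a coded measurement with gaussian noise at a random SNR;
# the SNR range below (20-40 dB) is an assumed value.
import numpy as np

rng = np.random.default_rng(0)

def add_measurement_noise(y, snr_db_low=20.0, snr_db_high=40.0):
    """return y + e, where e is white gaussian noise at a random SNR (in dB)."""
    snr_db = rng.uniform(snr_db_low, snr_db_high)
    signal_power = np.mean(y ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return y + rng.normal(0.0, np.sqrt(noise_power), size=y.shape)
```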
|
in this work we present a deep learning framework for video compressive sensing . the proposed formulation enables recovery of video frames in a few seconds at significantly improved reconstruction quality compared to previous approaches . our investigation starts by learning a linear mapping between video sequences and corresponding measured frames which turns out to provide promising results . we then extend the linear formulation to deep fully - connected networks and explore the performance gains using deeper architectures . our analysis is always driven by the applicability of the proposed framework on existing compressive video architectures . extensive simulations on several video sequences document the superiority of our approach both quantitatively and qualitatively . finally , our analysis offers insights into understanding how dataset sizes and number of layers affect reconstruction performance while raising a few points for future investigation .
|
the design of today s and future networks is characterised by a paradigm shift , from a host - centric communication architecture , towards an information centric networking ( icn ) one .the focus is on information itself , and how this can be best accessed . within this setting , network nodes are equipped with storage capacity , where data objects can be temporarily cached . in this way , information can be made available close to the user , it can be retrieved with minimum delay , and possibly with a quality adaptable to the users preferences , as envisioned for example in cases of multimedia files .the principal benefit of the approach is the reduction of traffic flow at the core network , by serving demands from intermediate nodes .this further results in congestion avoidance and a better exploitation of the backbone resources .the edge - nodes constitute a very important part of the architecture , since it is where the users directly have access to .when these nodes are equipped with storage capability , so that users can retrieve their data objects directly from them , average download path length can be minimised .caching at the edge definitely offers a potential increase in performance of icns , it comes however at the cost of a distributed implementation and management over a very vast area , where edge nodes are placed .if these nodes are chosen to be the base stations and small cells of a heterogeneous network , it is fairly clear that thousands of nodes within each city are considered , a number which increases by a factor of hundred ( or more ) if user equipment and other devices are also included as having storage potential .the large number of nodes , together with the relatively small memory size installed on each one , creates big challenges related to their cache management .we consider the wireless edge of a content centric network , which consists of a set of transmitting nodes taking fixed positions on a planar area , and a set of users dynamically arriving at this area and asking for service .the set of transmitters can refer to base stations ( bss ) of a cellular network , small stations of heterogeneous networks , wifi hotspots , or any other type of wireless nodes that can provide access to an arriving user who demands for a specific data object .a user can be covered by multiple of these nodes , but he / she will _choose only one _ to be served from .all nodes are equipped with memory of size objects , which offers the possibility to cache a fraction of the existing data .when the user s request is found in the cache of some covering station , then the user is served directly by this one .otherwise , the request is retrieved from the core network .an important question is _ how to maximise the hit probability _ , by managing the available edge - memories .the _ hit probability _ , is defined as the probability that a user will find her / his demand cached in the memory of one of the cells she / he is covered from . by _ managing _ , we mean to decide on : which objects to install in each cache ?how to update the cache inventories over time ?given the possibility for multi - coverage , cache management should target two , not necessarily conflicting , goals : on the one hand make popular objects , requested by the large bulk of demands , generously available at many geographical locations . 
on the other , make good use of multi - coverage , by filling the memory caches in a way that a user has access to as many different objects as possible , so that also less popular contents are served directly by the caches .additionally , since - as explained above - wireless nodes ( bss ) are scattered over a very large area and are of considerable number , related operations should be distributed as in , and centralised solutions should be avoided . there exists a variety of cache placement policies that apply to _ single caches _, when no coverage overlap is considered .these include the least frequently used ( lfu ) , the least recently used ( lru ) , and their variations .specifically lru has been extensively studied and approximations to the hit probability have been proposed , like the one from dan and towsley .che et al proposed in 2002 a decomposition and a simple approximation for the single - lru under the independent reference model ( irm ) , which results in an analytical formula for the hit probability with excellent fit to simulations .this fitness is theoretically explained by fricker et al in .application of the che approximation under more general traffic conditions , to variations of the lru for single caches as well as networks of caches , is provided by martina et al . in that work , andfurther in elayoubi and roberts , it is shown that for mobile networks , application of pre - filtering improves the performance of lru. there can be strong dependencies between content demands , objects can have a finite lifespan , and new ones can appear anytime .these phenomena constitute the _ temporal locality _ , not captured from the irm model . such type of traffic was studied for lru initially by jelenkovi and radovanovi , and recently using also statistics from user measurements , by traverso et al and olmos et al .the problem of optimal content placement , when network areas are covered by more than one station has also been recently studied in the literature .a number of pro - active caching policies have been proposed , where the cache inventories are pre - filled by content , based on knowledge of the content popularity distribution and additional network - related information .golrezaei et al find the optimal content placement that maximises hit probability , when full network information ( popularity , node and user positions ) is available .they formulate a binary optimisation problem and propose approximation and greedy algorithms for its solution . using reduced information ( content popularity , coverage probability ) , baszczyszyn and giovanidis provide a randomised strategy that maximises the hit probability .poularakis et al . formulate and solve the joint content placement and user association problem that maximises the fraction of content served by the caches of the edge - nodes .araldo et al . propose joint cache sizing / object placement / path selection policies that consider also the cost of content retrieval .recently , naveen et al . have formulated the problem in a way to include the bandwidth costs , and have proposed an online algorithm for its solution .further distributed replication strategies that use different system information are proposed by borst et al , and also by leconte et al .the problem of optimal request routing and content caching for minimum average content access delay in heterogeneous networks is studied by dehghan et al in . 
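as an illustration of the che approximation mentioned above , the following sketch computes the hit probability of a single lru cache under irm by solving for the characteristic time ; the zipf popularity , catalogue size and cache size are illustrative assumptions .

```python
# Che approximation for a single LRU cache under IRM: the characteristic time
# t_c solves sum_i (1 - exp(-lambda_i * t_c)) = C, and the per-object hit
# probability is 1 - exp(-lambda_i * t_c).
import numpy as np
from scipy.optimize import brentq

def che_hit_probability(popularity, cache_size, total_rate=1.0):
    lam = total_rate * popularity                        # per-object request rates
    occupancy = lambda tc: np.sum(1.0 - np.exp(-lam * tc)) - cache_size
    t_c = brentq(occupancy, 1e-9, 1e9)                   # characteristic time
    hit_per_object = 1.0 - np.exp(-lam * t_c)
    return np.sum(popularity * hit_per_object)

F, C, a = 10_000, 100, 0.8                               # catalogue, cache size, zipf exponent (assumed)
p = np.arange(1, F + 1, dtype=float) ** -a
p /= p.sum()
print("che hit probability:", che_hit_probability(p, C))
```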
the cache management problem for cellular networkshas also been approached using point process modelling of the network node positions .bastug et al . find the outage probability and content delivery rate for a given cache placement .furthermore , tamoor - il - hassan et al find the optimal station density to achieve a given hit probability , using uniform replication .this work has the following contributions to the subject of caching at the network edge . it takes geometry explicitly into consideration for the analysis of caching policies . specifically, it investigates a three - dimensional model ( two - dimensional space and time ) .in this , stations have a certain spatial distribution ( modelled by point processes ) and coverage areas may overlap , allowing for multi - coverage .furthermore , it is a dynamic model , where users with demands arrive over time at different geographic locations ( sec .[ sec:3_network ] ) . it introduces ( sec . [ sec:2_cache ] ) a family of decentralised caching policies , which exploit multi - coverage , called _ spatial multi - lru_. specifically , two variations of this family are studied , namely multi - lru - one and -all .these policies constitute an extension of the classical single - lru , to cases where objects can be retrieved by more than one cache .the work investigates how to best choose the actions of update , insertion and eviction of content in the multiple caches and how this can be made beneficial for the performance . the hit probability of the new policies , is analysed using the che approximation ( sec .[ sec:4_che ] ) .two additional approximations made here , namely the cache independence approximation ( cia ) for multi - lru - one , and the cache similarity approximation ( csa ) for multi - lru - all , allow to derive simple analytical formulas for the spatial dynamic model , under irm traffic . verification for the che - like approximations and further comparison of the multi - lru policies , with other ones from the literature are provided in sec .[ sec:5_simul ] by simulations .the comparison considers policies both with distributed and with centralised implementation , that use various amount of network information . for irm ,the multi - lru - one outperforms the -all variation . in sec [ sec:5_4_temploc ]the policies are evaluated for traffic with temporal locality , where it is shown that multi - lru - all can perform better than -one .caching policies can profit from the availability of system information .such information can be related to user traffic , node positions and coverage areas , as well as the possibility for a bs to have knowledge over the cache content of its neighbours . in general , _ the more the available information , the higher the hit - performance _ , if the management policy is adapted to it . in general , we can group caching policies as follows .* ( i ) poq ( policies with per - request updates ) : * for these , updates of the cache content are done on a per - request basis and depend on whether the requested object is found or not .information on file popularity is _ not available_. 
neither is information over the network structure .the actions are taken _ locally _ at each node , and are triggered by the user , in other words these policies do not require centralised implementation .the lru policies belong to this category .- _ lru : _ it leaves in each cache the most recently demanded objects .the first position of the cache is called most recently used ( mru ) and the last one least recently used ( lru ) .when a new demand arrives , there are two options .( a. _ update _ ) the object demanded is already in the cache and the policy updates the object order by moving it to the mru position . or , ( b. _ insertion _ ) the object is not in the cache and it is inserted as new at the mru position , while the object in the lru position is _ evicted_. in this work we will call this policy , _ single - lru_. - _ q - lru : _ it is a variation of the single - lru , with a difference in the insertion phase . when the object demanded is not in the cache , it is inserted with probability .the eviction and order updates are the same as before . *( ii ) pop ( policies with popularity updates ) : * here , exact information over the content popularities is available .these are static policies , for which the content of caches is updated in an infrequent manner , depending on the popularity changes of the catalogue .the following three belong to this category .- _ lfu : _ the policy statically stores in each cache the most popular contents from the set of all existing ones .lfu is known to provide optimal performance for a single cache under the independent reference model ( irm ) .the next two pop policies are solutions of optimisation problems , that require a - priori knowledge of more system information additional to popularity .- _ greedy full information ( gfi ) : _ the policy is proposed in . it assumes a - priori central knowledge of all station and user positions , their connectivity graph , and the content popularities . 
using this, it greedily fills the cache memories of all stations , so that at each step of the iteration , insertion of an object at a cache is the most beneficial choice for the objective function ( hit probability ) .- _ probabilistic block placement ( pbp ) : _ this policy is found in and is similar to the gfi , with the difference that it requires less system information : the coverage number probability and the content popularities .the policy randomly assigns blocks of contents to each cache , in a way that the probability of finding a specific content somewhere in the network comes from the optimal solution of a hit maximisation problem .pbp has considerably lower computational complexity compared to gfi .[ [ section ] ] we propose here a novel family of distributed cache management poq policies , that can profit from multi - coverage .we name these _ spatial multi - lru _ policies and are based on the single - lru policy presented previously .the idea is that , since a user can check all the caches of covering bss for the demanded object , and download it from any one that has it in its inventory , cache updates and object insertions can be done in a more efficient way than just applying single - lru independently to all caches .the multi - lru policies take into account , whether a user has found the object in _ any _ of the covering stations , and each cache adapts its action based on this information .most importantly , it is the user who triggers a cache s update / insertion action , and in this way she / he indirectly informs each cache about the inventory content of its neighbours .we propose here variations of the multi - lru family , that differ in the number of inserted contents in the network , after a missed content demand .differences appear also in the update phase . * multi - lru - one : * action is taken only in _one _ cache out of .( a. update ) if the content is found in a non - empty subset of the caches , only one cache from the subset is updated .( b. insertion ) if the object is not found in _ any _ cache , it is inserted only in one .this one can be chosen as the cache closest to the user , or a random cache , or one from some other criterion .( in this work , we will use the choice of the _ closest _ node , to make use of the spatial independence of poisson traffic ) . * multi - lru - all : * insertion action is taken in _ all _ caches .( a. update ) if the content is found in a non - empty subset of the caches , all caches from this subset are updated .( b. insertion ) if the object is not found in _ any _ cache it is inserted in all . we can also propose another variation based on q - lru . * q - multi - lru - all : * this variation differs from the multi - lru - all only in the insertion phase .the object is inserted in each cache with probability .the motivation behind the different variations of the multi - lru policies is the following .when a user has more than one opportunity to be served due to multi - coverage , she / he can benefit from a larger cache memory ( the sum of memory sizes from covering nodes . 
here we assume that the user is satisfied as long as she / he is covered , without preference over a specific station ) . in this setting , the optimal insertion of new content and update actions are not yet clear . if multi - lru - one is applied , a single replica of the missed content is left down in one of the caches , thus favouring diversity among neighbouring caches . if multi - lru - all is used , replicas are left down , one in each cache , thus spreading the new content over a larger geographic area ( the union of covering cells ) , at the cost of diversity . q - multi - lru - all is in - between the two , leaving down a smaller than number of replicas . a - priori , it is unclear which one will perform better with respect to hit probability . the performance largely depends on the type of incoming traffic . for fixed object catalogue and stationary traffic , diversity in the cache inventories can be beneficial , whereas for time - dependent traffic with varying catalogue , performance can be improved when many replicas of the same object are available , before its popularity perishes . in this work the main focus will be on spatial irm input traffic , but a short evaluation of the policies under traffic with temporal locality will also be provided . for the analysis , the positions of transmitters coincide with the atoms from the realisation of a 2-dimensional _ stationary _ point process ( pp ) , , indexed by , with intensity ( to make stationary ) . its intensity is equal to . there are two different planar areas ( _ cells _ ) associated with each atom ( bs ) . the first one is the _ voronoi cell _ . given a pp , the voronoi tessellation divides the plane into _ non - overlapping _ planar subsets , each one associated with a single atom . a planar point belongs to , if atom is the closest atom of the process to . in other words , . the second one is the _ coverage cell _ . each transmitter node has a possibly random area of wireless coverage associated with it . when users arrive inside the coverage cell of they can be served by it , by downlink transmission . in general different from . coverage cells can overlap , so that a user at a random location may be covered by multiple bss , or may not be covered at all . the total coverage area from all bss with their coverage cells is ( see , ch.3 ) . due to stationarity of the pp , any planar location can be chosen as reference for the performance evaluation of the wireless model . this is called the _ typical location _ , and for convenience we use the cartesian origin . because of the random realisation of the bs positions and the random choice of the reference location , the number of bs cells covering is also random . the _ coverage number _ ( as in , ) is the number of cells that covers the typical location . it is a random variable ( r.v . ) that depends on the pp and the downlink transmission scheme . it has mass function $p_m$ , $m = 0 , 1 , \ldots , M$ , where $p_m$ is the probability that the typical location is covered by exactly $m$ cells . the choice of the coverage model determines the shape of the coverage cells and consequently the values of the coverage probabilities . in this work the choice of is left to be general . for the evaluation , specific models are considered . special cases include : ( 1 ) the _ model _ and ( 2 ) the _ or boolean model_.
both models consider the coverage cell of , as the set of planar points for which the received signal quality from exceeds some threshold value .the motivation is that t is a predefined signal quality , above which the user gets satisfactory quality - of - service .the difference between these two is that the model refers to networks with interference ( e.g. when bss serve on the same ofdma frequency sub - slot ) , whereas the model , to networks that are noise - limited ( e.g. by use of frequency reuse , neighbouring stations do not operate on the same bandwidth ) . for the boolean modelthe is a ball of fixed radius centred at .it coincides with the model , when no randomness of signal fading over the wireless channel is considered ( or when an equivalence - type argument is used to transform the analysis of networks with random fading into equivalent ones without it , as in ) .a more detailed presentation of the different coverage models can be found in appendix [ app : a ] .each user served from the network is assumed to arrive independently at some planar location , stay there during service and then leave .we model the users by a _ homogeneous space - time _ ppp in , , where takes values on the euclidean plane , and the time of arrival occurs at some point on the infinite time axis .the ppp intensity is in ] .the time between two consecutive arrivals in is exponentially distributed with mean ] , which results from an independent thinning of .the way we have modelled user traffic ( using irm ) ignores temporal and/or spatial correlations in the sequence of user requests , since it assumes independence in all dimensions . in realityhowever , when an object is requested by a user , it is more likely to be requested again at some near future in a neighbouring location .this is called time - locality , and space - locality .the presented pp model has the flexibility to be adapted to such traffic behaviour. we will not give much details about this type of traffic in this paper ( the reader is referred to the related references ) .some first simulations for the policies under study are however provided in sec .[ sec:5_4_temploc ] . further research on this is the subject of our ongoing work .we consider the case where a cache memory of size is installed and available on each transmitter node of .the memory inventory of node at time is denoted by and is a ( possibly varying over time ) subset of , with number of elements not greater than .the accuracy of the approximations in the two - cache network is shown in fig.[che2cache ] . the che - cia approximation for multi - lru - one - although not accurate - performsreasonably well in the two - cache network .the che - csa approximation for the multi - lru - all , is exact .
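a minimal simulation sketch of the multi - lru policies studied in this paper is given below ; it uses a zipf popularity , draws the set of covering stations independently for each request ( thus ignoring the spatial correlations of the actual model ) , and all numerical values are illustrative assumptions .

```python
# simulation sketch of multi-LRU-one / multi-LRU-all under IRM traffic; the
# coverage of each user is drawn i.i.d. per request for simplicity.
import numpy as np
from collections import OrderedDict

rng = np.random.default_rng(0)

def lru_touch(cache, obj, capacity):
    """insert or refresh obj in an OrderedDict used as an LRU list."""
    if obj in cache:
        cache.move_to_end(obj)            # update: move to the MRU position
    else:
        cache[obj] = True                 # insertion at the MRU position
        if len(cache) > capacity:
            cache.popitem(last=False)     # eviction of the LRU object

def simulate(policy, n_stations=20, capacity=50, catalogue=2000, zipf_a=0.8,
             mean_coverage=2.0, n_requests=100_000):
    p = np.arange(1, catalogue + 1, dtype=float) ** -zipf_a
    p /= p.sum()
    requests = rng.choice(catalogue, size=n_requests, p=p)   # IRM (Zipf) demands
    coverage = np.minimum(rng.poisson(mean_coverage, size=n_requests), n_stations)
    caches = [OrderedDict() for _ in range(n_stations)]
    hits = 0
    for obj, m in zip(requests, coverage):
        if m == 0:
            continue                      # uncovered user, counted as a miss
        covering = rng.choice(n_stations, size=m, replace=False)
        holders = [s for s in covering if obj in caches[s]]
        if holders:
            hits += 1
            targets = holders[:1] if policy == "one" else holders
        else:
            targets = covering[:1] if policy == "one" else covering
        for s in targets:
            lru_touch(caches[s], obj, capacity)
    return hits / n_requests

for policy in ("one", "all"):
    print(policy, simulate(policy))
```

under irm traffic such a simulation typically shows the diversity - favouring multi - lru - one ahead of multi - lru - all , consistent with the comparison reported in this paper .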
|
this article introduces a novel family of decentralised caching policies , applicable to wireless networks with finite storage at the edge - nodes ( stations ) . these policies are based on the _ least - recently - used _ replacement principle , and are , here , referred to as spatial _ multi - lru_. based on these , cache inventories are updated in a way that provides content diversity to users who are covered by , and thus have access to , more than one station . two variations are proposed , namely the _ multi - lru - one _ and _ -all _ , which differ in the number of replicas inserted in the involved caches . by introducing spatial approximations , we propose a che - like method to predict the hit probability , which gives very accurate results under the independent reference model ( irm ) . it is shown that the performance of multi - lru increases the more the multi - coverage areas increase , and it approaches the performance of other proposed centralised policies , when multi - coverage is sufficient . for irm traffic multi - lru - one outperforms multi - lru - all , whereas when the traffic exhibits temporal locality the -all variation can perform better . [ wireless communication , network topology , distributed networks ]
|
the concept of persistent excitation is well - known in the control community . since the pioneering works of anderson and moore , this concept has been successfully used in a large variety of fields of application going from economics , , to adaptive or learning control , , , , or mechanical engineering and robotics , , and . in this paper , we use a persistently excited adaptive tracking control in the multidimensional arx framework . it allows us to avoid the strong controllability assumption recently proposed by bercu and vazquez , . more precisely , we shall establish the almost sure convergence for both least squares ( ls ) and weighted least squares ( wls ) estimators of the unknown parameters of the arx model . the asymptotic normality as well as a law of iterated logarithm are also provided . consider the -dimensional autoregressive process with adaptive control of order , for short , given for all by where stands for the shift - back operator and and are the system output , input and driven noise , respectively . the polynomials and are given for all by where and are unknown square matrices of order and is the identity matrix . relation ( [ arx ] ) may be rewritten in the compact form where the regression vector with and the unknown parameter is given by in all the sequel , we shall assume that the driven noise is a martingale difference sequence adapted to the filtration where stands for the -algebra of the events occurring up to time . moreover , we also assume that , for all $n \geq 0$ , $\mathbb{E}[ \varepsilon_{n+1} \varepsilon_{n+1}^{T} \mid \mathcal{F}_{n} ] = \Gamma$ a.s . where $\Gamma$ is a positive definite deterministic covariance matrix . in addition , we suppose that the driven noise satisfies the strong law of large numbers i.e. if then the sequence converges to a.s . that is the case if , for example , is a white noise or if has a finite conditional moment of order . + the paper is organized as follows . section deals with the parameter estimation and the persistently excited adaptive tracking control . section is devoted to the introduction of the schur complement approach together with some linear algebra calculations . in section , we propose some useful almost sure convergence properties together with a central limit theorem ( clt ) and a law of iterated logarithm ( lil ) for both ls and wls estimators . some numerical simulations are also provided in section . finally , a short conclusion is given in section . in the arx tracking framework , we must deal with two objectives simultaneously . on the one hand , it is necessary to estimate the unknown parameter . on the other hand , the output has to track , step by step , a predictable reference trajectory . first , we focus our attention on the estimation of the parameter . we shall make use of the wls algorithm which satisfies , for all , where the initial value may be arbitrarily chosen and where the identity matrix with is added in order to avoid the useless invertibility assumption . the choice of the weighted sequence is crucial .
if we find the standard ls estimator , while if , we obtain the wls estimator introduced by bercu and duflo , .next , we are concern with the choice of the adaptive control sequence .the crucial role played by is to regulate the dynamic of the process by forcing to track a predictable reference trajectory .we propose to make use of the persistently excited adaptive tracking control given , for all , by where is an exogenous noise of dimension , adapted to , with mean and positive definite covariance matrix .in addition , we assume that is independent of , of , and of the initial state of the system . moreover , we suppose that satisfies the strong law of large numbers . consequently , if then the sequence converges to a.s . by substituting ( [ control ] ) into ( [ mod ] ) , we obtain the closed - loop system where the prediction error . furthermore , we assume in all the sequel that the reference trajectory satisfies finally , let be the average cost matrix sequence defined by the tracking is said to be residually optimal if converges to a.s .in all the sequel , we shall make use of the well - known causality assumption on .more precisely , we assume that for all with in other words , the polynomial only has zeros with modulus . consequently ,if is strictly less than the smallest modulus of the zeros of , then is invertible in the ball with center zero and radius and is a holomorphic function ( see e.g. page 155 ) .hence , for all with , we have where all the matrices can be explicitly calculated via the recursive equations and , for all in a similar way , for all such that , we shall denote all the matrices may be explicitly calculated as functions of the matrices and . as a matter of fact , for all for all , denote by be the square matrix of order where , for all , with .in addition , let be the symmetric square matrix of order for all , let with and denote by the rectangular matrix of dimension given , if , by while , if , by finally , let be the block diagonal matrix of order denote by the symmetric square matrix of order this lemma is the keystone of all our asymptotic results .+ [ mainlemma ] let be the schur complement of in if is causal , then and are invertible and + the proof is given in appendixa. one can see the usefulness of persistent excitation in arx tracking .as we make use of a persistently excited adaptive tracking control given , it is possible to get ride of the strong controllability assumption recently proposed by bercu and vazquez , . 
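to illustrate the estimation and control scheme described above , the following sketch simulates a simplified two - dimensional model $x_{n+1} = a\, x_n + u_n + \varepsilon_{n+1}$ , in which the regression reduces to the previous state ; it uses the unweighted ( ls ) recursion and a certainty - equivalence tracking control perturbed by an exogenous dither , and all numerical values are illustrative assumptions .

```python
# simulation sketch of persistently excited adaptive tracking for a simplified
# model x_{n+1} = A x_n + u_n + e_{n+1}; the LS case (unit weights) is used.
import numpy as np

rng = np.random.default_rng(0)
d, n_steps = 2, 20_000
A = np.array([[0.4, 0.0], [0.0, 0.5]])          # true (unknown) parameter, assumed values
sigma_eps, sigma_xi = 1.0, 1.0                  # driven and exogenous noise levels

A_hat = np.zeros((d, d))                        # parameter estimate
S = np.eye(d)                                   # regularized information matrix
x = np.zeros(d)
x_ref = np.zeros(d)                             # reference trajectory (identically zero)

for n in range(n_steps):
    # persistently excited certainty-equivalence tracking control
    xi = sigma_xi * rng.standard_normal(d)
    u = x_ref - A_hat @ x + xi
    x_next = A @ x + u + sigma_eps * rng.standard_normal(d)

    # recursive least-squares update of the estimate of A
    S += np.outer(x, x)
    A_hat += np.outer(x_next - u - A_hat @ x, x) @ np.linalg.inv(S)
    x = x_next

print("estimation error:", np.linalg.norm(A_hat - A))
```

without the dither term the excitation of the regressor may degenerate and the estimate need not converge , which is precisely the issue the persistently excited control is designed to avoid .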
on the other hand , we will see in the next section that the tracking is not optimal but it is residually optimal .it is necessary to make a compromise between estimation and tracking optimality .our first result concerns to the a.s .asymptotic properties of the ls estimator .[ aspls ] assume that is causal and that has finite conditional moment of order .then , for the ls estimator , we have where the limiting matrix is given by ( [ deflambda ] ) .in addition , the tracking is residually optimal finally , converges almost surely to the proof is given in appendixb .our second result is related to the almost sure properties of the wls estimator .[ aspwls ] assume that is causal .in addition , suppose that either is a white noise or has finite conditional moment of order .then , for the wls estimator , we have where the limiting matrix is given by ( [ deflambda ] ) .in addition , the tracking is residually optimal finally , converges almost surely to the proof is given in appendixc .finally , we present the clt and the lil for both ls and wls estimators .[ cltlil ] assume that is causal and that and have both finite conditional moments of order .in addition , suppose that satisfies for some then , the ls and wls estimators share the same central limit theorem where the inverse matrix is given by ( [ invlambda ] ) and the symbol stands for the matrix kronecker product .in addition , for any vectors and , they also share the same law of iterated logarithm in particular , where and are the minimum and the maximum eigenvalues of . the proof is given in appendixd .the goal of this section is to illustrate via some numerical experiments the main results of this paper . in order to keep this section brief ,we consider a causal model in dimension with and .moreover , the reference trajectory is chosen to be identically zero and the driven and exogenous noises and are gaussian white noises . finally our numerical simulations are based on realizations of sample size .consider the model where first of all , it is easy to see that this process is not strongly controllable , , because . consequently , if we use an adaptive tracking control without persistent excitation , then only the matrix and the first diagonal term of the matrix can be properly estimated as one can see in figure 1 .+ next , we make use of the persistently excited adaptive tracking control given by for all , we have and which clearly implies that since the matrices and are both diagonal , we find that consequently , we obtain that therefore , the limiting matrix given by ( [ deflambda ] ) is it is not hard to see that .one can observe in figure 2 the almost sure convergence of the ls estimator to the four diagonal coordinates of .one can conclude that performs very well in the estimation of .+ + figure 3 shows the clt for the four coordinates of one can realize that each component of has distribution as expected .via the use of a persistently excited adaptive tracking control , we have shown that it was possible to get ride of the strong controllability assumption recently proposed by bercu and vazquez , .we have established the almost sure convergence for the ls and wls estimators in the multidimensional arx framework . in addition, we have shown the residual optimality of the adaptive tracking . 
moreover , both ls and the wls estimators share the same clt and lil .we hope that similar analysis could be extended to the armax framework .proof of lemma [ mainlemma ] let and be the infinite - dimensional diagonal square matrices given by moreover , denote by and the infinite - dimensional rectangular matrices with rows and an infinite number of columns , respectively given , if , by while , if , by furthermore , let and denote by the block diagonal matrix of order one can observe that is a positive definite matrix . finally ,if , denote by the matrix with rows and columns given by while , if , the upper triangular square matrix of order given by on the one hand , if , we can deduce from ( [ schur ] ) after some straightforward , although rather lengthy , linear algebra calculations that we shall focus our attention on the last term in ( [ schurdec ] ) . since the matrix is positive definite , it immediately follows that is also positive definite .consequently , the schur complement is invertible . on the other hand ,if , we can see from ( [ schur ] ) that where is the symmetric square matrix of order where stands for the zeros matrix of order and is the block diagonal matrix of order taking into account the fact that and are both positive definite matrices , we obtain that is also positive definite which implies that is invertible . finally , we infer from ( [ deflambda ] ) that consequently , we deduce from ( [ decdet ] ) that is invertible and formula ( [ invlambda ] ) can be found in page 18 , which completes the proof of lemma [ mainlemma]. of theorem [ aspls ] in order to prove theorem [ aspls ] , we shall make use of the same approach than bercu or guo and chen .first of all , we recall that for all it follows from ( [ eqbasis ] ) together with the strong law of large numbers for martingales ( see e.g. corollary 1.3.25 of ) that a.s .moreover , by theorem 1 of or lemma 1 of , we have where . hence ,if has finite conditional moment of order , we can show by the causality assumption on the matrix polynomial together with ( [ sumpif ] ) that a.s . for all .in addition , let and .it is well - known that and tends to zero a.s .consequently , as we infer from from ( [ sumpif ] ) that therefore , we obtain from ( [ ct ] ) , ( [ eqbasis ] ) and ( [ spi ] ) that furthermore , as is causal , we find from relation ( [ arx ] ) that which implies by ( [ sx ] ) that it remains to put together the two contributions ( [ sx ] ) and ( [ su ] ) to deduce that a.s . leading to a.s . hence , it follows from ( [ spi ] ) that consequently , we obtain from ( [ ct ] ) , ( [ eqbasis ] ) , ( [ sp ] ) and the strong law of large numbers for martingales ( see e.g. theorem 4.3.16 of ) that and , for all , which implies that where is given by ( [ defl ] ) .furthermore , it follows from ( [ arx ] ) , ( [ eqbasis ] ) and ( [ sbu ] ) that for all where consequently , we deduce from the cauchy - schwarz inequality together with ( [ ct ] ) , ( [ sp ] ) , and the strong law of large numbers for martingales ( see e.g. theorem 4.3.16 of ) that for all which ensures that where is given by ( [ defh ] ) . via the same lines, we also find that therefore , it follows from the conjunction of ( [ cvgx ] ) , ( [ cvgu ] ) and ( [ cvgxu ] ) that where the limiting matrix is given by ( [ deflambda ] ) .thanks to lemma [ mainlemma ] , the matrix is invertible .this is the key point for the rest of the proof . on the one hand, it follows from ( [ cvgfin ] ) that , a.s . 
which implies that tends to zero a.s .hence , by ( [ sumpif ] ) , we find that on the other hand , we obviously have from ( [ eqbasis ] ) consequently , we immediately obtain the tracking residual optimality ( [ th12 ] ) from ( [ pifin ] ) and ( [ costpi ] ) .furthermore , by a well - known result of lai and wei on the ls estimator , we also have hence ( [ th14 ] ) clearly follows from ( [ cvgfin ] ) and ( [ rls ] ) , which completes the proof of theorem [ aspls ] . of theorem [ aspwls ] by theorem 1 of , we have where the coefficient .then , as the weighted sequence is given by with , we clearly have a.s .hence , it follows from ( [ sumpifa ] ) together with kronecker s lemma given e.g. by lemma 1.3.14 of that therefore , we obtain from ( [ ct ] ) , ( [ eqbasis ] ) , ( [ spia ] ) and the strong law of large numbers for martingales ( see e.g. theorem 4.3.16 of ) that in addition , we also deduce from the causality assumption on the matrix polynomial that consequently , we immediately infer from ( [ sxa ] ) and ( [ sua ] ) that so a.s .hence , ( [ spia ] ) implies that proceeding exactly as in appendix a , we find from ( [ spa ] ) that via an abel transform , it ensures that we obviously have from ( [ cvgfina ] ) that tends to zero a.s .consequently , we obtain from ( [ sumpifa ] ) and kronecker s lemma that then , ( [ th22 ] ) clearly follows from ( [ costpi ] ) and ( [ pifina ] ) .finally , by theorem 1 of hence , we obtain ( [ th23 ] ) from ( [ cvgfina ] ) and ( [ rwls ] ) , which completes the proof of theorem [ aspwls ] . of theorem [ cltlil ] first of all , it follows from ( [ mod ] ) and ( [ wls ] ) that for all where we now make use of the clt for multivariate martingales given e.g. by lemma c.1 of , see also . on the one hand , for the ls algorithm , we clearly deduce ( [ th31 ] ) from convergence ( [ th11 ] ) and decomposition ( [ decwls ] ) . on the other hand , for the wls algorithm, we also infer ( [ th31 ] ) from convergence ( [ th21 ] ) and ( [ decwls ] ) .next , we make use of the lil for multivariate martingales given e.g. by lemma c.2 of , see also , .for the ls algorithm , since has finite conditional moment of order , we obtain from chow s lemma given e.g. by corollary 2.8.5 of that for all the exogenous noise shares the same regularity in norm than which means that for all consequently , as the reference trajectory satisfies ( [ regnorm ] ) , we deduce from ( [ eqbasis ] ) together with ( [ pifin ] ) , ( [ normeps ] ) and ( [ normxi ] ) that for some furthermore , it follows from ( [ sbu ] ) and ( [ normx ] ) that hence , we clearly obtain from ( [ normx ] ) and ( [ normu ] ) that therefore , as , ( [ normphi ] ) immediately implies that finally , lemma c.2 of together with convergence ( [ th11 ] ) and ( [ decwls ] ) lead to ( [ th32 ] ) .the proof for the wls algorithm is left to the reader because it follows essentially the same arguments than the proof for the ls algorithm .it is only necessary to add the weighted sequence and to make use of convergence ( [ th21 ] ) . . aggelogiannaki , p. doganis and h. sarimveis , an adaptive model predictive control configuration for production - inventory systems , internationa journal of production economics , vol .165 - 178 , 2008 ,
|
the usefulness of persistent excitation is well - known in the control community . thanks to a persistently excited adaptive tracking control , we show that it is possible to avoid the strong controllability assumption recently proposed in the multidimensional arx framework . we establish the almost sure convergence for both least squares and weighted least squares estimators of the unknown parameters . a central limit theorem and a law of iterated logarithm are also provided . all of this asymptotic analysis is related to the schur complement of a suitable limiting matrix .
|
the well - known hodgkin - huxley model of the squid giant axon ( ) represented a huge leap forward compared to earlier models of excitable systems built from abstract sets of equations or from electrical circuits including non - linear components , e.g. .the pioneering work of the group of denis noble made the transition from neuronal excitability models , characterized by na+ and k+ conductances with fast gating kinetics , to cardiomyocyte electrophysiology models , a field expanding steadily for over five decades ( ) .nowadays , complex models accurately reproducing transmembrane voltage changes as well as ion concentration dynamics between various subcellular compartments and buffering systems are incorporated into detailed anatomical models of the entire heart ( ) .the luo - rudy i model of isolated guinea pig ventricular cardiomyocyte ( ) was developed in the early 1990s starting from the beeler - reuter model ( ) .it includes more recent experimental data related to gating and permeation properties of several types of ion channels , obtained in the late 1980s with the advent of the patch - clamp technique ( ) .the model comprises only three time and voltage - dependent ion currents ( fast sodium current , slow inward current , time - dependent potassium current ) plus three background currents ( time - independent and plateau potassium current , background current ) , their dynamics being described by hodgkin - huxley type equations .this apparent simplicity , compared to more recent multicompartment models , renders it adequate for mathematical analysis using methods of linear stability and bifurcation theory . nowadays , thereexist numerous software packages for the numerical study of finite - dimensional dynamical systems , for example matcont , cl , cl ( , ) , auto . in , , , ,the periodic boundary value problems used to locate limit cycles are approximated using orthogonal collocation method .finite differences method is also considered . in thispaper , limit cycles are obtained for the dynamical system associated to the luo - rudy i model by using finite element method time approximation ( fem ) .the mathematical problem governing the membrane excitability of a ventricular cardiomyocyte , according to the luo - rudy i model ( ) , is a cauchy problem for the system of first order ordinary differential equations where , {i} ] , {i} ] , {i} ] , , , , , , , parameters , , , , , , {0} ] , {0} ] , , , , constants , functions , , , , , , , , , , default values of parameters and initial values of variables in the luo - rudy i model , the reader is referred to .the reader is also referred to for the continuity of the model , and to for the treatment of the vector field singularities . 
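as a purely illustrative aside added to this text , the following python sketch integrates a cauchy problem of the same general type for the fitzhugh - nagumo caricature of an excitable membrane , a two - variable system rather than the eight - variable luo - rudy i system considered here ; the stimulus amplitude and the parameter values are standard textbook choices and are not related to the model above .

```python
import numpy as np
from scipy.integrate import solve_ivp

# fitzhugh - nagumo model: v is a fast voltage-like variable, w a slow recovery variable
def fhn(t, y, i_ext=0.5, a=0.7, b=0.8, eps=0.08):
    v, w = y
    dv = v - v**3 / 3.0 - w + i_ext
    dw = eps * (v + a - b * w)
    return [dv, dw]

sol = solve_ivp(fhn, (0.0, 300.0), [-1.0, 1.0], max_step=0.5)
v_late = sol.y[0][sol.t > 200.0]
print("late-time v range:", v_late.min(), v_late.max())   # sustained oscillations for i_ext = 0.5
```

for this stimulus value the rest state is unstable and the trajectory settles on a stable limit cycle born from a hopf bifurcation , which is the kind of object the extended system and the continuation procedure of the present paper are designed to compute for the cardiomyocyte model .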
is of class on the domain of interest .we performed the study of the dynamical system associated with the cauchy problem ( [ bichircl_e01 ] ) , ( [ bichircl_e02 ] ) by considering only the parameter and fixing the rest of parameters .denote and the vector of the fixed values of .let , , .consider the dynamical system associated with the cauchy problem ( [ bichircl_e01 ] ) , ( [ bichircl_e06 ] ) , where the equilibrium points of this problem are solutions of the equation the existence of the solutions and the number were established by graphical representation in , for the domain of interest .the equilibrium curve ( the bifurcation diagram ) was obtained in , via an arc - length - continuation method ( ) and newton s method ( ) , starting from a solution obtained by solving a nonlinear least - squares problem ( ) for a value of for which the system has one solution . in ,the results are obtained by reducing ( [ bichircl_e07 ] ) to a system of two equations in {i}) ] in subintervals ] .let us approximate the spaces and by the spaces \rightarrow \mathbb{r } ; \ , v \in c[0,1 ] , \ ,v(0)=v(1 ) , \ , v|_{k } \in p_{k}(k ) , \ , \forall k \in \mathcal{t}_{h } \ } \ , , \label{bichircl_e41 } \\ & \ & x_{h}=\ { x : [ 0,1 ] \rightarrow \mathbb{r}^{8 } ; \ , x=(x_{1},\ldots , x_{8 } ) , \ , x_{i } \in v_{h } , \ , i=1,\ldots,8 \ } \ , , \nonumber\end{aligned}\ ] ] respectively , where is the space of polynomials in of degree less than or equal to defined on , .let .an element has three nodal points . to obtain a function reduces to obtain a function . in order to obtain a function , we use a basis of functions of .let be the local numeration for the nodes of , where correspond to respectively and corresponds to a node between and .let be the local quadratic basis of functions on corresponding to the local nodes .let be the global numeration for the nodes of $ ] .the two numerations are related by a matrix whose elements are the elements .its rows are indexed by the elements ( by the number of the element in a certain fixed numeration with elements from the set ) and its columns , by the local numeration , that is .a function is defined by its values from the nodes , and a function is defined by its values from the nodes , so , an unknown function , , is reduced to the unknowns , , , , . in ( [ bichircl_e36 ] ) , ( [ bichircl_e37 ] ) , ( [ bichircl_e38 ] ) , approximate , , by , , .taking , given by ( [ bichircl_e43 ] ) , and , for all , for all , we obtain the discrete variant of problem ( [ bichircl_e36 ] ) , ( [ bichircl_e37 ] ) , ( [ bichircl_e38 ] ) as the following problem in , , , , , written suitable for the assembly process , for all , for all . 
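to fix ideas about the discretization just described , here is a small python sketch ( added for illustration , with an arbitrary smooth periodic test function in place of the unknown solution ) that builds the periodic piecewise - quadratic nodal basis on [ 0 , 1 ] , the element - to - node connectivity , and the evaluation of a finite element function from its nodal values .

```python
import numpy as np

K = 20                       # number of elements on [0, 1]
n_dof = 2 * K                # the paper counts 41 nodes; v(0) = v(1) leaves 2*K independent values
h = 1.0 / K

def local_basis(xi):
    """quadratic lagrange basis on the reference element [0, 1], nodes at 0, 1/2, 1."""
    return np.array([2.0 * (xi - 0.5) * (xi - 1.0),
                     -4.0 * xi * (xi - 1.0),
                     2.0 * xi * (xi - 0.5)])

# connectivity: element k owns nodes 2k and 2k+1 and shares node 2k+2 with element k+1
conn = np.array([[2 * k, 2 * k + 1, (2 * k + 2) % n_dof] for k in range(K)])
t_nodes = np.arange(n_dof) * (h / 2.0)

def evaluate(nodal_values, t):
    """evaluate the periodic piecewise-quadratic interpolant at t in [0, 1)."""
    k = min(int(t / h), K - 1)               # element containing t
    xi = (t - k * h) / h                     # local coordinate on the reference element
    return local_basis(xi) @ nodal_values[conn[k]]

# sanity check with a smooth 1-periodic test function
f = lambda t: np.sin(2 * np.pi * t) + 0.3 * np.cos(4 * np.pi * t)
u_h = f(t_nodes)
ts = np.linspace(0.0, 1.0, 400, endpoint=False)
print("max interpolation error with", K, "elements:",
      max(abs(evaluate(u_h, t) - f(t)) for t in ts))
```

assembling the discrete problem then amounts to looping over the elements , evaluating such local basis functions at the quadrature points and accumulating the contributions into the global unknowns through the connectivity array , which is the assembly process referred to above .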
[ figure : projection of the equilibrium curve ; the hopf bifurcation point is marked . ] [ figure : projections of the computed limit cycles for two parameter values ( the second marked by `` x '' ) , obtained with 20 elements ( 41 nodes ) . ] based on and on the computer programs for and , relations ( [ bichircl_e44 ] ) , ( [ bichircl_e44bis ] ) , ( [ bichircl_e45 ] ) , ( [ bichircl_e46 ] ) and the algorithm at the end of section [ bichircl_sect_v ] furnished the numerical results of this section . let be the hopf bifurcation point located during the construction of the equilibrium curve by a continuation procedure in . the solution of ( [ bichircl_e24 ] ) , calculated in , is [ ... ] . ( the eigenvalues of the jacobian matrix , calculated by the qr algorithm , are [ ... ] ) . these data are considered in step 1 of the algorithm at the end of section [ bichircl_sect_v ] . in order to solve ( [ bichircl_e25 ] ) numerically by the algorithm at the end of section [ bichircl_sect_v ] and by ( [ bichircl_e44 ] ) , ( [ bichircl_e44bis ] ) , ( [ bichircl_e45 ] ) , ( [ bichircl_e46 ] ) , we performed calculations using and 500 iterations in the continuation process . integrals were calculated using the gauss integration formula with three integration points . figures [ bichircl_proj_lc ] and [ bichircl_proj_lc_20elem ] present some results obtained using 20 elements ( 41 nodes ) ( , in section [ bichircl_sect_vii ] ) . the curves of the projections of the limit cycles , on the planes indicated in the figure , are plots generated from values calculated in the nodes , corresponding to a fixed value of the parameter . figure [ bichircl_proj_lc_20elem ] shows the projections of two limit cycles calculated for ( iteration 148 ) and for ( iteration 248 , marked by `` x '' in the figure ) . the results obtained are relevant from a biological point of view , pointing to unstable electrical behavior of the modeled system in certain conditions , translated into oscillatory regimes such as early afterdepolarizations ( ) or self - sustained oscillations ( ) , which may in turn synchronize , resulting in life - threatening arrhythmias : premature ventricular complexes or torsades - de - pointes , degenerating into rapid polymorphic ventricular tachycardia or fibrillation ( ) . c. l. bichir , b. amuzescu , a. georgescu , m. popescu , ghe . nistor , i. svab , m. l. flonta , a. d. corlan , _ stability and self - sustained oscillations in a ventricular cardiomyocyte model _ , submitted to interdisciplinary sciences - computational life sciences , springer . c. l. bichir , a. georgescu , b. amuzescu , ghe . nistor , m. popescu , m. l. flonta , a. d. corlan , i.
svab , _ limit points and hopf bifurcation points for a one - parameter dynamical system associated with the luo - rudy i model _ , submitted to mathematics and its applications , http://www.mathematics-and-its-applications.com . e. doedel , _ lecture notes on numerical analysis of nonlinear equations _ , 2007 , http://cmvl.cs.concordia.ca/publications/notes.ps.gz , from the home page of the auto web site , http://indy.cs.concordia.ca/auto/ . w. govaerts , yu . a. kuznetsov , r. khoshsiar ghaziani , h. g. e. meijer , _ cl : a toolbox for continuation and bifurcation of cycles of maps _ , 2008 , http://www.matcont.ugent.be/doc.pdf . d. sato , l. h. xie , a. a. sovari , d. x. tran , n. morita , f. xie , h. karagueuzian , a. garfinkel , j. n. weiss , z. qu , _ synchronization of chaotic early afterdepolarizations in the genesis of cardiac arrhythmias _ , usa , * 106 * ( 2009 ) , 2983 - 2988 . epub 2009 feb 2913 . r. seydel , _ nonlinear computation _ , invited lecture and paper presented at the distinguished plenary lecture session on nonlinear science in the 21st century , 4th ieee international workshops on cellular neural networks and applications , and nonlinear dynamics of electronic systems , sevilla , june 26 , 1996 . d. x. tran , d. sato , a. yochelis , j. n. weiss , a. garfinkel , z. qu , _ bifurcation and chaos in a model of cardiac early afterdepolarizations _ , phys . * 102:258103 * ( 2009 ) . epub 252009 jun 258125 .
|
a one - parameter dynamical system is associated with the mathematical problem governing the membrane excitability of a ventricular cardiomyocyte , according to the luo - rudy i model . limit cycles are described by the solutions of an extended system . a finite element method time approximation ( fem ) is used to formulate the approximate problem . starting from a hopf bifurcation point , approximate limit cycles are obtained , step by step , using an arc - length - continuation method and newton s method . some numerical results are presented . + * key words * : limit cycle , finite element method time approximation , luo - rudy i model , arc - length - continuation method , newton s method . + * 2000 ams subject classifications * : 37n25 37g15 37m20 65l60 90c53 37j25 .
|
data can produce consensus among bayesian agents who initially disagree .it can also test and reject opinions .we relate these two critical uses of data in a model where agents may strategically misrepresent what they know . in each period , either or is observed .let and be two probability measures on such that is absolutely continuous with respect to . if and are -additive then , as shown by , the conditional probabilities of and merge , in the sense that the two posteriors become uniformly close as the amount of observations increases ( -almost surely ) .so , repeated applications of bayes rule lead to consensus among bayesian agents , provided that their opinions were initially compatible .now consider savage s axiomatization of subjective probability .he proposed postulates that characterize a preference relation over bets in terms of a nonatomic finitely additive probability .call such , for short , an _opinion_. savage s framework allows for finitely additive probabilities that are not -additive .in particular , the conclusions of the blackwell and dubins theorem hold for some , but not all , opinions .this flexibility makes savage s framework an ideal candidate to study the connection between the merging and the testing of opinions .we say that an opinion satisfies the _ blackwell dubins property _ if whenever is an opinion absolutely continuous with respect to , the two conditional probabilities merge . by definition , in this subframework , sufficient data produces agreement among bayesian agents who have compatible initial opinions . outside this subframework, bayesian agents may satisfy savage s axioms , have compatible initial opinions and yet persistently disagree .see the for an example .any opinion , whether or not it satisfies the blackwell and dubins property , can be tested and rejected . to reject an opinion , it suffices to find an event that has low probability according to and then reject it if this event is observed .thus , if opinions are honestly reported then the connection between merging and testing opinions is weak . in the absence of incentive problems ,subjective probabilities can be tested and rejected whether or not data produces consensus .now consider the case in which a self - proclaimed expert , named bob , may strategically misrepresent what he knows .let alice be a tester who wants to determine whether bob is an informed expert who honestly reports what he believes or he is an uninformed , but strategic , expert who has reputational concerns and wants to pass alice s test .alice faces an adverse selection problem and uses data to screen the two types of experts .a test is likely to _ control for type i error _ if an informed expert expects to pass the test by truthfully reporting what he believes .a test can be _ manipulated _ if even completely uninformed experts are likely to pass the test , no matter how the data unfolds in the future .the word `` likely '' refers to a possible randomization by the strategic expert to manipulate the test . 
only nonmanipulable tests that control for type i error passinformed experts and may fail uninformed ones .our main results are : in the presence of incentive problems , if opinions must satisfy the blackwell dubins property then there exists a test that controls for type i error and can not be manipulated .if , instead , any opinion is allowed then every test that controls for type i error can be manipulated .thus , in savage s framework strategic experts can not be discredited .however , strategic experts can be discredited if opinions are restricted to a subframework where data produces consensus among bayesian agents with initially compatible views .these results show a strong connection between the merging and the testing of opinions but only under incentive problems .the blackwell dubins theorem has an additional interpretation . in this interpretation , is referred to as the data generating process and is an agent s belief initially compatible with .when the conclusions of the blackwell dubins theorem hold , then and merge and so , the agent s predictions are eventually accurate .thus , multiple repetitions of bayes rule transforms the available evidence into a near perfect guide to the future .it follows that our main results also have an additional interpretation . under incentive problems , strategic expertscan only be discredited if they are restricted to a subframework where opinions that are compatible with the data generating process are eventually accurate .finally , our results relate the literatures on bayesian learning and the literature on testing strategic experts ( see the next section for references ) .they show a strong connection between the framework under which bayesian learning leads to accurate opinions and the framework under which strategic experts can be discredited . the paper is organized as follows .section [ sec2 ] describes the model .section [ sec3 ] reviews the blackwell dubins theorem and defines the blackwell dubins property .section [ sec4 ] contains our main results .section [ sec5 ] relates our results and category tests .section [ sec6 ] considers the case where the set of per - period outcome may be infinite .the contains all proofs and a formal example of a probability that does not satisfy the blackwell dubins property .blackwell and dubins idea of merging of opinions is central in the theory of bayesian learning and bayesian statistics . in bayesian nonparametric statistics ,see the seminal work of , and the more recent work by walker , lijoi and pruenster ( ) . in the theory of bayesian learning , see .we refer to for a connection with the theory of calibration . in game theory, the blackwell dubins theorem is central in the study of convergence to nash equilibrium in repeated games .the main objective is to understand the conditions under which bayesian learning leads to a nash equilibrium [ see , among many contributions , foster and young ( , ) , , fudenberg and levine ( ) , , , kalai and lehrer ( ) , lehrer and smorodinsky ( ) , , nachbar ( , , ) , and young ( , ) ] .a series of papers investigate whether empirical tests can be manipulated . in statistics , see , cesa - bianchi and lugosi ( ) , and olszewski and sandroni ( ) . in economics , see among several contributions , , al - najjar et al .( ) , babaioff et al .( ) , , , , , , , , hu and shmaya ( ) , , olszewski and peski ( ) , olszewski and sandroni ( ) , , , shmaya ( ) , . 
for a review ,see and .see also for a companion paper .in every period an outcome , or , is observed ( all results generalize to the case of finitely many outcomes ) .a _ path _ is an infinite sequence of outcomes and is the set of all paths . given a path and a period , let be the cylinder of length with base .that is , is the set of all paths which coincide with in the first periods .the set of all paths is endowed with a -algebra of _ events _ containing all cylinders .the set is endowed with the product topology . in this topology ,a set is open if and only if it is a countable union of cylinders .we denote by the set of all open subsets of and by the borel -algebra generated by the topology .note that .let be the set of all finitely additive probabilities on .a probability is strongly nonatomic , or _savagean _ , if for every event and every ] if there is a strategy such that for every , if a test is manipulable with high probability , then a uninformed , but strategic expert is likely to pass the test regardless of how the data unfolds and how much data is available . [ de4 ] a test is _ nonmanipulable _ if for every strategy there is a cylinder such that for every path , nonmanipulable tests can reject uninformed experts . no matterwhich strategy bob employs , there is a finite history that , if observed , discredits him .these are the only tests that are likely to pass informed experts and may reject uninformed ones .we now review the main concepts behind the blackwell dubins theorem .[ de5 ] let .the probability _ merges _ with if for every the expression is the distance between the forecasts of and , conditional on the evidence available at time and along the path . the probability merges with if , under , this distance goes to in probability . in particular ,if accurately describes the data generating process then the predictions of are eventually accurate with high probability . in this paper, merging is formulated in terms of convergence in probability rather than almost sure convergence [ as in ] . as is well known ,convergence in probability is particularly convenient in the context of finitely additive probabilities .see , for instance , the discussion in .it is clear that for merging to occur , and must be compatible _ ex - ante_. the notion of absolute continuity formalizes this intuition .[ de6 ] let .the probability is _ absolutely continuous _ with respect to , that is , , if for every sequence of events , if is -additive , then the definition is equivalent to requiring that every event null under is also null under .moreover , if is a savagean probability and is a probability satisfying , then is savagean as well .absolute continuity is ( essentially ) necessary for merging .[ pr1 ] let and for every cylinder . if merges with then . 
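before the blackwell dubins theorem is recalled below , a small sigma - additive illustration may help ( a python sketch added to this text , with arbitrarily chosen parameter values ) : the data generating measure is i.i.d . coin tossing with parameter 0.7 , the announced opinion is a finite mixture of i.i.d . measures whose support contains 0.7 , and the distance between the one - step - ahead forecasts vanishes along typical paths .

```python
import numpy as np

rng = np.random.default_rng(1)
theta0 = 0.7                                   # per-period probability of a 1 under the true measure
thetas = np.array([0.3, 0.5, 0.7, 0.9])        # support of the prior behind the announced opinion
post = np.full(thetas.size, 1.0 / thetas.size)

T = 5000
path = (rng.random(T) < theta0).astype(int)    # a path drawn from the true measure
gaps = []
for x in path:
    forecast = float(post @ thetas)            # opinion's one-step-ahead probability of a 1
    gaps.append(abs(forecast - theta0))        # distance to the true one-step-ahead forecast
    lik = thetas if x == 1 else 1.0 - thetas   # bayes update of the mixture weights
    post = post * lik
    post /= post.sum()

print("forecast gap: first step %.3f, mean of last 100 steps %.5f"
      % (gaps[0], float(np.mean(gaps[-100:]))))
```

because the mixture puts positive weight on the true parameter , every event that is null under the opinion is also null under the data generating measure , so the latter is absolutely continuous with respect to the former ; the decay of the forecast gap is the merging asserted by the theorem .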
in their seminal paper , blackwell and dubins show that when and are -additive then absolute continuity suffices for merging .[ th1 ] let and be -additive probability measures on .if , then merges with .one interpretation of the blackwell dubins theorem is that multiple repetitions of bayes rule lead to an agreement among agents who initially hold compatible opinions .another interpretation is that the predictions of bayesian learners will eventually be accurate ( provided that absolute continuity holds ) .however , the blackwell dubins theorem does not extend to all opinions .this motivates the next definition .[ de7 ] a probability satisfies the _ blackwell dubins property _ if for every , let be the set of all opinions that satisfy the blackwell dubins property .so , an opinion satisfies the blackwell dubins property if it merges to any compatible opinion .we show in the that is _ strictly _ contained in the set of all opinions .that is , some opinions satisfy the blackwell dubins property , while others do not .we also show that the set of probabilities satisfying the blackwell dubins property strictly contains the set of -additive probabilities .we refer the reader to example [ ex1 ] and theorem [ th10 ] , respectively .any exogenously given ( or honestly reported ) opinion can be tested and rejected , whether or not the blackwell dubins property holds .thus , in the absence of strategic considerations , the connection between the merging and the testing of opinions is weak .we now show that this connection is much stronger when there are incentive problems . in this subsection, we describe of the set of opinions on that satisfy the blackwell dubins property .we first show that absolute continuity is essentially a necessary condition for merging .proof of proposition [ pr1 ] assume is not absolutely continuous with respect to .then there exists a sequence of events and some such that but for every .suppose merges with .for every , let be a collection of pairwise disjoint cylinders of length such that for every and .fix .there exists a time large enough such that for every there is a subset such that and for every . because for every , the expression is well defined . for every event , hence , .define we have hence , .now let be large enough such that for every .then , for every , to summarize , for every . therefore , . by definition , ; hence , . a contradiction .hence , does not merge with .our main result is a characterization of the set of opinions that satisfy the blackwell dubins property .we first recall some results on extensions of finitely additive probabilities .let and be two algebras of subsets of such that .given and , call an _ extension of _ _ from _ _ to _ if for every .let be the set of extensions of from to .as is well known , the set is nonempty .moreover , it is a convex and compact subset of .the set of extreme points of has been studied in great generality .we refer the reader to and for further results and references .[ th7 ] fix two algebras and .a probability is an extreme point of if and only if for every and there exists such that .let be the algebra generated by all cylinders of .an event belongs to if and only if it is a finite union of ( pairwise disjoint ) cylinders . recall that is the borel -algebra induced on by the product topology. then .for every let be the restriction of on .it is easy to see that is -additive . by carathodory theorem ,it admits a -additive extension from to , denoted by . the next result is well known .[ le1 ] let . 
for all and in ,if then . we can now state our main result on merging .[ th8 ] let . for every following are equivalent : is an extreme point of . satisfies the blackwell dubins property . for all , if for some then merges with and .( 1)(2 ) .let be an extreme point of .if , then by lemma [ le1 ] . by the blackwell dubins theorem , in particular , the sequence of random variables converges to , -almost surely .therefore , the sequence converges in probability . for each , since the last expression only involves events belonging to , the proof is complete by showing that for every such that . to this end , fix an event and a cylinder such that . by plachky s theorem, there exists a sequence of events in such that as . for each , the inequality implies because and , it follows that and .absolute continuity implies .therefore , thus as claimed .( 2)(3 ) . if then , hence merges with .( 3)(1 ). assume by way of contradiction that is not an extreme point of .then there exist in such that , and . by assumption , merges with .let be a collection of pairwise disjoint cylinders of length such that for every and .for every large enough , there exists a subset such that and for every . for every event , where the first two equalities follow from the fact that for all .since and are arbitrary , we have . a contradiction .therefore , must be an extreme point of .the next result shows a useful property of opinions that satisfy the blackwell dubins property .[ th9 ] let . for every and every ,there exists a partition of such that for each , is a cylinder and .let and fix . because is strongly nonatomic, then there exists a partition of events such that for every . by theorem [ th8 ], is an extreme point of . by plachky s theorem , for each we can find a sequence in such that as .choose large enough such that for each .let and define for each .let and consider the partition .it satisfies for each .moreover , therefore , for each . because each is a finite union of pairwise disjoint cylinders , the proof is complete .the next theorem shows that for every sigma additive probability we can find a continuum of probabilities that agree with on every cylinder , fail -additivity but satisfy the blackwell dubins property .a related result appears in .[ th10 ] let . for every -additive probability the set has cardinality at least .fix a collection \ } ] , let be the algebra generated by .that is , let be defined as for every . because is dense , then for every .hence , is well defined .it satisfies . for each ] and . because , then .hence , for every .therefore , .by assumption , is an extreme point of .hence , .this concludes the proof of the claim . by theorem [ th8 ] , each satisfies the blackwell dubins property .moreover , each agrees with on every cylinder .hence , each -additive must agree with on every borel sets . because the sets \ } ] agrees with on every borel set .thus , there exists at most one -additive probability in \ } ] , which is included in , has cardinality .this completes the proof .[ th2 ] consider the case where any opinion is allowed .let be a test that -controls for type i errors with probability .the test can be manipulated with probability , for every ] . 
there exists a test that -controls for type i error with probability andis nonmanipulable .if bob is free to announce any opinion , then he can not be meaningfully tested and discredited .given any test that controls for type i error , bob can design a strategy which prevents rejection .however , if bob is required to announce opinions satisfying the blackwell dubins property , then it is possible to test and discredit him .these results show a strong connection between the merging and the testing of opinions , but only when there are incentive problems and agents may misrepresent what they know .we now illustrate the basic ideas behind the proof of the two results .the proof of theorem [ th3 ] relies on a characterization of the set of probabilities that satisfy the blackwell dubins property .this characterization is also crucial for the proof of theorem [ th4 ] below .we show that satisfies the blackwell dubins property if and only if it is an extreme point of the set of probabilities which agree with on every cylinder .the proof of necessity in this characterization is simple .suppose , by contradiction , that can be written as the convex combination , where and belong to .clearly , both and are absolutely continuous with respect to .however , does not merge to or .the intuition is that and agree on every finite history and so , the available data delivers equal support to them .the converse requires a deeper argument and relies on plachky s ( ) theorem , which states that is an extreme point of if and only if the probability of every event can be approximated by the probabilities of cylinders .given our characterization , the proof of theorem [ th3 ] can be sketched as follows : let be an opinion satisfying the blackwell dubins property .given that is strongly nonatomic , we can divide into a partition of events such that each of them has probability less than .this property is a direct implication of savage s postulate p6 and plays an important role in our result . for general opinions ,the events may have no useful structure and may not even be borel sets .however , since is an extreme point of , we can invoke plachky s theorem a second time and show that each can be chosen to be a cylinder . nowfix a path .let us say it belongs to . by definition, there is a time such that .we now define a test such that ( note that the period depends on the opinion because the partition depends on it ) . by definition , the test -controls for type i errors with probability .furthermore , it is a nonmanipulable test . given any strategy we can find a period large enough such that rejects all opinions in the ( finite ) support of .therefore , in , the probability of passing the test under is .we now sketch the proof of theorem [ th2 ] .consider a zero - sum game between nature and the expert .nature chooses an opinion and the expert chooses a strategy ( a random device producing opinions ) .the payoff of the expert is the probability of passing the test . for each opinion chosen by nature there exists a strategy for the expert ( to report ) that gives him a payoff of at least . if fan s ( )minmax theorem applies then there exists a strategy guarantees the expert a payoff of at least for _ every _ opinion chosen by nature . 
in this case, the test is manipulable .fan s minmax theorem requires nature s action space to be compact and her payoffs to be ( lower semi ) continuous .the main difficulty is that the set of opinions is not compact in the natural topology , the weak * topology .hence , fan s minmax theorem can not be directly applied .we consider a new game , defined as above except that nature can choose any probability in ( not necessarily savagean ) . by the riesz representation and the banach alaoglu theorems , the set of all finitely additive probabilities satisfy the necessary continuity and compactness conditions for fan s minmax theorem . however , if now nature chooses a _non_-savagean probability , the expert can not replicate her choice because he is restricted to opinions .based on the celebrated hammer sobczyk decomposition theorem , we show the following approximation result : for every there is an opinion such that for every union of cylinders .thus , .it follows that if natures chooses and the expert chooses then he passes the test with probability at least .the proof is now concluded invoking fan s minmax theorem .theorem [ th3 ] provides conditions under which it is feasible to discredit strategic experts .however , even a nonmanipulable test can be strategically passed on some paths . under -additivity , and olszewski and sandroni ( ) construct nonmanipulable _ category _ tests , where uninformed experts fail in all , but a topologically small ( i.e. , _ meager _ ) set of paths .we now show a difficulty in following this approach in the general case of opinions that satisfy the blackwell dubins property .[ de8 ] a collection of subsets of is a _ strictly proper ideal _ if it satisfies the following properties : if and then ; if then ; and no cylinder belongs to .a strictly proper ideal is a collection of sets which can be regarded as `` small . ''property ( 1 ) is the natural requirement that if a set is considered small then a set contained in must also be considered small . properties ( 2 ) and ( 3 ) are satisfied by most commonly used notions of `` small '' sets , such as countable , nowhere dense , meager , sets of lebesgue measure zero and shy sets . 
to clarify our terminology , recall that an ideal is a collection of subsets satisfying properties ( 1 ) and ( 2 ) .an ideal is proper if does not belong to it .we refer to the elements of a strictly proper ideal as _ small _ sets and to their complements as _ large _ sets .strictly proper ideals can be defined in terms of probabilities .given , define a set to be -_null _ if there exists an event that satisfies and .the collection of -null sets is a strictly proper ideal whenever satisfies for every cylinder .[ th4 ] let be a strictly proper ideal .there exists an opinion such that for every event in .there exists an opinion that satisfies the blackwell dubins property and finds all small events to be negligible .the proof of this result relies on the characterization of the set of opinions satisfying the blackwell dubins property that we discussed in the previous section .theorem [ th4 ] shows a basic tension between the control of type i errors and the use of genericity arguments .suppose alice intends to design a test that discredits bob on a large set of paths , irrespectively of his strategy .then the set of paths that do not reject opinion must be small [ otherwise bob could simply announce opinion and pass the test on , a nonsmall set of realizations ] .but if is the opinion obtained from theorem [ th4 ] , we must have .so , the test can not control for type i errors .we have just proved the following corollary .[ co1 ] let be a strictly proper ideal .for every test which -controls type i errors with positive probability there exists a strategy such that the set is not small .thus , the stronger nonmanipulable tests in and olszewski and sandroni ( ) can not be obtained in the general case of opinions that satisfy the blackwell dubins property .in this section , we extend our analysis to the case where the set of per - period outcomes may be infinite .let be a separable metric space of outcomes and denote by the set of paths .as before , is the cylinder of length with base ( in particular ) .the set is endowed with the product topology and a -algebra containing all open sets .we denote by the set of finitely additive probabilities on and by the subset of opinions ( i.e. , strongly nonatomic probabilities ) .let be the set of all cylinders .we say that a function \ ] ] is a _ conditional probability _ if for every and : ; ; and for any event and .the definition of conditional probability follows , where properties ( 1)(3 ) are justified on the basis of de finetti s _ coherence _ principle : a real function defined on satisfies properties ( 1)(3 ) if and only if a bookie , who sets as the price of a conditional bet on event , can not incur in a dutch book .we refer the reader to regazzini ( ) , and for a precise statement and a formal discussion .a conditional probability is a _conditional opinion _ if is strongly nonatomic .we denote by and by the sets of conditional probabilities and conditional opinions , respectively . to simplify the exposition , given an event and a conditional probability , we use the notation instead of the more precise . 
at time , bob is required to announce a conditional opinion .so , a test is now a function mapping each conditional opinion to an open subset of .the definitions of type i errors , manipulable and nonmanipulable tests are analogous to the definitions of section [ sec2 ] and can be obtained by replacing with .we now extend the definition of merging of opinions .we say that is a _ discrete space _ if it is countable and endowed with the discrete topology .[ de9 ] let be a discrete space . if , the conditional probability _ merges _ with if for every , the next definition is based on .[ de10 ] let be a discrete space .a conditional probability satisfies the _ blackwell dubins property _ if for every probability such that such that let be the set of all conditional opinions that satisfy the blackwell dubins property .we now show that the connection between testability and merging of opinions extends to this setup .[ th5 ] let be a separable metric space .consider the case where any conditional opinion is allowed .let be a test that -controls for type i errors with probability .the test can be manipulated with probability , for every ] .there exists a test that -controls for type i error with probability and is nonmanipulable .if it is possible for bob to announce any conditional opinion , then he can not be meaningfully tested and discredited .if bob is restricted to conditional opinions satisfying the blackwell dubins property , then it is possible to test and discredit him .the proof of theorem [ th5 ] follows the proof of theorem [ th2 ] .the proof of theorem [ th6 ] is based on the following result : a conditional opinion satisfies for every path .this step requires a new argument , because the characterization of blackwell dubins property used in the proof of theorem [ th3 ] does not readily extend to the case where is infinite .once this continuity property is shown to hold , the proof continues as in theorem [ th3 ] .we fix a path and for each we choose a large enough period such that . because is assumed to be a discrete space , each cylinder is open .therefore , we can define test such that for every . following the proof of theorem [ th3 ] ,we show that is nonmanipulable .[ app ] [ [ section ] ] we now provide an example of an opinion that violatesthe blackwell dubins property . [ ex1 ]let and . denote by the coordinate projections on .for every , let be the -additive probability defined as and let be the -additive probability defined as for all .consider the opinion , where is a finitely additive probability on such that for every .the finite additivity of the mixture may reflect the difficulty of predicting _ when _ the per - period probability of observing the outcome will change from to . clearly , .however , does not merge with .to this end , let be the set of all paths where the outcome appears infinitely often .then for every and .for every cylinder , we have and moreover , for every and every .thus , does not merge with . to minimize repetitions , throughout the stands for either or .for every algebra of subsets of denote by the space of finitely additive probabilities defined on .when , we write instead of . 
we denote by the set of opinions ( strongly nonatomic probabilities ) .the space is endowed with the weak * topology .it is the coarsest topology for which the functional is continuous for every function that has finite range and is measurable with respect to .this should not be confused with the more common weak * topology generated by bounded _continuous _ functions .we now provide a technical result important for the proofs of theorems [ th2 ] and [ th5 ] . throughout this subsection , .a -probability _ _ is a probability that satisfies for every .[ th11 ] let be closed under finite intersection .there exists a -probability such that for every .this is a corollary of the ultrafilter theorem .see , for instance , aliprantis and border ( ) , theorem 2.19. every can be decomposed into a strongly nonatomic part and a mixture of countably many -probabilities .[ th12 ] for every there exists an opinion and a sequence of -probabilities such that where ] as for every .it follows from the additivity of the integral that . for every open set , if then .hence , . it remains to prove that is strongly nonatomic . by theorem [ th13 ] ,it is enough to prove it is strongly continuous .fix .since is strongly continuous , we can find a partition of such that for every .consider now the partition of .for every and , we have where is the indicator function of . therefore , and .this proves that is strongly continuous .now let be any finitely additive probability . by the hammer sobczyk decomposition , we can write as the convex combination where is strongly nonatomic and each is a -probability . for each ,let be an opinion such that for every open set .define .it is easy to see that is strongly continuous . by theorem [ th13 ], it is an opinion . for every open set , we have as desired .[ th15 ] let and be convex subsets of two vector spaces .let .if is compact hausdorff and is concave with respect to and convex and lower semicontinuous with respect to , then see for a more general version of this theorem .proof of theorem [ th2 ] define the function as for every .the function is affine in each variable and continuous with respect to .the weak * topology is hausdorff .moreover , it follows from the riesz representation and banach alaoglu theorems that is compact [ see aliprantis and border ( ) , theorems 14.4 and 6.21 ] .all the conditions of fan s minmax theorem are verified , therefore , by theorem [ th14 ] , for every there exists such that for every open set . because is an open set , then .that is , .thus , for every ] as for each .we verify that is well defined . using the notation in ( [ eqd2 ] ) , let .equivalently , therefore , .hence , is small . but also is small , hence is small .the set is either empty or a union of cylinders . by the definition of strictly proper ideal be empty . similarly , .hence , , and .we prove is additive .let and be two disjoint sets belonging to .the sets and are disjoint . to see this , notice that implies .the set is either empty or a union of cylinders .since is small , it must be empty .let and . similar to ( [ eqd3 ] ) , we have therefore , is a finitely additive probability defined on . by construction, it satisfies for every .consider the set of extensions and let be one of its extreme points .we prove that is an extreme point of .write as with .let and be the restriction of and on .since is an extension of , we have .we claim that . for every , since , then . therefore , for every event .the same is true for .therefore , .this proves that . 
because is an extreme point of ,then .this concludes the proof that is an extreme point of . by theorem [ th8 ] , satisfies the blackwell dubins property .it remains to prove that is strongly nonatomic .since for every , is strongly continuous . a fortiori , is strongly continuous and also strongly nonatomic by theorem [ th13 ] .proof of theorem [ th5 ] define the function as for all . given , by theorem [ th14 ]there exists an opinion such that for every open set . by theorem 4 in , we can find a conditional opinion such that .then .the proof is complete by replicating the argument used in the proof of theorem [ th2 ] .proof of theorem [ th6 ] we first prove that for every and every path , .we argue by contradiction .let be a path such that .fix a sequence of positive real numbers such that for every .fix . because is strongly nonatomic , for every time we can find an event such that . for every , we have we can therefore fix large enough such that satisfies for every .let be the opinion defined as for every event . then . by theorem 4 in , we can find a conditional opinion satisfying .the proof of the claim will be concluded by showing that does not merge with .note that for every hence , for all moreover , for every where the first equality follows from and the second equality follows from . because the sequence is bounded away from , does not merge to .therefore , we can conclude that for every and every path , .now fix a path and .we can find for every a time such that . because is endowed with the discrete topology, is an open set .let for every .the test -controls for type i error with probability .the same argument in the proof of theorem [ th3 ] shows that is nonmanipulable .we thank ehud kalai , wojciech olszewski , eran shmaya , marciano siniscalchi and rakesh vohra for useful discussions .we are grateful to the editor and the referees for their thoughtful comments , for simplifying example [ ex1 ] and for stimulating the results in section [ sec6 ] .we also thank the seminar audiences at the fifth transatlantic theory workshop , the summer meeting of the econometric society 2012 , xiii latin american workshop in economic theory , the 4th workshop on stochastic methods in game theory , the washington university seminar series and the paris game theory seminar .all errors are ours .
|
we study the merging and the testing of opinions in the context of a prediction model . in the absence of incentive problems , opinions can be tested and rejected , regardless of whether or not data produces consensus among bayesian agents . in contrast , in the presence of incentive problems , opinions can only be tested and rejected when data produces consensus among bayesian agents . these results show a strong connection between the testing and the merging of opinions . they also relate the literature on bayesian learning and the literature on testing strategic experts .
|
the function of proteins often involves conformational changes during the binding or unbinding of ligand molecules .central questions are how these conformational changes are coupled to the binding processes , and how they affect the function of the proteins and the inhibition of this function by drug molecules . for some proteins , a conformational change has been proposed to occur _ predominantly after _ a binding or unbinding process , apparently ` induced ' by this process .for other proteins , a conformational change has been suggested to occur _ predominantly prior _ to a binding or unbinding process , which has been termed ` conformational selection ' since the ligand appears to select a conformation for binding or unbinding .binding _ via _ conformational selection implies induced - change unbinding , and _ vice versa _ , since the ordering of events is reversed in the binding and unbinding direction . * figure 1 * : 7-state model for catalysis and inhibition of an enzyme with induced - fit binding mechanism . in this model , substrate molecules andinhibitor molecules first bind in conformation of the enzyme .these binding events induce changes into conformation in which the substrate is converted into the product . in the case of the hiv-1 protease, the conformation corresponds to the semi - open conformation , and the conformation to the closed conformation .[ figure_7states ] in this article , we extend classical models of enzyme catalysis and inhibition by including a conformational change during the binding and unbinding of substrate , product , or inhibitor molecules . our aim is to investigate how the conformational change affects the catalytic rates in the presence and absence of inhibitor molecules , and how non - active - site mutations that shift the conformational equilibrium alter these catalytic rates .we focus on enzymes with induced - change binding mechanism since many enzymes close rather tightly over substrate or inhibitor molecules during binding .binding via an induced - change mechanism , i.e. prior to the change from the ` open ' to the ` closed ' conformation of these enzymes , is required if the entry and exit of the ligand molecules are sterically obstructed in the closed conformation . in the reverse direction ,the ligands then unbind _ via _ conformational selection because the change from the closed to open conformation has to occur prior to the unbinding process .classical models of competitive inhibition , in which the inhibitor binds to the same site as the substrate , involve four states : an empty state of the enzyme , and three states , , and in which the enzyme is bound to a substrate molecule s , a product molecule p , or an inhibitor molecule i . in our extended model for enzymes withinduced - change binding mechanism , the enzyme can adopt two conformations , an ` open ' conformation 1 , and a ` closed ' conformation 2 .the model has seven states because induced - change binding involves an open state and a closed state with bound substrate , two such states and with bound product , and two states and with bound inhibitor , besides the empty state ( see fig . 1 ) . 
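to make the kinetic scheme of fig . 1 concrete , the following python sketch ( added here for illustration ; all rate constants and concentrations are arbitrary placeholder values chosen to respect the fast - relaxation assumption , not measured parameters ) builds the 7-state master equation , solves for the steady - state occupancies , and evaluates the catalytic flux through the catalytic step .

```python
import numpy as np

# states: 0=E1, 1=E1S, 2=E2S, 3=E2P, 4=E1P, 5=E1I, 6=E2I
S, P, I = 10.0, 0.0, 0.0           # substrate, product, inhibitor concentrations (arbitrary units)
rates = {                           # illustrative values only
    (0, 1): 1.0 * S,  (1, 0): 2.0,       # substrate binding / unbinding in the open conformation 1
    (1, 2): 50.0,     (2, 1): 0.5,       # fast relaxation 1 -> 2 and slow excitation 2 -> 1 (substrate)
    (2, 3): 10.0,     (3, 2): 0.1,       # catalytic step and its reverse
    (3, 4): 0.5,      (4, 3): 50.0,      # slow excitation 2 -> 1 and fast relaxation 1 -> 2 (product)
    (4, 0): 20.0,     (0, 4): 1.0 * P,   # product release / rebinding in conformation 1
    (0, 5): 1.0 * I,  (5, 0): 0.01,      # inhibitor binding / unbinding in conformation 1
    (5, 6): 50.0,     (6, 5): 0.5,       # conformational change in the inhibitor-bound state
}

Q = np.zeros((7, 7))
for (i, j), r in rates.items():
    Q[i, j] = r
np.fill_diagonal(Q, -Q.sum(axis=1))

# stationary occupancies: pi Q = 0 with the probabilities summing to one
A = np.vstack([Q.T, np.ones(7)])
b = np.zeros(8)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

flux = rates[(2, 3)] * pi[2] - rates[(3, 2)] * pi[3]   # net flux through the catalytic step
print("steady-state occupancies:", np.round(pi, 4))
print("catalytic flux per enzyme:", round(float(flux), 4))
```

setting the inhibitor concentration to a positive value shifts occupancy into the inhibitor - bound states and lowers the computed flux , which is the competitive inhibition captured by the factor ( 1 + [ i ] / k_i ) in the michaelis - menten expression given below .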
with our extended model ,we derive general expressions for the catalytic rates of enzymes with induced - change binding mechanism .the catalytic rates of these enzymes depend on ( i ) the binding and unbinding rates of substrate , product , and inhibitor molecules in the open conformation 1 , ( ii ) the forward and backward rates of the catalytic step , and ( iii ) the transition rates between the two conformations in the bound states of the enzyme ( see eqs .( [ michaelismenten ] ) to ( [ pof ] ) ) .our general expressions for the catalytic rates lead to an effective four - state model with a single substrate - bound state , a single product - bound , and a single inhibitor - bound state ( see fig .2 ) , but with effective on- and off - rates of substrate and product molecules that depend on the conformational transition rates ( see eqs . ( [ son ] ) to ( [ pof ] ) ) .the role of the conformational changes for catalysis and inhibition can be revealed by non - active - site mutations that slightly shift the conformational equilibrium , but do not interfere directly with binding and catalysis in the active site of the enzymes .several groups have suggested that such shifts in the conformational equilibrium might explain why non - active - site mutations can contribute to multi - drug resistance , i.e. to an increase of catalytic rates in the presence of different inhibitory drugs .based on our general results for enzymes with induced - change binding mechanism , we investigate how these mutations affect catalysis and inhibition , and distinguish two cases . in case 1 , the maximum catalytic rate of the enzyme is limited by the unbinding of the product .we find that the catalytic rate in the presence of inhibitors depends exponentially on the mutation - induced change of the free - energy difference between the two conformations of the enzyme in this case ( see eqs .( [ vitality_def ] ) and ( [ vitality1 ] ) ) .non - active - site mutations with that slightly destabilize the closed conformation 2 relative to the open conformation 1 of the enzyme lead to an increase in the catalytic rate , irrespective of the inhibitor .such non - active - site mutations thus contribute to a multi - drug - resistance of the enzyme . in case 2, the maximum catalytic rate of the enzyme is limited by the forward rate of the catalytic step . in this case ,mutation - induced changes of the conformational equilibrium have no effect on the catalytic rate in the presence of inhibitors .a comparison with experimental data for the non - active - site mutation l90 m of the hiv-1 protease indicates that this enzyme appears to follow case 1 , which implies that non - active - site mutations that slightly destabilize the closed conformation contribute to multi - drug resistance .we consider an enzyme with two conformations and that binds its substrate _ via _ an induced - change mechanism .we assume that the catalytic step occurs in conformation 2 of the enzyme , and that the inhibitor binds to the same site as the substrate ( ` competitive inhibition ' ) .the catalytic cycle of the enzyme and the inhibition of this cycle then can be described by the 7-state model shown in fig . 
1 .the catalytic rate depends on the 14 forward and backward rates between the 7 states of the model .these rates are : \(i ) the _ binding rates _ ] , and ] , ] denote the concentrations of these molecules .\(ii ) the forward and reverse _ rates of the catalytic step _ and .\(iii ) the _ conformational transition rates _ , , , , and between the substrate - bound states and , product - bound states and , and inhibitor - bound states and .we assume that the bound states , , and with conformation 2 of the enzyme are the ground - state conformations , while the conformations , , and with conformation 1 of the enzyme are excited - state conformations .this assumption is valid if experimental structures indicate that an enzyme adopts conformation 1 in its unbound state and conformation 2 in its bound states , since the experimental structures correspond to ground - state conformations .the assumption implies i.e. the excitation rates , , and are much smaller than the corresponding ground - state relaxation rates , , and .* figure 2 * : effective 4-state model for catalysis and inhibition of an enzyme with induced - fit binding mechanism .the effective binding and unbinding rates of substrate and product given in eqs .( [ son ] ) to ( [ pof ] ) connect this model to the 7-state model of fig . 1 .[ figure_4states ] one of our main results is that the catalytic rate can be written in the michaelis - menten form ( see appendix a ) }{(1 + [ i]/k_i)k_m + [ s ] } \label{michaelismenten}\ ] ] with and for negligible product concentration ] . in deriving eqs .( [ michaelismenten ] ) to ( [ pof ] ) from the exact solution , we have assumed that the excitation rate in the product - bound state is much smaller than the relaxation rate in the substrate - bound state , besides eq .( [ transition_rates ] ) ( see appendix a ) .this assumption is reasonable if the relaxation rates and in the substrate- and product - bound states are of similar magnitude , since ( see eq .( [ transition_rates ] ) ) then implies also . our general results for the 7-state model of fig .1 lead to an effective 4-state model of catalysis and inhibition , see fig . 2. the catalytic rate of this 4-state model is described by eqs .( [ michaelismenten ] ) to ( [ km ] ) with for negligible product concentration ] and ] and / k_i \gg [ s] ] . for \ll k_i ] , in contrast , the catalytic rate of the enzyme is independent of ] and ^\prime / k_i^\prime \gg [ s] ] .the derivation in the general case with > 0 ] , the two states and have zero probability .the 7-state model thus reduces to a model with 5 states .the steady - state probabilities , , , , and of these 5 states can be calculated from the 5 equations ( see e.g. 
) = 0 \label{eq2}\ ] ] + p(e_2s ) s_{21 } - p(e_1s ) ( s_{12}+ s_{- } ) = 0\ ] ] while eq .( [ eq1 ] ) ensures probability normalization , the eqs .( [ eq2 ] ) to ( [ eq5 ] ) are the flux balance conditions for the states , , , and in the steady state , in which the inward flux into a state equals the outward flux from this state .we have assumed that the product concentration ] are negligible in eq .( [ eq2 ] ) .the catalytic flux of the enzyme is defined as the steady - state flux along the cycle : from eqs .( [ eq1 ] ) to ( [ catalytic_flux ] ) , we obtain the exact result for catalytic flux }{b + c [ s]}\ ] ] with for and ( see eq .( [ transition_rates ] ) ) , eq .( [ c_exact ] ) can be simplified to if we further assume ( see text after eq .( [ pof ] ) for justification ) , we obtain the eqs .( [ a_exact ] ) and ( [ c_simplified ] ) now lead to the maximal catalytic flux this expression for the maximal catalytic flux is identical to eq .( [ kmax ] ) with given in eq .( [ pof ] ) .similarly , the eqs .( [ b_exact ] ) and ( [ c_simplified ] ) lead to an expression for that is identical to eq .( [ km ] ) with eqs .( [ son ] ) to ( [ pof ] ) .we consider here the induced - fit binding process of an inhibitor to an enzyme that can adopt two conformations and : \rightleftharpoons i_{-} i_{12} \rightleftharpoons i_{21} ] of the inhibitor is much larger than the enzyme concentration , which implies pseudo - first - order kinetics with approximately constant inhibitor concentration and , thus , constant binding rate $ ] .the effective on- and off - rates of the two - step process ( [ inhibition ] ) can be determined from the dominant relaxation rate of this process .since the excitation rate of the bound excited state is much smaller than the relaxation rate into the bound ground state , the effective off - rate from to is : the effective on - rate follows from the condition that the effective equilibrium constant of the two - step process has to be equal to the product of the equilibrium constants for the two substeps . with eq .( [ dof_full ] ) , this condition leads to m. elinder , p. selhorst , g. vanham , b. oberg , l. vrang , u. h. danielson , inhibition of hiv-1 by non - nucleoside reverse transcriptase inhibitors via an induced fit mechanism - importance of slow dissociation and relaxation rates for antiviral efficacy , biochem .80 ( 2010 ) 11331140 .s. fieulaine , a. boularot , i. artaud , m. desmadril , f. dardel , t. meinnel , c. giglione , trapping conformational states along ligand - binding dynamics of peptide deformylase : the impact of induced fit on enzyme catalysis , plos biol . 9( 2011 ) e1001066 .o. f. lange , n .- a .lakomek , c. fares , g. f. schrder , k. f. a. walter , s. becker , j. meiler , h. grubmller , c. griesinger , b. l. de groot , recognition dynamics up to microseconds revealed from an rdc - derived ubiquitin ensemble in solution , science 320 ( 2008 ) 14711475 .y. cheng , w. h. prusoff , relationship between the inhibition constant ( ) and the concentration of inhibitor which causes 50 per cent inhibition ( ) of an enzymatic reaction , biochem .22 ( 23 ) ( 1973 ) 30993108 .s. gulnik , l. suvorov , b. liu , b. yu , b. anderson , h. mitsuya , j. erickson , kinetic characterization and cross - resistance patterns of hiv-1 protease mutants selected under drug pressure , biochemistry 34 ( 1995 ) 92829287 .m. prabu - jeyabalan , e. nalivaika , c. 
schiffer , substrate shape determines specificity of recognition for hiv-1 protease : analysis of crystal structures of six substrate complexes , structure 10 ( 2002 ) 369381 .m. d. altman , a. ali , g. s. k. k. reddy , m. n. l. nalam , s. g. anjum , h. cao , s. chellappan , v. kairys , m. x. fernandes , m. k. gilson , c. a. schiffer , t. m. rana , b. tidor , hiv-1 protease inhibitors from inverse design in the substrate envelope exhibit subnanomolar binding to drug - resistant variants , j. am . chem . soc . 130 ( 2008 ) 60996113 .m. n. l. nalam , a. ali , m. d. altman , g. s. k. k. reddy , s. chellappan , v. kairys , a. ozen , h. cao , m. k. gilson , b. tidor , t. m. rana , c. a. schiffer , evaluating the substrate - envelope hypothesis : structural analysis of novel hiv-1 protease inhibitors designed to be robust against drug resistance , j. virol .84 ( 2010 ) 53685378 .l. nicholson , t. yamazaki , d. torchia , s. grzesiek , a. bax , s. stahl , j. kaufman , p. wingfield , p. lam , p. jadhav , c. hodge , p. domaille , c. chang , flexibility and function in hiv-1 protease , nat .biol . 2 ( 1995 ) 274280 .r. ishima , d. freedberg , y. wang , j. louis , d. torchia , flap opening and dimer - interface flexibility in the free and inhibitor - bound hiv protease , and their implications for function , structure 7 ( 1999 ) 10471055 .d. freedberg , r. ishima , j. jacob , y. wang , i. kustanovich , j. louis , d. torchia , rapid structural fluctuations of the free hiv protease flaps in solution : relationship to crystal structures and comparison with predictions of dynamics calculations , protein sci . 11( 2002 ) 221232 .v. y. torbeev , h. raghuraman , k. mandal , s. senapati , e. perozo , s. b. h. kent , dynamics of `` flap '' structures in three hiv-1 protease / inhibitor complexes probed by total chemical synthesis and pulse - epr spectroscopy , j. am .131 ( 2009 ) 884885 .a. perryman , j. lin , j. mccammon , hiv-1 protease molecular dynamics of a wild - type and of the v82f / i84v mutant : possible contributions to drug resistance and a potential new target site for drugs , protein sci .13 ( 2004 ) 11081123 .deng , w. zheng , e. gallicchio , r. m. levy , insights into the dynamics of hiv-1 protease : a kinetic network model constructed from atomistic simulations , j. am .133 ( 24 ) ( 2011 ) 938794 .b. maschera , g. darby , g. pal , l. l. wright , m. tisdale , r. myers , e. d. blair , e. s. furfine , human immunodeficiency virus .mutations in the viral protease that confer resistance to saquinavir increase the dissociation rate constant of the protease - saquinavir complex , j. biol .( 1996 ) 3323133235 .l. hong , x. c. zhang , j. a. hartsuck , j. tang , crystal structure of an in vivo hiv-1 protease mutant in complex with saquinavir : insights into the mechanisms of drug resistance , protein sci . 9( 2000 ) 18981904 .b. mahalingam , j. louis , j. hung , r. harrison , i. weber , structural implications of drug - resistant mutants of hiv-1 protease : high - resolution crystal structures of the mutant protease / substrate analogue complexes , proteins 43 ( 2001 ) 455464 .a. y. kovalevsky , y. tie , f. liu , p. i. boross , y .- f .wang , s. leshchenko , a. k. ghosh , r. w. harrison , i. t. weber , effectiveness of nonpeptide clinical inhibitor tmc-114 on hiv-1 protease with highly drug resistant mutations d30n , i50v , and l90 m , j. med .49 ( 2006 ) 13791387 .d. xie , s. gulnik , e. gustchina , b. yu , w. shao , w. qoronfleh , a. nathan , j. 
erickson , drug resistance mutations can affect dimer stability of hiv-1 protease at neutral ph , protein sc .8 ( 1999 ) 17021707 .b. mahalingam , y. wang , p.boross , j. tozser , j. louis , r. harrison , i. weber , crystal structures of hiv protease v82a and l90 m mutants reveal changes in the indinavir - binding site , eur .j. biochem .271 ( 2004 ) 15161524 .m. kozsek , j. bray , p. rezcov , k. saskov , j. brynda , j. pokorn , f. mammano , l. rulsek , j. konvalinka , molecular analysis of the hiv-1 resistance development : enzymatic activities , crystal structures , and thermodynamics of nelfinavir - resistant hiv protease mutants , j. mol .( 2007 ) 100510016 .u. nillroth , l. vrang , p. markgren , j. hulten , a. hallberg , u. danielson , human immunodeficiency virus type 1 proteinase resistance to symmetric cyclic urea inhibitor analogs , antimicrobial agents and chemotherapy 41 ( 1997 ) 23832388 . c. f. shuman , p. o. markgren , m. hmlinen , u. h. danielson , elucidation of hiv-1 protease resistance by characterization of interaction kinetics between inhibitors and enzyme variants , antiviral res .58 ( 2003 ) 235242 .j. ermolieff , x. lin , j. tang , kinetic properties of saquinavir - resistant mutants of human immunodeficiency virus type 1 protease and their implications in drug resistance in vivo , biochemistry 36 ( 1997 ) 1236412370 .
A central question is how the conformational changes of proteins affect their function and the inhibition of this function by drug molecules. Many enzymes change from an open to a closed conformation upon binding of substrate or inhibitor molecules. These conformational changes have been suggested to follow an induced-fit mechanism in which the molecules first bind in the open conformation, in those cases where binding in the closed conformation appears to be sterically obstructed, such as for the HIV-1 protease. In this article, we present a general model for the catalysis and inhibition of enzymes with induced-fit binding mechanism. We derive general expressions that specify how the overall catalytic rate of the enzymes depends on the rates for binding, for the conformational changes, and for the chemical reaction. Based on these expressions, we analyze the effect of mutations that mainly shift the conformational equilibrium on catalysis and inhibition. If the overall catalytic rate is limited by product unbinding, we find that mutations that destabilize the closed conformation relative to the open conformation increase the catalytic rate in the presence of inhibitors by a factor $e^{\Delta\Delta G / k_B T}$, where $\Delta\Delta G$ is the mutation-induced shift of the free-energy difference between the conformations. This increase in the catalytic rate due to changes in the conformational equilibrium is independent of the inhibitor molecule and, thus, may help to understand how non-active-site mutations can contribute to the multi-drug resistance that has been observed for the HIV-1 protease. A comparison to experimental data for the non-active-site mutation L90M of the HIV-1 protease indicates that the mutation slightly destabilizes the closed conformation of the enzyme.

Keywords: enzyme dynamics, induced fit, conformational selection, HIV-1 protease, non-active-site mutation, multi-drug resistance
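As a numerical note on the factor stated above for the product-unbinding-limited case, the short sketch below evaluates the rate increase for a few values of the free-energy shift. Both the explicit Boltzmann form $e^{\Delta\Delta G / k_B T}$ used here and the example values of the shift are assumptions for illustration; the exact expressions are given in the equations of the main text.

```python
import math

kT = 0.593  # k_B * T in kcal/mol at roughly 298 K

# Assumed Boltzmann form of the case-1 result: a mutation-induced destabilization
# ddG of the closed conformation rescales the catalytic rate in the presence of
# inhibitors by exp(ddG / kT), independently of the inhibitor.
for ddG in (0.2, 0.5, 1.0):  # kcal/mol, illustrative values only
    print(f"ddG = {ddG:.1f} kcal/mol  ->  rate increase by a factor {math.exp(ddG / kT):.2f}")
```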
in this paper , we consider one of the fundamental equations of nonrelativitic quantum mechanics , the maxwell schrdinger ( m - s ) system , which describes the time - evolution of an electron within its self - consistent generated and external electromagnetic fields . in this system, the schrdinger s equation can be written as follows : ^{2 } + q \phi+v\big\rbrace\psi \quad { \rm in } \,\,\ , \omega_{t } , } \\[2 mm ] \end{array}\ ] ] where .here , , and denote the wave function , the mass , and the charge of the electron , respectively . is the time - independent potential energy and is assumed to be bounded in this paper .the magnetic potential and the electric potential are obtained by solving the following equations : where the electric fields and the magnetic fields satisfy the maxwell s equations : { \displaystyle \frac{1}{{\mu}}\nabla \times \mathbf{b } - \epsilon\frac{\partial \mathbf{e}}{\partial t}=\mathbf{j } , \quad \nabla\cdot(\epsilon \mathbf{e } ) = \rho . } \end{array}\ ] ] here and denote the electric permittivity and the magnetic permeability of the material , respectively .the charge density and the current density are defined as follows where denotes the conjugate of . substituting ( [ eq:1.2 ] ) into ( [ eq:1.3 ] ) and combining ( [ eq:1.1 ] ) and ( [ eq:1.4 ] ) , we have the following m - s system ^{2 } + q \phi+v \right\rbrace\psi \,\quad { \rm in } \,\,\ , \omega_{t},}\\[2 mm ] { \displaystyle -\frac{\partial}{\partial t}\nabla\cdot\big(\epsilon\mathbf{a}\big ) -\nabla\cdot\big(\epsilon\nabla\phi\big ) = q |\psi|^{2 } \quad \quad \quad \quad { \rm in } \,\,\, \omega_{t } , } \\[2 mm ] { \displaystyle \epsilon\frac{\partial ^{2}\mathbf{a}}{\partial t^{2}}+\nabla\times \big({\mu}^{-1}\nabla\times \mathbf{a}\big ) + \epsilon \frac{\partial ( \nabla \phi)}{\partial t } = \mathbf{j } \quad { \rm in } \,\,\ ,\omega_{t } , } \\[2 mm ] { \displaystyle \mathbf{j}=-\frac{\mathrm{i}q\hbar}{2m}\big(\psi^{\ast}\nabla{\psi}-\psi\nabla{\psi}^{\ast}\big)-\frac{\vert q\vert^{2}}{m}\vert\psi\vert^{2}\mathbf{a } \,\,\quad { \rm in } \,\,\, \omega_{t } , } \\[2 mm ] { \displaystyle \psi , \phi , \mathbf{a } \,\ , \mathrm{subject \ to \ the \ appropriate \ initial \and\ boundary \conditions}. } \end{array } \right.\ ] ] we assume that is a bounded domain in ( ) . the total energy of the system , at time , are defined as follows { \displaystyle \qquad \qquad + \frac{\epsilon}{2}|\mathbf{e}(t,\mathbf{x})|^{2 } + \frac{1}{2\mu}|\mathbf{b}(t,\mathbf{x})|^{2 } \big)\mathrm{d}\mathbf{x}. } \end{array}\ ] ] for a smooth solution satisfying certain appropriate boundary conditions , the energy is a conserved quantity .it is well known that the solutions of the above m - s system are not uniquely determined .in fact , the m - s system is invariant under the gauge transformation : for any sufficiently smooth function .that is , if satisfies m - s system , then so does .in view of the gauge freedom , to obtain mathematically well - posed equations , we can impose some extra constraint , commonly known as gauge choice , on the solutions of the m - s system . in this paper, we study the m - s system in the coulomb gauge , i.e. . by employing the atomic units , i.e. 
, and assuming that , the m - s system in the coulomb gauge ( m - s - c ) can be reformulated as follow : { \displaystyle \frac{\partial ^{2}\mathbf{a}}{\partial t^{2}}+\nabla\times ( \nabla\times \mathbf{a } ) + \frac{\partial ( \nabla \phi)}{\partial t}+\frac{\mathrm{i}}{2}\big(\psi^{*}\nabla{\psi}-\psi\nabla{\psi}^{*}\big ) } \\[2 mm ] { \displaystyle \qquad\qquad+\vert\psi\vert^{2}\mathbf{a}=0 \quad { \rm in } \,\,\ , \omega_{t},}\\[2 mm ] { \displaystyle \nabla \cdot \mathbf{a } = 0 , \quad -\delta \phi = \vert\psi\vert^{2}\quad { \rm in } \,\,\ , \omega_{t } } .\end{array } \right.\ ] ] in this paper , the m - s - c system ( [ eq:1.8 ] ) is considered in conjunction with the following initial boundary conditions : { \displaystyle \psi(\mathbf{x},0 ) = \psi_0(\mathbf{x}),\quad\mathbf{a}(\mathbf{x},0)=\mathbf{a}_{0}(\mathbf{x}),\quad \mathbf{a}_{t}(\mathbf{x},0)=\mathbf{a}_{1}(\mathbf{x } ) \quad \rm{in}\;\omega , } \end{array } \right.\ ] ] with . in the m - s - c system, the energy takes the following form \psi\big\vert^{2 } + v|\psi|^{2 } + \frac{1}{2}\big|\frac{\partial \mathbf{a}}{\partial t}\big|^{2 } + \frac{1}{2}|\nabla \times \mathbf{a}|^{2 } + \frac{1}{2}|\nabla \phi|^{2 } \right)\;\mathrm{d}\mathbf{x}. } \end{array}\ ] ] [ rem1 - 1 ] the boundary condition implies that the particle is confined in the domain .the boundary conditions and denote the perfect conductive boundary condition ( pec ) .we refer readers to for the determination of the boundary conditions for the vector potential and the scalar potential in different electromagnetic environment .the existence and uniqueness of ( smooth ) solutions to the time - dependent m - s equations ( [ eq:1.5 ] ) in all of or have been studied in .however , these results do nt hold for bounded domains because some important tools used in these work ca nt apply to bounded domains .for example , strichartz estimates and many tools from fourier analysis . in this paper, we will prove the existence of weak solutions to the m - s - c system in a bounded smooth domain by galerkin s method and compactness arguments .to the best of our knowledge , this is the first result on the existence of weak solutions to the inital - boundary problem of the m - s - c system in a bounded smooth domain . in recent years , with the development of nanotechnology , there has been considerable interest in developing physical models and numerical methods to simulate light - matter interaction at the nanoscale .due to the natural coupling of the electromagnetic fields and quantum effects , the maxwell schrdinger model is widely used in simulating self - induced transparency in quantum dot systems , laser - molecule interaction , carrier dynamics in nanodevices and molecular nanopolaritonics . however , in these existing methods , the maxwell s equations of field type ( [ eq:1.3 ] ) , instead of the potential type in ( [ eq:1.5 ] ) , are usually coupled to the schrdinger s equation through the dipole approximation or by extracting the vector potential and the scalar potential from the electric field and the magnetic field . 
in part because there exists robust numerical algorithms for the maxwell s equations ( [ eq:1.3 ] ) , for example , the time domain finite difference ( fdtd ) method , the transmission line matrix ( tlm ) method , etc .recently , ryu used the fdtd scheme to discretize the maxwell schrdinger equations ( [ eq:1.5 ] ) directly in the lorentz gauge to simulate a single electron in an artificial atom excited by an incoming electromagnetic field . but so far , there are rather limited studies on the numerical algorithms of the m - s system ( [ eq:1.5 ] ) as well as their convergence analysis . in this paper we will present a fully discrete finite element method for solving the problem ( [ eq:1.8])-([eq:1.9 ] ) and show that it is equivalent to a fully discrete crank nicolson scheme based on mixed finite element method .we will show that our scheme maintains the conservation properties of the original system .compared with the commonly used method which couples the maxwell s equations of field type with the schrdinger s equation and solves the system by the fdtd method , our method keeps the total charge and energy of the discrete system conserved and may suffer from less restriction in the time step - size since we use the crank nicolson scheme in the time direction . in this paperwe establish the optimal error estimates for the proposed method without any restrictions on the spatial mesh step and the time step . in generalit is very difficult to derive error estimates without any restrictions on the spatial mesh step and the time step for the highly complicated , nonlinear equations since the inverse inequalities are usually used to bound the nonlinear terms . in this paperwe avoid using the inverse inequalities due to two aspects .on the one hand , we deduce some stability estimates of the approximate solutions by using the conservation properties of our scheme .more importantly , we take advantage of the special structures of the system and make some difficult nonlinear terms in the schrdinger s equation and the maxwell s equations respectively cancel out . to the best of our knowledge ,this is the first theoretical analysis on the numerical algorithms for the m - s - c system ( [ eq:1.8 ] ) .the rest of this paper is organized as follows . in section [ sec-2 ]we introduce some notation and prove the existence of weak solutions to the m - s - c system ( [ eq:1.8])-([eq:1.9 ] ) . in section [ sec-3 ] ,we present two fully discrete finite element schemes for the m - s - c system and show that they are equivalent .section [ sec-4 ] is devoted to the proof of energy - conserving property of the discrete system and some stability estimates of the approximate solutions . in section [ sec-5 ], we prove the existence and uniqueness of solutions to the discrete system .the optimal error estimates without any restrictions on the time step are derived in section [ sec-6 ] .we provide some numerical experiments in section [ sec-7 ] to confirm our theoretical analysis .in this section , we study the existence of weak solutions to the m - s - c system ( [ eq:1.8 ] ) together with the initial - boundary conditions ( [ eq:1.9 ] ) in a bounded smooth domain . for simplicity , we introduce some notation below . for any nonnegative integer ,we denote as the conventional sobolev spaces of the real - valued functions defined in and as the subspace of consisting of functions whose traces are zero on . 
as usual , we denote , , and , respectively .we use and with calligraphic letters for sobolev spaces and lebesgue spaces of the complex - valued functions , respectively .furthermore , let ^{d} ] with bold faced letters be sobolev spaces and lebesgue spaces of the vector - valued functions with components ( =2,3 ) .the dual spaces of , , and are denoted by , , and , respectively . inner - products in , , and are denoted by without ambiguity . in particular , we consider the following subspaces of and : { \displaystyle \mathbf{h}^{1}_{t,0}(\omega)=\{\mathbf{v}\in\mathbf{h}_{t}^{1}(\omega ) \ , | \,\,\nabla \cdot \mathbf{v } = 0 \ } , } \\[2 mm ] { \displaystyle \mathbf{l}^{2}_{0}(\omega)=\{\mathbf{v}\in\mathbf{l}^{2}(\omega ) \ , | \,\,\nabla \cdot \mathbf{v } = 0 \ ; { \rm weakly}\,\ } } .\end{array}\ ] ] the semi - norms on and are defined by both of which are equivalent to the standard -norm . to take into account the time dependence , for any banach space and integer ,we define function spaces , \ ; w\big ) ] , , and , respectively .we now give two definitions of weak solution to the m - s - c system ( [ eq:1.8 ] ) together with the initial - boundary conditions ( [ eq:1.9 ] ) .[ def:2.1 ] is a weak solution of type to ( [ eq:1.8])-([eq:1.9 ] ) , if ; \mathcal{l}^{2}(\omega)\big)\cap l^{\infty}\big(0 , t ; \mathcal{h}_{0}^{1}(\omega)\big ) , \;\frac{\partial \psi}{\partial t } \in l^{\infty}\big(0 , t ; \mathcal{h}^{-1}(\omega)\big ) , \label{subeq:6 - 000}\\ \phi \in c\big([0,t ] ; { l}^{2}(\omega)\big)\cap l^{\infty}\big(0 , t ; h_{0}^{1}(\omega)\big),\;\frac{\partial \phi}{\partial t } \in l^{\infty}\big(0 , t ; l^{2}(\omega)\big ) , \label{subeq:6 - 001}\\ \mathbf{a } \in c\big([0,t ] ; \,\mathbf{l}^{2}(\omega)\big)\cap l^{\infty}\big(0 , t ; \,\mathbf{h}^{1}_{t,0}(\omega)\big ) , \label{subeq:6 - 002}\\ \frac{\partial \mathbf{a}}{\partial t } \in c\big([0,t ] ; \,\big(\mathbf{h}^{1}_{t,0}(\omega ) \big)^{\prime}\,\big)\cap l^{\infty}\big(0 , t;\ , \mathbf{l}^{2}(\omega)\big ) , \label{subeq:6 - 003 } \end{gathered}\ ] ] with the initial condition , , , and the variational equations \mathrm{d}t = 0,\ ] ] \mathrm{d}t= 0,\ ] ] \mathrm{d}t= 0,\ ] ] hold for all , and .[ def:2.2 ] is a weak solution of type to ( [ eq:1.8])-([eq:1.9 ] ) , if ( [ subeq:6 - 000 ] ) , ( [ subeq:6 - 001 ] ) , ( [ eq:6 - 01 ] ) and ( [ eq:6 - 03 ] ) in definition [ def:2.1 ] are satisfied and ; \,\mathbf{l}^{2}(\omega)\big)\cap l^{\infty}\big(0 , t ; \,\mathbf{h}_{t}^{1}(\omega)\big ) , \label{subeq:6 - 004}\\ \frac{\partial \mathbf{a}}{\partial t } \in c\big([0,t ] ; \,(\mathbf{h}_{t}^{1}(\omega))^{\prime}\big)\cap l^{\infty}\big(0 , t;\ , \mathbf{l}^{2}(\omega)\big ) , \label{subeq:6 - 005}\end{gathered}\ ] ] { \displaystyle \qquad \quad - \big(\frac{\partial \phi}{\partial t},\ , \nabla \cdot \widetilde{\mathbf{a}}\big ) + \big ( |\psi|^{2}\mathbf{a } , \,\widetilde{\mathbf{a}}\big)\bigg]\mathrm{d}t= 0 , \quad \forall \widetilde{\mathbf{a } } \in c_{0}^{2}\big((0,t);\,\mathbf{h}_{t}^{1}(\omega)\big ) } . \end{array}\ ] ] the following theorem shows that the above two definitons of weak solutions are equivalent .[ thm6 - 00 ] the weak solutions to the m - s - c system defined in definiton [ def:2.1 ] and definition [ def:2.2 ] are equivalent .it suffices to show that the vector potential in definiton [ def:2.1 ] and definiton [ def:2.2 ] are consistent . 
for any , by choosing in ( [ eq:6 - 01 ] ) , in ( [ eq:6 - 03 ] ) , and taking the imaginary part of ( [ eq:6 - 01 ] ) , we have \eta(t)\mathrm{d}t= 0 , } \\[2 mm ] { \displaystyle \displaystyle \int_{0}^{t}\big[\big(\frac{\partial\phi}{\partial t},\ , \delta \varphi\big ) + \big(\frac{\partial \rho}{\partial t},\ , \varphi\big)\big]\eta(t ) \mathrm{d}t= 0 , } \end{array}\ ] ] where since is arbitrary , from ( [ eq:6 - 05 ] ) we see that for any , we have the helmholtz decomposition : we first prove that the vector potential given by definition [ def:2.1 ] satisfies ( [ subeq:6 - 004 ] ) , ( [ subeq:6 - 005 ] ) , and ( [ eq:6 - 04 ] ) in definition [ def:2.2 ] . obviously satisfies ( [ subeq:6 - 004 ] ) . thanks to ( [ subeq:6 - 002 ] ) and ( [ subeq:6 - 003 ] ), we deduce that then ; \,(\mathbf{h}_{t}^{1}(\omega))^{\prime}\big)\ ] ] follows from ( [ subeq:6 - 003 ] ) , ( [ eq:6 - 08])-([eq:6 - 09 ] ) and thus satisfies ( [ subeq:6 - 005 ] ) . since , by using ( [ eq:6 - 07 ] ) , we have { \displaystyle \qquad \quad - \big(\frac{\partial \phi}{\partial t},\ , \nabla \cdot \widetilde{\mathbf{a}}\big ) \bigg]\mathrm{d}t= 0 , \ ; \forall \widetilde{\mathbf{a } } \in c_{0}^{2}\big((0,t);\,\nabla \big(h_{0}^{1}(\omega)\cap h^{2}(\omega)\big)\big ) . }\end{array}\ ] ] by applying ( [ eq:6 - 02 ] ) , ( [ eq:6 - 010 ] ) , and the helmholtz docomposition ( [ eq:6 - 08 ] ) , we find that satisfies ( [ eq:6 - 04 ] ) .next we assume that is the vector potential given by definition [ def:2.2 ] .it is easy to see that satisfies ( [ subeq:6 - 003 ] ) and ( [ eq:6 - 03 ] ) . for any ,take in ( [ eq:6 - 04 ] ) and employ ( [ eq:6 - 07 ] ) to find which implies that consequently satisfies ( [ subeq:6 - 002 ] ) and we complete the proof of this theorem .theorem [ thm6 - 00 ] shows that weak solutions satisfy implicity .next we use the galerkin method and compactness arguments to prove the existence of weak solutions to ( [ eq:1.8])-([eq:1.9 ] ) .we first introduce two lemmas to construct finite dimensional subspaces of , , and .[ lem6 - 0 ] suppose that is a bounded smooth domain .then there exists a sequence being an orthogonal basis of as well as an orthonormal basis of . here is an eigenfunction corresponding to : { \displaystyle u_k = 0 \quad on \;\partial\omega , } \end{array } \right.\ ] ] for the proof of lemma [ lem6 - 0 ] is given in .it is worth pointing out that the conclusion is also true for complex - valued functions , i.e. 
there exists a sequence being an orthogonal basis of as well as an orthonormal basis of .[ lem6 - 1 ] suppose that is a bounded smooth domain .then there exists an orthonormal basis of , where is an eigenfunction corresponding to : { \displaystyle \mathbf{v}_k \times\mathbf{n}= 0 \quad on \;\partial\omega , } \end{array } \right.\ ] ] for furthermore , is an orthogonal basis of .let be defined by by the lax - milgram theorem , exists and is bounded .since is compactly embedded into , is a bounded , linear , compact operator mapping into itself .it is easy to show that is self - adjoint .then by the hilbert - schmidt theorm , there exists a countable orthonormal basis of consisting of eigenfunctions of .the proof of being an orthogonal basis of is straightfoward .let , , and be n - dimensional subspaces of , , and , respectively , where , , and are given in lemma [ lem6 - 0 ] and [ lem6 - 1 ] .for each , we can construct the galerkin approximate solutions of weak solutions to the m - s - c system in the sense of definition [ def:2.1 ] as follows .find , , and such that { \displaystyle ( \frac{\partial ^{2}\mathbf{a}_n}{\partial t^{2}},\mathbf{v}_j)+(\nabla\times \mathbf{a}_n,\nabla\times\mathbf{v}_j ) + \big(\frac{\mathrm{i}}{2}(\psi_n^{*}\nabla{\psi_n}-\psi_n\nabla{\psi}_n^{*}),\mathbf{v}_j\big ) } \\[2 mm ] { \displaystyle \qquad \qquad + ( |\psi_n|^{2}\mathbf{a}_n,\mathbf{v}_j)=0 , } \\[2 mm ] { \displaystyle ( \nabla \phi_{n},\,\nabla u_j ) = ( |\psi_{n}|^{2},\,u_j ) , } \\[2 mm ] { \displaystyle \psi_n(0 ) = \mathcal{p}_n\psi_0 , \,\,\mathbf{a}_n(0 ) = \mathbf{p}_n\mathbf{a}_0,\,\ , \frac{\partial\mathbf{a}_n}{\partial t}(0 ) = \mathbf{p}_n\mathbf{a}_1 } \end{array } \right.\ ] ] for any , . here and denote the orthogonal projection onto and , respectively . using the local existence and uniqueness theory on odes, we can show that the nonlinear differential system ( [ eq:6 - 2 ] ) has a unique local solution defined on some interval ] . in this paper, the following lemma will be used frequently .[ lem6 - 2 ] let .suppose that , is a bounded lipschitz domain .then for each , there exists some constant depending on ( and on and ) such that lemma [ lem6 - 0 ] can be proved by applying sobolev s embedding thorems , poincar s inequality , and the following lemma in .[ lem6 - 3 ] let , , and be three banach spaces such that , the injection of into being continuous , and the injection of into is compact .then for each , there exists some constant depending on ( and on the spaces , , ) such that we define the energy of the approximate system ( [ eq:6 - 2 ] ) as follows .\psi_{n}\big\vert^{2 } + v|\psi_{n}|^{2 } + \frac{1}{2}\big|\frac{\partial \mathbf{a}_{n}}{\partial t}\big|^{2 } } \\[2 mm ] { \displaystyle \qquad \qquad \quad + \frac{1}{2}|\nabla \times \mathbf{a}_{n}|^{2 } + \frac{1}{2}|\nabla \phi_{n}|^{2 } \big)\mathrm{d}\mathbf{x}. } \end{array}\ ] ] [ lem6 - 4 ] for any ] , if the solution exists , then it satisfies the estimates where is independent of and . by the definition of initial data , it is easy to show that .thus by applying ( [ eq:6 - 3 - 0 ] ) , we have since the semi - norm in is equivalent to -norm , we get then sobolev s imbedding theorem implies that with for and for . 
using lemma [ lem6 - 0 ], we further prove { \displaystyle } .\ ] ] from ( [ eq:6 - 8 ] ) , ( [ eq:6 - 11 ] ) and lemma [ lem6 - 1 ] , we deduce consequently , we obtain by applying poincar s inequality and ( [ eq:6 - 8 ] ) , we see that therefore , ( [ eq:6 - 6 ] ) is proved by combining ( [ eq:6 - 9 ] ) , ( [ eq:6 - 12 ] ) , and ( [ eq:6 - 13 ] ) . to estimate , we first fix with .note that is an orthogonal basis of as well as an orthonormal basis of .thus we can write , where and for it is clear that . then the first equation of ( [ eq:6 - 2 ] ) implies that { \displaystyle = -\frac{{\rm i}}{2}(\left(\mathrm{i}\nabla + \mathbf{a}_n\right)\psi_n,\left(\mathrm{i}\nabla + \mathbf{a}_n\right ) { \omega}^{1 } ) -{\rm i } ( v\psi_n , { \omega}^{1 } ) -{\rm i } ( \phi_{n}\psi_{n } , { \omega}^{1 } ) } .\end{array}\ ] ] thus by applying ( [ eq:6 - 6 ] ) , we obtain which implies that similarly , we can prove it remains to show in order to estimate , we fix and find such that then by differentiating both sides of the third equation of ( [ eq:6 - 2 ] ) with respect to and using ( [ eq:6 - 17 ] ) , we have thus from ( [ eq:6 - 15 ] ) and ( [ eq:6 - 12 ] ) , we deduce next we will prove . note that each basis function satisfies . by ( [ eq:6 - 17 ] ) , we have it follows that which implies that it is easy to show .hence we get combining ( [ eq:6 - 18 ] ) and ( [ eq:6 - 19 ] ) , we find thus we complete the proof of theorem [ thm6 - 0 ] . using the above energy estimates ,we have given , the nonlinear differential system ( [ eq:6 - 2 ] ) has a unique global solution , which satisfies ( [ eq:6 - 6 ] ) and ( [ eq:6 - 7 ] ) .we now quote a compactness lemma which can be found in .[ lem6 - 5 ] let , , be three banach spaces such that with continuous embedding and the embedding is compact .suppose is a bounded set in such that is bounded in for some .then is relatively compact in ; b) ] , ; \mathbf{l}^{p}(\omega))\cap l^{\infty}((0 , t ) ; \mathbf{h}^{1}_{t,0}(\omega)) ] and a subsequence such that as ; \mathcal{l}^{p}(\omega ) ) , \;\ ; \psi_{n_k}\rightharpoonup \psi \;in\ ; l^{\infty}(0 , t ; \mathcal{h}_{0}^{1}(\omega ) ) \ ; weak\text{-}star , } \\[2 mm ] { \displaystyle \mathbf{a}_{n_k}\rightarrow \mathbf{a } \ ; in\ ; c([0,t ] ; \mathbf{l}^{p}(\omega)),\;\ ; \mathbf{a}_{n_k}\rightharpoonup \mathbf{a } \ ; in\ ; l^{\infty}(0,t ; \mathbf{h}^{1}_{t,0}(\omega ) ) \ ; weak\text{-}star , } \\[2 mm ] { \displaystyle \phi_{n_k } \rightarrow \phi \;in \;c([0,t ] ; { l}^{p}(\omega ) ) , \;\ ; \phi_{n_k}\rightharpoonup \phi \;in\ ; l^{\infty}(0 , t ; { h}_{0}^{1}(\omega ) ) \ ; weak\text{-}star . } \end{array}\ ] ] here for and for .furthermore , we have the following convergence properties for time derivatives of . 
{ \displaystyle \frac{\partial \phi_{n_k } } { \partial t } \rightharpoonup \frac{\partial \phi } { \partial t } \;in\ ; l^{\infty}(0 , t ; { l}^{2}(\omega ) ) \ ; \ ; weak\text{-}star , } \\[2 mm ] { \displaystyle \frac{\partial \mathbf{a}_{n_k } } { \partial t } \rightharpoonup \frac{\partial \mathbf{a } } { \partial t } \;in\ ; l^{\infty}(0 , t ; \mathbf{l}^{2}(\omega ) ) \;\ ; weak\text{-}star,}\\[2 mm ] { \displaystyle \frac{\partial^{2 } \mathbf{a}_{n_k } } { \partial t^{2 } } \rightharpoonup \frac{\partial ^{2 } \mathbf{a } } { \partial t^{2 } } \;in\ ; l^{\infty}(0 , t ; ( \mathbf{h}^{1}_{t,0}(\omega))^{\prime } ) \ ; \ ; weak\text{-}star}.\\[2 mm ] \end{array}\ ] ] passing to the limits in our galerkin appriximations , we obtain [ thm6 - 1 ] given , there exists a weak solution to the m - s - c system ( [ eq:1.8])-([eq:1.9 ] ) in the sense of definition [ def:2.1 ] , which satisfies the conservation of the total energy : where is given in ( [ eq:1.10 ] ) .here we omit the proof of the weak limit satisfies ( [ eq:6 - 01])-([eq:6 - 03 ] ) since the technique is standard . by making a slight modification of the proof in this section ,theorem [ thm6 - 1 ] holds for the m - s - c system with bounded coefficients : { \displaystyle \frac{\partial ^{2}\mathbf{a}}{\partial t^{2}}+\nabla\times ( \mu^{-1}(\mathbf{x})\nabla\times \mathbf{a } ) + \frac{\partial ( \nabla \phi)}{\partial t}+\frac{\mathrm{i}}{2m(\mathbf{x})}\big(\psi^{*}\nabla{\psi}-\psi\nabla{\psi}^{*}\big ) } \\[2 mm ] { \displaystyle \qquad\qquad+\frac{1}{m(\mathbf{x})}\vert\psi\vert^{2}\mathbf{a}=0 , \,\,\quad ( \mathbf{x},t)\in \omega\times(0,t),}\\[2 mm ] { \displaystyle \nabla \cdot \mathbf{a } = 0 , \quad -\delta \phi = \vert\psi\vert^{2},\,\ , ( \mathbf{x},t)\in \omega\times(0,t ) } , \end{array } \right.\ ] ] where , .in particular , theorem [ thm6 - 1 ] holds for the m - s - c system with rapidly oscillating discontinuous coefficients arising from the modeling of a heterogeneous structure with a periodic microstructure , i.e. , , where , are 1-periodic in and is a small parameter .furthermore , if , the initial energy , is independent of , then is also independent of .in this section , we consider the fully discretization of the m - s - c system ( [ eq:1.8])-([eq:1.9 ] ) by the galerkin finite element method in space together with the crank - nicolson scheme in time . in the following of the paper , we assume that is a bounded lipschitz polyhedron convex domain in .let us first triangulate the space domain and assume that is a regular partition of into tetrahedrons of maximal diameter . without loss of generality ,we assume that .we denote by the space of polynomials of degree defined on the element . in the rest of this paper , we assume that unless otherwise specified . for a given partition , we define the classical lagrange finite element space we have the following finite element subspaces of , , and let , , and be the commonly used lagrange interpolation on , , and , respectively . 
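The standard Lagrange interpolation error estimates recalled next can be checked numerically in a drastically simplified setting. The sketch below uses piecewise-linear interpolation of a smooth function on a 1D uniform mesh, so it corresponds to degree r = 1 rather than the higher-order spaces used in this paper; the test function, mesh sizes and quadrature are assumptions chosen only to exhibit the expected convergence rate r + 1 in the L2 norm.

```python
import numpy as np

# 1D illustration of the interpolation estimate  ||u - I_h u||_{L^2} <= C h^{r+1} |u|_{H^{r+1}}
# for piecewise-linear Lagrange interpolation (r = 1).
u = np.sin

def l2_interp_error(n):
    nodes = np.linspace(0.0, 1.0, n + 1)       # uniform mesh with h = 1/n
    xq = np.linspace(0.0, 1.0, 20 * n + 1)     # fine evaluation grid for the error norm
    uh = np.interp(xq, nodes, u(nodes))        # piecewise-linear nodal interpolant I_h u
    dx = xq[1] - xq[0]
    return np.sqrt(np.sum((u(xq) - uh) ** 2) * dx)

e1, e2 = l2_interp_error(16), l2_interp_error(32)
print(e1, e2, "observed rate:", np.log2(e1 / e2))  # rate close to 2 = r + 1 for r = 1
```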
for , , we have the following interpolation error estimates : we approximate the scalar potential and the wave function in and respectively , and find the approximate solution of the vector potential in a subspace of : it is important to note that since for each , we only have , where is the orthogonal projection of onto we now claim that there exists an interpolation operator , such that for every , by the mixed finite element theory , we can ensure ( [ eq:2 - 0 ] ) by applying ( [ subeq:2.02 ] ) and the following discrete inf - sup condition : there exists a positive constant , independent of , such that for , , the following discrete inf - sup condition for hood - taylor element is proved in by verfrth s trick : the technique used in the proof of ( [ eq:2 - 1 - 0 ] ) can be applied directly to prove ( [ eq:2 - 1 ] ) by virtue of the fact that , and the following continuous inf - sup condition : for more details , see . thus ( [ eq:2 - 0 ] ) is verified .let be a ritz projection as follows : , find such that owing to ( [ eq:2 - 0 ] ) , we have the following error estimate of . to define our fully discrete scheme , we divide the time interval into uniform subintervals using the nodal points with and .we denote for any given functions with a banach space . for a given sequence , we introduce the following notation : { \displaystyle \overline{u}^{k } = ( u^{k}+u^{k-1})/2 , \quad \widetilde{u}^{k}=(u^{k}+u^{k-2})/2,}\\[2 mm ] \end{array}\ ] ] for convenience , let us assume that is defined by which is an approximation of with second order accuracy . using the above notation , we can formulate our first fully discrete finite element scheme for the m - s - c system as follows .scheme ( ) . , find such that and for any , , , the following equations hold : { \displaystyle ( \partial_{\tau}^{2}\mathbf{a}_{h}^{k},\mathbf{v } ) + \big(\nabla\times \widetilde{\mathbf{a}}_{h}^{k},\nabla\times\mathbf{v}\big ) + \big(\nabla \cdot \widetilde{\mathbf{a}}_{h}^{k},\nabla\cdot\mathbf{v}\big)+\big(|\psi_{h}^{k-1}|^{2}\frac{\overline{\mathbf{a}}_{h}^{k } + \overline{\mathbf{a}}_{h}^{k-1}}{2},\ , \mathbf{v}\big)}\\[2 mm ] { \displaystyle\quad + \big(\frac{\mathrm{i}}{2}\big((\psi_{h}^{k-1})^{\ast}\nabla{\psi_{h}^{k-1 } } -\psi_h^{k-1}\nabla{(\psi_{h}^{k-1})}^{\ast}\big),\mathbf{v}\big ) = 0,}\\[2 mm ] { \displaystyle ( \nabla { \phi}_{h}^{k } , \,\nabla u ) = ( \vert\psi_{h}^{k}\vert^{2},\,u)}. \end{array } \right.\ ] ] in section [ sec-2 ] we prove that weak solutions in definition [ def:2.1 ] and definition [ def:2.2 ] are equivalent .however , from the standpoint of finite element numerical computation , we choose to approximate weak solutions of the m - s - c system in the sense of definition [ def:2.2 ] instead of definition [ def:2.1 ] since it is very difficult to construct finite element subspaces of .we know that weak solutions of type in definition [ def:2.2 ] imply that is divergence - free although we only require in . in the discrete level , if we approximate in , it is difficult to degisn time integration schemes to ensure a discrete analogue of , i.e. 
, where denotes the orthogonal projection of onto some finite element space .thus we approximate in to enforce the projection of onto vanishes .moreover , for the purpose of theoretical analysis , we add an extra term to the discrete system ( [ eq:2 - 6 ] ) .it turns out that this term is indispensable to the proof of the error estimates and the existence of solutions to the discrete system ( [ eq:2 - 6 ] ) .apart from introducing the subspace of , we can also introduce a lagrangian multiplier to relax the divergence - free constraint of at each time step .we now give another fully discrete scheme based on the mixed finite element method as follows . and for , find satisfying { \displaystyle \qquad \qquad\qquad \qquad\qquad \qquad\qquad \qquad\qquad \quad \qquad \quad\qquad\quad \forall \varphi\in\mathcal{x}_{h}^{r},}\\[2 mm ] { \displaystyle ( \partial_{\tau}^{2}\mathbf{a}_{h}^{k},\mathbf{v } ) + \big(\nabla\times \widetilde{\mathbf{a}}_{h}^{k},\nabla\times\mathbf{v}\big ) + \big(\nabla \cdot \widetilde{\mathbf{a}}_{h}^{k},\nabla\cdot\mathbf{v}\big)+\big(|\psi_{h}^{k-1}|^{2}\frac{\overline{\mathbf{a}}_{h}^{k } + \overline{\mathbf{a}}_{h}^{k-1}}{2},\ , \mathbf{v}\big)}\\[2 mm ] { \displaystyle\quad + \big(p_{h}^{k } , \ , \nabla \cdot \mathbf{v}\big ) + \big(\frac{\mathrm{i}}{2}\big((\psi_{h}^{k-1})^{\ast}\nabla{\psi_{h}^{k-1 } } -\psi_h^{k-1}\nabla{(\psi_{h}^{k-1})}^{\ast}\big),\,\mathbf{v}\big ) = 0 , \quad \forall\mathbf{v}\in\mathbf{x}_{h},}\\[2 mm ] { \displaystyle \big(\nabla \cdot \mathbf{a}_{h}^{k},\ , q\big ) = 0 , \quad \forall\ ,q \in x_{h}^{1 } , } \\[2 mm ] { \displaystyle ( \nabla { \phi}_{h}^{k } , \,\nabla u ) = ( \vert\psi_{h}^{k}\vert^{2},\,u),\quad\forall \;u \in x_{h}^{r}}. \end{array } \right.\ ] ] at each time step , the equation { \displaystyle\quad + \big(\frac{\mathrm{i}}{2}\big((\psi_{h}^{k-1})^{\ast}\nabla{\psi_{h}^{k-1 } } -\psi_h^{k-1}\nabla{(\psi_{h}^{k-1})}^{\ast}\big),\mathbf{v}\big ) = 0 , \quad \forall\mathbf{v}\in\mathbf{x}_{0h } } \end{array}\ ] ] in scheme ( ) and { \displaystyle\quad + \big(p_{h}^{k } , \ , \nabla \cdot \mathbf{v}\big ) + \big(\frac{\mathrm{i}}{2}\big((\psi_{h}^{k-1})^{\ast}\nabla{\psi_{h}^{k-1 } } -\psi_h^{k-1}\nabla{(\psi_{h}^{k-1})}^{\ast}\big),\,\mathbf{v}\big ) = 0 , \quad \forall\mathbf{v}\in\mathbf{x}_{h},}\\[2 mm ] { \displaystyle \big(\nabla \cdot \mathbf{a}_{h}^{k},\ , q\big ) = 0 , \quad \forall\ , q \in x_{h}^{1 } } \end{array } \right.\ ] ] in scheme ( ) are decoupled from the other two equations , respectively . due to the discrete inf - sup condition ( [ eq:2 - 1 ] ) and the coercivity of the bilinear functional in , where { \displaystyle \qquad \qquad + \frac{1}{2}\big(|\psi_{h}^{k-1}|^{2}{\mathbf{u}},\ , \mathbf{v}\big ) , \qquad \forall \, \mathbf{u } , \mathbf{v } \in \mathbf{x}_{h } , } \end{array}\ ] ] we know that there exists a unique solution to ( [ eq:2 - 6 - 1 ] ) and ( [ eq:2 - 6 - 2 ] ) , respectively .it is easy to see that in ( [ eq:2 - 6 - 2 ] ) satisfies ( [ eq:2 - 6 - 1 ] ) and thus the two above equations admit the same solution .consequently , scheme ( ) and scheme ( ) are mathematically equivalent .however , scheme ( ) is easier to perform theoretical analysis while scheme ( ) is easier to carry out numerical computation . at each time step, we first solve ( [ eq:2 - 6 - 1 ] ) or ( [ eq:2 - 6 - 2 ] ) and obtain .then we substitute it into ( [ eq:2 - 6 - 0 ] ) and solve the nonlinear subsystem concerning and .the existence and uniqueness of solutions to this subsystem is proved in section [ sec-5 ] . 
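The overall structure of one time level, a linear solve for the vector potential followed by an iterative solve of the nonlinear wave-function and scalar-potential subsystem, can be sketched in a drastically simplified 1D analogue. The sketch below drops the vector potential and the finite element discretisation entirely, replaces them by finite differences, and resolves the nonlinear coupling of a Crank-Nicolson step by a simple fixed-point (Picard) iteration, which is one of the practical options mentioned in the following section. The grid sizes, the Gaussian initial state, the choice of mid-level potential and the tolerances are all assumptions for illustration, and this is not the scheme of this paper itself.

```python
import numpy as np

# Toy 1D analogue of one time step: Crank-Nicolson for  i psi_t = -1/2 psi_xx + phi psi
# coupled to  -phi_xx = |psi|^2, with a Picard iteration for the nonlinear coupling.
n, tau = 200, 1e-3
x = np.linspace(0.0, 1.0, n + 2)[1:-1]            # interior nodes, homogeneous Dirichlet BC
h = x[1] - x[0]
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2        # 1D Dirichlet Laplacian

def poisson(rho):
    """Solve -phi'' = rho with homogeneous Dirichlet boundary conditions."""
    return np.linalg.solve(-L, rho)

def crank_nicolson_step(psi_old, tol=1e-10, max_iter=50):
    """One CN step; the psi-phi coupling is resolved by Picard (fixed-point) iteration."""
    psi_new = psi_old.copy()                      # initial Picard guess
    for _ in range(max_iter):
        phi_bar = poisson(0.5 * (np.abs(psi_new) ** 2 + np.abs(psi_old) ** 2))
        H = -0.5 * L + np.diag(phi_bar)           # Hamiltonian frozen at the time midpoint
        A = np.eye(n) + 0.5j * tau * H            # (I + i tau/2 H) psi^k = (I - i tau/2 H) psi^{k-1}
        b = (np.eye(n) - 0.5j * tau * H) @ psi_old
        psi_next = np.linalg.solve(A, b)
        if np.linalg.norm(psi_next - psi_new) < tol:
            return psi_next, phi_bar
        psi_new = psi_next
    return psi_new, phi_bar

psi = np.exp(-100.0 * (x - 0.5) ** 2).astype(complex)
psi /= np.sqrt(h) * np.linalg.norm(psi)           # normalize the discrete L^2 norm to 1
for _ in range(10):
    psi, phi = crank_nicolson_step(psi)
# With a converged Picard loop the CN step is a unitary Cayley transform, so the
# discrete charge is preserved, mirroring the conservation property discussed below.
print("discrete charge after 10 steps:", h * np.linalg.norm(psi) ** 2)
```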
in practical computations, we can apply the picard simple iterative method or the newton iterative method to solve the nonlinear subsystem . for convenience ,we define the following bilinear forms : { \displaystyle d(\mathbf{u},\,\mathbf{v})=(\nabla\cdot \mathbf{u},\,\nabla\cdot \mathbf{v})+ ( \nabla\times\mathbf{u},\,\nabla\times\mathbf{v}),}\\[2 mm ] { \displaystyle f(\psi,\varphi)=\frac{\mathrm{i}}{2}(\varphi^{\ast}\nabla\psi-\psi\nabla\varphi^{\ast } ) . }\end{array}\ ] ] then ( [ eq:2 - 6 ] ) in scheme ( ) can be rewritten as follows : for , { \displaystyle ( \partial_{\tau } ^{2}\mathbf{a}_{h}^{k},\mathbf{v})+d(\widetilde{\mathbf{a}}_{h}^{k},\mathbf{v } ) + \left ( f(\psi_{h}^{k-1},\psi_h^{k-1}),\mathbf{v}\right ) + \big(|\psi_{h}^{k-1}|^{2}\frac{\overline{\mathbf{a}}_{h}^{k } + \overline{\mathbf{a}}_{h}^{k-1}}{2},\mathbf{v}\big)=0 , } \\[2 mm ] { \displaystyle \qquad \qquad\qquad \qquad\qquad \qquad\qquad \qquad\qquad \quad \qquad \quad\qquad \quad \forall\mathbf{v}\in\mathbf{x}_{0h } , } \\[2 mm ] { \displaystyle ( \nabla { \phi}_{h}^{k},\,\nabla u ) = ( \vert\psi_{h}^{k}\vert^{2},\,u),\quad\forall u \in x_{h}^{r}. } \end{array } \right.\ ] ] in this paper we assume that the m - s - c system ( [ eq:1.8])-([eq:1.9 ] ) has one and only one weak solution in the sense of definition [ def:2.2 ] and the following regularity conditions are satisfied : { \displaystyle \qquad \psi_{tttt } \in l^{2}(0 , t ; \mathcal{l}^{2}(\omega ) ) , } \\[2 mm ] { \displaystyle \mathbf{a},\mathbf{a}_{t } \in { l}^{\infty}(0 , t ; \mathbf{h}^{r+1}(\omega ) ) , \quad \mathbf{a}_{tt } \in { l}^{\infty}(0 , t ; \mathbf{h}^{1}(\omega ) ) } \\[2 mm ] { \displaystyle\mathbf{a}_{ttt } \in { l}^{2}(0 , t ; \mathbf{h}^{1}(\omega)),\;\ ; \mathbf{a}_{tttt } \in l^{2}(0 , t ; \mathbf{l}^{2}(\omega)),}\\[2 mm ] { \displaystyle \phi,\phi_{t}\in { l}^{\infty}(0 , t ; { h}^{r+1}(\omega ) ) , \;\;\phi_{tt } \in { l}^{\infty}(0 , t ; { h}^{1}(\omega)),}\\[2 mm ] { \displaystyle \phi_{ttt}\in { l}^{\infty}(0 , t ; { l}^{2}(\omega ) ) , \quad \phi_{tttt}\in { l}^{2}(0 , t ; { l}^{2}(\omega ) ) . } \end{array}\ ] ] for the initial conditions , we assume that we now give the main convergence result in this paper as follows : [ thm2 - 1 ] suppose that is a bounded lipschitz polyhedral convex domain . let be the unique solution to the m - s - c system ( [ eq:1.8])-([eq:1.9 ] ) , and let be the numerical solution to the discrete system ( [ eq:2 - 5])-([eq:2 - 6 ] ) . under the assumptions ( [ eq:2 - 9 ] ) and ( [ eq:2 - 10 ] ) , we have the following error estimates \leq c ( h^{2r}+{\tau}^{4 } ) , } \end{array}\ ] ] where , , , , and is a constant independent of and .in this section we first show the discrete system ( [ eq:2 - 5])-([eq:2 - 6 ] ) maintains the conservation of the total charge and energy .then we deduce some stability estimates of the discrete solutions , which will be used to derive the error estimates in next section .first we define the energy of the discrete system ( [ eq:2 - 5])-([eq:2 - 6 ] ) as follows : { \displaystyle \qquad \qquad \qquad + \frac{1}{4 } d(\mathbf{a}_{h}^{k},\,\mathbf{a}_{h}^{k } ) + \frac{1}{4 } d(\mathbf{a}_{h}^{k-1},\,\mathbf{a}_{h}^{k-1 } ). } \end{array}\ ] ] lemma [ lem3 - 1 ] and theorem [ thm3 - 1 ] in the following are the discrete analogues of lemma [ lem6 - 4 ] and theorem [ thm6 - 0 ] , respectively .[ lem3 - 1 ] for , the solution of the discrete system ( [ eq:2 - 5])-([eq:2 - 6 ] ) satisfies the proof the this lemma is very similar to its continuous counterpart . 
for we can simply choose in and take its imaginary part . to prove ,we first notice that =\frac{1}{2}\partial_{\tau } b(\overline{\mathbf{a}}_{h}^{k};{\psi}_{h}^{k},\psi_{h}^{k } ) } \\[2 mm ] { \displaystyle \quad\quad+ \frac{1}{2\tau}\left[b(\overline{\mathbf{a}}_{h}^{k-1};{\psi}_{h}^{k-1},\psi_{h}^{k-1})-b(\overline{\mathbf{a}}_{h}^{k};{\psi}_{h}^{k-1},\psi_{h}^{k-1})\right]}\\[2 mm ] { \displaystyle \quad\quad+ \frac{1}{2 \tau}\mathrm{re}\left[b(\overline{\mathbf{a}}_{h}^{k};\psi_{h}^{k-1},\psi_{h}^{k})-b(\overline{\mathbf{a}}_{h}^{k};\psi_{h}^{k},\psi_{h}^{k-1})\right ] . }\end{array}\ ] ] we also have the following identities by direct calculations { \displaystyle b(\mathbf{a};\psi,\varphi)-b(\tilde{\mathbf{a}};\psi,\varphi)=\left((\mathbf{a}+\tilde{\mathbf{a}})\psi \varphi^{*},\mathbf{a}-\tilde{\mathbf{a}}\right)+2(f(\psi,\varphi),\mathbf{a}-\tilde{\mathbf{a } } ) , } \end{array}\ ] ] from which we deduce = 0.\ ] ] thus we get =\frac{1}{2}\partial_{\tau } b(\overline{\mathbf{a}}_{h}^{k};{\psi}_{h}^{k},\psi_{h}^{k})}\\[2 mm ] { \displaystyle - \left(|\psi_{h}^{k-1}|^{2}\frac{\overline{\mathbf{a}}_{h}^{k}+\overline{\mathbf{a}}_{h}^{k-1}}{2},\frac{\overline{\mathbf{a}}_{h}^{k}-\overline{\mathbf{a}}_{h}^{k-1}}{\tau}\right)-\left(f(\psi_{h}^{k-1},\psi_{h}^{k-1}),\frac{\overline{\mathbf{a}}_{h}^{k}-\overline{\mathbf{a}}_{h}^{k-1}}{\tau}\right ) . }\end{array}\ ] ] also we have =\frac{1}{2}\partial_{\tau}\big(v\psi_{h}^{k},\psi_{h}^{k}\big),\quad \mathrm{re}\left[\big(\overline{\phi}_{h}^{k}\overline{\psi}_{h}^{k},\partial_{\tau } \psi_{h}^{k}\big)\right]=\frac{1}{2}\big(\overline{\phi}_{h}^{k } , \,\partial_{\tau } |\psi_{h}^{k}|^{2}\big).\ ] ] by choosing in , taking the real part of the equation and combining with ( [ eq:3.7 ] ) and ( [ eq:3.8 ] ) , we get { \displaystyle - \left(|\psi_{h}^{k-1}|^{2}\frac{\overline{\mathbf{a}}_{h}^{k}+\overline{\mathbf{a}}_{h}^{k-1}}{2},\frac{\overline{\mathbf{a}}_{h}^{k}-\overline{\mathbf{a}}_{h}^{k-1}}{\tau}\right)-\left(f(\psi_{h}^{k-1},\psi_{h}^{k-1}),\frac{\overline{\mathbf{a}}_{h}^{k}-\overline{\mathbf{a}}_{h}^{k-1}}{\tau}\right)=0 . }\end{array}\ ] ] next by taking and adding it to ( [ eq:3.9 ] ) , we obtain { \displaystyle \quad+\partial_{\tau}\left(\frac{1}{4 } \vert \nabla\times\mathbf{a}_{h}^{k}\vert_{\mathbf{l}^{2}}^{2 } + \frac{1}{4}\vert \nabla\times\mathbf{a}_{h}^{k-1}\vert_{\mathbf{l}^{2}}^{2}\right)+\big(\overline{\phi}_{h}^{k } , \,\partial_{\tau } |\psi_{h}^{k}|^{2}\big ) = 0 . }\end{array}\ ] ] finally it is easy to deduce the following equation from the last equation of : take in ( [ eq:3.11 ] ) , insert it into ( [ eq:3.10 ] ) and we complete the proof of .[ thm3 - 1 ] the solution of the discrete system ( [ eq:2 - 7 ] ) fulfills the following estimate where is independent of , and .the proof of this theorem is very similar to its continuous counterpart , i.e. theorem [ thm6 - 0 ] , and thus we omit the proof .in this section we consider the existence and uniqueness of the solutions to the discrete system ( [ eq:2 - 5])-([eq:2 - 6 ] ) . to prove it ,we first introduce a useful lemma in as follows .[ lem5 - 0 ] let be a finite - dimensional inner product space , be the associated norm , and be continuous .assume that then there exists a such that and .[ thm5 - 0 ] for any satisfies ( [ eq:3.12 ] ) , there exists a solution to the discrete system ( [ eq:2 - 6 ] ) .furthermore , if the time step is sufficiently small , the solution is unique . 
as noted in section [ sec-3 ] , to solve the discrete system ( [ eq:2 - 5])-([eq:2 - 6 ] ) , we need to solve ( [ eq:2 - 6 - 1 ] ) and the following subsystem alternately . { \displaystyle \qquad \qquad\qquad \qquad\qquad \qquad\qquad \qquad\qquad \quad \qquad \quad\qquad \quad \forall \varphi\in\mathcal{x}_{h}^{r},}\\[2 mm ] { \displaystyle ( \nabla { \phi}_{h}^{k } , \,\nabla u ) = ( \vert\psi_{h}^{k}\vert^{2},\,u),\quad\forall u \in x_{h}^{r } } , \end{array } \right.\ ] ] since we have proved the solvability of ( [ eq:2 - 6 - 1 ] ) in section [ sec-3 ] , we only need to consider the solvability of ( [ eq:5.1 ] ) , which can be rewritten as follows : { \displaystyle ( \nabla \overline{\phi}_{h}^{k } , \,\nabla u ) = \big(\frac{1}{2}(|\psi_{h}^{k-1}|^{2 } + |2\overline{\psi}_{h}^{k } - \psi_{h}^{k-1}|^{2}),\ , u\big),\quad\forall u \in x_{h}^{r}}. \end{array } \right.\ ] ] for a given , assume that is a basis of .note that .then , , and can be written as follows where denote by { \displaystyle s = \big(s_{ij}\big)\in \mathbb{r}^{n\times n } , \quad q = \big(q_{ij}\big)\in \mathbb{r}^{n\times n } , } \end{array}\ ] ] where { \displaystyle s_{ij } = b(\overline{\mathbf{a}}_{h}^{k};\ , u_i,\,u_j ) , \quad q_{ij } = \big ( ( v + \overline{\phi}_{h}^{k})u_i , \ , u_j\big )\quad i , j = 1,\cdots , n. } \end{array}\ ] ] using the above notation , we can write ( [ eq:5.2 ] ) in the form of matrix : or a more compact form now we define a finite dimensional space and a mapping as following : { \displaystyle g(\vec{e } ) = \vec{e}-\vec{c}+{\rm i}\frac{\tau}{4}w^{-1}s\vec{e } + { \rm i}\frac{\tau}{2}w^{-1}q(\vec{e})\vec{e } , \quad \forall \vec{e}\in h , } \end{array}\ ] ] where denotes the conjugate transpose of and , and are defined in ( [ eq:5.3])-([eq:5.4 ] ) .obviously is continuous .moreover , which implies that thus the existence of and for ( [ eq:5.6 ] ) follows from lemma [ lem5 - 0 ] . combining with the existence of , we obtain the existence of to the discrete system ( [ eq:2 - 6 ] ) .now we study the uniqueness of the solutions to ( [ eq:5.1 ] ) .let and be two solutions of ( [ eq:5.1 ] ) .set they satisfy { \displaystyle \qquad \qquad\qquad \qquad\qquad\qquad\qquad \qquad\qquad \quad \qquad \quad\qquad \quad \forall \varphi\in\mathcal{x}_{h}^{r},}\\[2 mm ] { \displaystyle ( \nabla \psi , \,\nabla u ) = ( ( \psi_{h}^{k})^{\ast}\eta + { \eta}^{\ast}\hat{\psi}_{h}^{k } , \,u),\quad\forall u \in x_{h}^{r } } , \end{array } \right.\ ] ] by choosing in the first equation of ( [ eq:5.7 ] ) and taking its imaginary part , we obtain take in the second equation of ( [ eq:5.7 ] ) and we get by substituting ( [ eq:5.9 ] ) into ( [ eq:5.8 ] ) and taking sufficiently small , we find and consequently obtain the uniqueness of the solutions .we now turn to the proof of theorem [ thm2 - 1 ] .set , and , where , and are defined in section [ sec-3 ] . from the regularity assumption and the interpolation error estimates ( [ subeq:2.00 ] ) , ( [ subeq:2.01 ] ) and ( [ eq:2 - 2 ] ) , we deduce by using standard finite element theory and the regularity assumption ( [ eq:2 - 9 ] ) , we also have in the rest of this paper , we need the following discrete integration by parts formulas . { \displaystyle \sum_{k=1}^{m}{(a_{k}-a_{k-1})b_{k}}=a_{m}b_{m}-a_{0}b_{0}-\sum_{k=1}^{m}{a_{k-1}(b_{k}-b_{k-1})}.}\\[2 mm ] \end{array}\ ] ] to simplify the notation , we denote by , and . 
in view of the interpolation error estimates ( [ eq:4 - 0 ] ), we only need to prove that for , there holds by assuming that the weak solution of the m - s - c system in sense of definition [ def:2.2 ] satisfies { \displaystyle \big(\frac{\partial ^{2 } \mathbf{a}}{\partial t^{2 } } , \ ,\mathbf{v}\big ) + \big(\nabla \times\mathbf{a},\,\nabla \times\mathbf{v}\big ) - \big(\frac{\partial \phi}{\partial t } , \,\nabla \cdot \mathbf{v}\big ) + \big(f(\psi , \psi ) , \ , \mathbf{v}\big ) + \big(|\psi|^{2}\mathbf{a } , \mathbf{v}\big ) = 0 , } \\[2 mm ] { \displaystyle \qquad\qquad \qquad \qquad\qquad \qquad \qquad\qquad \qquad \qquad \qquad \qquad \qquad\forall \mathbf{v}\in \mathbf{h}_{\rm t}^{1}(\omega ) , } \\[2 mm ] { \displaystyle \big(\nabla \phi,\ , \nabla u\big ) = \big(|\psi|^{2},\ , u\big ) , \quad \forall u \in h_{0}^{1}(\omega ) . }\end{array } \right.\ ] ] subtracting the discrete sysytem ( [ eq:2 - 8 ] ) from ( [ eq:4 - 3 ] ) and noting that , we have { \displaystyle \quad+2\big(v(\psi^{k-\frac{1}{2}}-\overline{\psi}_{h}^{k}),\,\varphi\big)+ b\big(\mathbf{a}^{k-\frac{1}{2 } } ; \ , ( \psi^{k-\frac{1}{2}}-\mathcal{i}_{h}\overline{\psi}^{k } ) , \ , \varphi\big)}\\[2 mm ] { \displaystyle \quad + 2\big(\phi^{k-\frac12}\psi^{k-\frac12}-\overline{\phi}^{k}_{h}\overline{\psi}^{k}_{h } , \ , \varphi\big)+\left(b\big(\mathbf{a}^{k-\frac{1}{2}};\,\mathcal{i}_{h}\overline{\psi}^{k},\varphi\big)-b\big(\overline{\mathbf{a}}^{k}_{h } ; \ , \mathcal{i}_{h}\overline{\psi}^{k},\varphi\big)\right ) } \\ { \displaystyle \ ; : = \sum_{i=1}^{5}v_{i}^{k}(\varphi ) , \quad \forall \varphi\in\mathcal{x}_{h}^{r } , } \end{array}\ ] ] { \displaystyle \qquad \quad -\big((\phi_t)^{k-1},\ , \nabla \cdot \mathbf{v}\big ) + \big(|\psi^{k-1}|^{2}\mathbf{a}^{k-1}-|\psi^{k-1}_{h}|^{2}\frac{\overline{\mathbf{a}}_{h}^{k } + \overline{\mathbf{a}}_{h}^{k-1}}{2 } , \;\mathbf{v}\big)}\\[2 mm ] { \displaystyle \qquad \quad+ \left(f(\psi^{k-1},\psi^{k-1})-f(\psi^{k-1}_{h},\psi^{k-1}_{h}),\;\mathbf{v}\right)}\\[2 mm ] { \displaystyle \quad : = \sum_{i=1}^{5}u_{i}^{k}(\mathbf{v } ) , \quad \forall\mathbf{v}\in\mathbf{x}_{0h } , } \end{array}\ ] ] in the following of the section , we analyze the above three error equations term by term , respectively . first , by taking in ( [ eq:4 - 4 ] ), the imaginary part of the equation implies \end{array}\ ] ] now we are going to estimate the terms one by one . bythe error estimates for the interpolation operator and the regularity of in ( [ eq:2 - 9 ] ) , we see that thanks to { \displaystyle \qquad \leq c\|\nabla\psi\|_{\mathbf{l}^2 } \|\nabla\varphi\|_{\mathbf{l}^2},\quad \forall \mathbf{a}\in \mathbf{l}^6(\omega ) , \,\,\,\psi,\varphi\in\mathcal{h}_{0}^{1}(\omega ) , } \end{array}\ ] ] and we get in order to estimate , we first decompose it as follows . 
{ \displaystyle \qquad \quad + \left ( ( \mathcal{i}_{h}\overline{\psi}^{k}-\overline{\psi}^{k}_{h})\phi^{k-\frac12 } , \,\overline{\theta}_{\psi}^{k}\right ) + \left ( \overline{\psi}^{k}_{h}(\phi^{k-\frac12}-i_{h}\phi^{k-\frac12 } ) , \,\overline{\theta}_{\psi}^{k}\right ) } \\[2 mm ] { \displaystyle \qquad \quad + \left ( \overline{\psi}^{k}_{h}i_{h}(\phi^{k-\frac12}-\overline{\phi}^{k } ) , \,\overline{\theta}_{\psi}^{k}\right ) + \left ( \overline{\psi}^{k}_{h } ( i_{h}\overline{\phi}^{k}-\overline{\phi}^{k}_{h } ) , \,\overline{\theta}_{\psi}^{k}\right ) } \end{array}\ ] ] by using theorem [ thm3 - 1 ] , the regularity assumption , and the properties of the interpolation operators , we obtain from ( [ eq:4 - 9 - 0 ] ) that notice that +\big[b({\bm \pi}_{h}\overline{\mathbf{a}}^{k } ; \mathcal{i}_{h}\overline{\psi}^{k},\overline{\theta}_{\psi}^{k})}\\[2 mm ] { \displaystyle \quad -b(\overline{\mathbf{a}}^{k } ; \mathcal{i}_{h}\overline{\psi}^{k},\overline{\theta}_{\psi}^{k})\big]+\big[b(\overline{\mathbf{a}}^{k } ; \mathcal{i}_{h}\overline{\psi}^{k},\overline{\theta}_{\psi}^{k } ) -b(\mathbf{a}^{k-\frac{1}{2 } } ; \mathcal{i}_{h}\overline{\psi}^{k},\overline{\theta}_{\psi}^{k})\big].}\\[2 mm ] \end{array}\ ] ] by applying ( [ eq:3.6 ] ) and theorem [ thm3 - 1 ] , it is easy to see that now multiplying ( [ eq:4 - 5 - 0 ] ) by , summing over , and applying the above estimates , we have { \displaystyle \quad \leq c\big(h^{2r}+\tau^{4}\big ) + c\tau \sum_{k=0}^{m}\left ( d({\theta}^{k}_{\mathbf{a } } , { \theta}^{k}_{\mathbf{a } } ) + \|\nabla\theta_{\psi}^{k}\|^{2}_{\mathbf{l}^2 } + \|\nabla { \theta}_{\phi}^{k}\|^{2}_{\mathbf{l}^2}\right ) . } \end{array}\ ] ] next , we take in ( [ eq:4 - 4 ] ) , which gives from the real part of ( [ eq:4 - 16 ] ) and ( [ eq:3.6 ] ) , we obtain } \\[2 mm ] { \displaystyle \qquad + \big(\frac{1}{2}(\overline{\mathbf{a}}_{h}^{k}+\overline{\mathbf{a}}_{h}^{k-1})|\theta_{\psi}^{k-1}|^{2},\,\frac{1}{2}(\partial_{\tau } \mathbf{a}_{h}^{k}+\partial_{\tau } \mathbf{a}_{h}^{k-1})\big ) } \\[2 mm ] { \displaystyle\qquad + \big(f(\theta_{\psi}^{k-1},\theta_{\psi}^{k-1}),\,\frac{1}{2}(\partial_{\tau } \mathbf{a}_{h}^{k}+\partial_{\tau } \mathbf{a}_{h}^{k-1})\big ) , } \end{array}\ ] ] which yields } \\[2 mm ] { \displaystyle \quad + \tau\sum_{k=1}^{m}\big(\frac{1}{2}(\overline{\mathbf{a}}_{h}^{k}+\overline{\mathbf{a}}_{h}^{k-1})|\theta_{\psi}^{k-1}|^{2},\,\partial_{\tau } \overline{\mathbf{a}}_{h}^{k}\big ) + \tau\sum_{k=1}^{m } \big(f(\theta_{\psi}^{k-1},\theta_{\psi}^{k-1}),\ , \partial_{\tau } \overline{\mathbf{a}}_{h}^{k}\big ) . } \end{array}\ ] ] from theorem [ thm3 - 1 ] , we deduce { \displaystyle \qquad \leq c\sum_{k=1}^{m}\|\theta_{\psi}^{k-1}\|^{2}_{\mathcal{l}^{6 } } \leq c\sum_{k=0}^{m}\|\nabla\theta_{\psi}^{k}\|^{2}_{\mathbf{l}^{2 } } , } \\[2 mm ] { \displaystyle \sum_{k=1}^{m}\big(f(\theta_{\psi}^{k-1},\theta_{\psi}^{k-1}),\,\partial_{\tau } \overline{{\bm \pi}_{h}\mathbf{a}}^{k}\big ) \leq \sum_{k=1}^{m } \vert \nabla \theta_{\psi}^{k-1 } \vert_{\mathbf{l}^{2}}\vert \theta_{\psi}^{k-1 } \vert_{\mathcal{l}^{6 } } \vert\partial_{\tau } \overline{{\bm \pi}_{h}\mathbf{a}}^{k}\vert_{\mathbf{l}^{3 } } } \\[2 mm ] { \displaystyle \qquad \qquad \leq c\sum_{k=0}^{m}\|\nabla\theta_{\psi}^{k}\|^{2}_{\mathbf{l}^{2}}. } \end{array}\ ] ] denoting by we have { \displaystyle \qquad \qquad + \tau\sum_{k=1}^{m}j_1^{k } + \tau \sum_{j=1}^{5 } \sum_{k=1}^{m}\mathrm{re}\big[v_{j}^{k}(\partial_{\tau}{\theta_{\psi}^{k}})\big ] . 
} \\[2 mm ] \end{array}\ ] ] now let us estimate $ ] , term by term . in light of ( [ eq:4 - 2 ] ) , we get { \displaystyle \quad= 2\mathrm{i}\big(\partial_{\tau } \mathcal{i}_{h}\psi^{m}-(\psi_t)^{m-\frac{1}{2}},\;\theta_{\psi}^{m}\big)-2\mathrm{i}\big(\partial_{\tau } \mathcal{i}_{h}\psi^{1 } -(\psi_t)^{\frac{1}{2}},\;\theta_{\psi}^{0}\big)}\\[2 mm ] { \displaystyle \quad-2\mathrm{i}\sum_{k=1}^{m-1}\big(\partial_{\tau } \mathcal{i}_{h}\psi^{k+1 } -\partial_{\tau } \mathcal{i}_{h}\psi^{k}-(\psi_t)^{k+\frac{1}{2 } } + ( \psi_t)^{k-\frac{1}{2}},\;\theta_{\psi}^{k}\big ) . }\end{array}\ ] ] by employing the regularity assumption and the error estimates of interpolation operators , we see that the second term can be rewritten as { \displaystyle = 2\big(v(\psi^{k-\frac{1}{2 } } -\mathcal{i}_{h}\overline{\psi}^{k}),\,\theta_{\psi}^{k}-\theta_{\psi}^{k-1}\big)-2\big(v(\frac{1}{2}(\theta_{\psi}^{k}+\theta_{\psi}^{k-1}),\,\theta_{\psi}^{k}-\theta_{\psi}^{k-1}\big ) .} \\[2 mm ] \end{array}\ ] ] arguing as before , we obtain \big|\leq c\big(h^{2r}+\tau^{4}\big ) + c\|\theta_{\psi}^{m}\|_{\mathcal{l}^2}^{2}+ c\tau \sum_{k=1}^{m-1}{\|\theta_{\psi}^{k}\|_{\mathcal{l}^2}^{2}}. } \end{array}\ ] ] by the definition of the bilinear functional in ( [ eq:2 - 7 ] ) , we can rewrite as follows : { \displaystyle\quad \qquad + \left(|\mathbf{a}^{k-\frac{1}{2}}|^2(\psi^{k-\frac{1}{2}}-\mathcal{i}_{h}\overline{\psi}^{k}),\;\theta_{\psi}^{k}-\theta_{\psi}^{k-1}\right)}\\[2 mm ] { \displaystyle\quad \qquad+ \mathrm{i}\left(\nabla(\psi^{k-\frac{1}{2}}-\mathcal{i}_{h}\overline{\psi}^{k})\mathbf{a}^{k-\frac{1}{2}},\;\theta_{\psi}^{k}-\theta_{\psi}^{k-1}\right)}\\[2 mm ] { \displaystyle \quad \qquad-\mathrm{i}\left((\psi^{k-\frac{1}{2}}-\mathcal{i}_{h}\overline{\psi}^{k})\mathbf{a}^{k-\frac{1}{2}},\;\nabla \theta_{\psi}^{k } -\nabla \theta_{\psi}^{k-1}\right).}\\[2 mm ] \end{array}\ ] ] by employing ( [ eq:4 - 2 ] ) , ( [ subeq:2.00 ] ) , the regularity assumption ( [ eq:2 - 9 ] ) , and the young s inequality , we can prove the following estimate by some standard but tedious arguments which are analogous to the estimate of . due to space limitations, we omit the proof here . to estimate the term , we rewrite it by { \displaystyle \qquad \qquad - 2\left ( \mathcal{i}_{h}\overline{\psi}^{k}\overline{\theta}_{\phi}^{k } , \;\theta_{\psi}^{k}-\theta_{\psi}^{k-1 } \right ) -2 \left(\overline{\theta}_{\phi}^{k}\overline{\theta}_{\psi}^{k } , \;\theta_{\psi}^{k}-\theta_{\psi}^{k-1 } \right ) } .\end{array}\ ] ] arguing as before , we can obtain { \displaystyle \qquad\quad + c \vert\theta_{\psi}^{m}\vert_{\mathcal{l}^2}^{2 } + c\tau \sum_{k=0}^{m}{\vert\theta_{\psi}^{k}\vert_{\mathcal{l}^2}^{2 } } , } \\[2 mm ] { \displaystyle \big|{\rm re}\sum_{k=1}^{m } \big(i_{h}\overline{\phi}^{k}\overline{\theta}_{\psi}^{k } , \;\theta_{\psi}^{k}-\theta_{\psi}^{k-1 } \big ) \big| \leq c(h^{2r } + \tau^{4 } ) + c \vert\theta_{\psi}^{m}\vert_{\mathcal{l}^2}^{2 } + \frac{1}{32}\vert\nabla{\theta}_{\psi}^{m}\vert_{\mathbf{l}^2}^{2 } } \\[2 mm ] { \displaystyle \qquad\quad+ c\tau \sum_{k=0}^{m } \vert\nabla\theta_{\psi}^{k}\vert_{\mathbf{l}^2}^{2}. 
} \end{array}\ ] ] the real part of the last two terms on the right hand side of ( [ eq:4 - 24 ] ) can be decomposed as follows : } \\[2 mm ] { \displaystyle = { \rm re}\big [ \big ( \mathcal{i}_{h}\overline{\psi}^{k } ( \theta_{\psi}^{k}-\theta_{\psi}^{k-1})^{\ast } , \;\overline{\theta}_{\phi}^{k } \big ) \big ] + \frac{1}{2}\big(|\theta_{\psi}^{k}|^{2}-|\theta_{\psi}^{k-1}|^{2 } , \;\overline{\theta}_{\phi}^{k } \big ) } \\[2 mm ] { \displaystyle = { \rm re}\big [ \big ( \mathcal{i}_{h}{\psi}^{k } ( \theta_{\psi}^{k})^{\ast}-\mathcal{i}_{h}{\psi}^{k-1}(\theta_{\psi}^{k-1})^{\ast } , \;\overline{\theta}_{\phi}^{k } \big ) \big ] + \frac{1}{2 } \big(|\theta_{\psi}^{k}|^{2}-|\theta_{\psi}^{k-1}|^{2 } , \;\overline{\theta}_{\phi}^{k } \big ) } \\[2 mm ] { \displaystyle \qquad - { \rm re}\big [ \big ( \,\overline{\theta}_{\phi}^{k } ( \mathcal{i}_{h}{\psi}^{k } -\mathcal{i}_{h}{\psi}^{k-1 } ) , \ ; \overline{\theta}_{\psi}^{k}\big ) \big ] } \\[2 mm ] { \displaystyle = \frac{1}{2}\big ( |\psi_{h}^{k}|^{2 } - |\mathcal{i}_{h}\psi^{k}|^{2},\ ; \overline{\theta}_{\phi}^{k } \big ) - \frac{1}{2}\big ( |\psi_{h}^{k-1}|^{2 } - |\mathcal{i}_{h}\psi^{k-1}|^{2},\ ; \overline{\theta}_{\phi}^{k } \big ) } \\[2 mm ] { \displaystyle \qquad - { \rm re}\big [ \big ( \,\overline{\theta}_{\phi}^{k } ( \mathcal{i}_{h}{\psi}^{k } -\mathcal{i}_{h}{\psi}^{k-1 } ) , \ ; \overline{\theta}_{\psi}^{k}\big ) \big ] . } \\[2 mm ] \end{array}\ ] ] combining ( [ eq:4 - 24])-([eq:4 - 26 ] ) and setting obtain \leq c\big(h^{2r}+\tau^{4}\big ) + c \vert\theta_{\psi}^{m}\vert_{\mathcal{l}^2}^{2 } + \frac{1}{32 } \|\nabla\theta_{\psi}^{m}\|^{2}_{\mathbf{l}^{2 } } } \\[2 mm ] { \displaystyle \qquad\quad+ c\tau \sum_{k=0}^{m}\big(\vert\nabla \theta_{\psi}^{k}\vert_{\mathbf{l}^2}^{2 } + \vert\nabla \theta_{\phi}^{k}\vert_{\mathbf{l}^2}^{2}\big ) - \sum_{k=1}^{m}j_2^{k}. } \end{array}\ ] ] we now focus on the analysis of , which can be rewritten as }\\[2 mm ] { \displaystyle \qquad\quad \qquad\quad + \big[b(\overline{\mathbf{a}}^{k } ; \mathcal{i}_{h}\overline{\psi}^{k},\theta_{\psi}^{k}-\theta_{\psi}^{k-1})-b({\bm \pi}_{h}\overline{\mathbf{a}}^{k } ; \mathcal{i}_{h}\overline{\psi}^{k},\theta_{\psi}^{k}-\theta_{\psi}^{k-1})\big]}\\[2 mm ] { \displaystyle \qquad \quad \qquad\quad + \big[b({\bm \pi}_{h}\overline{\mathbf{a}}^{k } ; \mathcal{i}_{h}\overline{\psi}^{k},\theta_{\psi}^{k}-\theta_{\psi}^{k-1})-b(\overline{\mathbf{a}}^{k}_{h } ; \mathcal{i}_{h}\overline{\psi}^{k},\theta_{\psi}^{k}-\theta_{\psi}^{k-1})\big]}\\[2 mm ] { \displaystyle\qquad \qquad\quad:=v_5^{k,1}+v_5^{k,2}+v_5^{k,3}. } \end{array}\ ] ] by applying ( [ eq:3.6 ] ) and ( [ eq:4 - 2 ] ) and arguing as before , we deduce { \displaystyle \quad+ \frac{1}{32 } \|\nabla\theta_{\psi}^{m}\|_{\mathbf{l}^2}^{2}+c\tau \sum_{k=1}^{m}\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}. }\end{array}\ ] ] in order to estimate , we rewrite it as follows . { \displaystyle\quad\quad\quad\quad-\sum_{k=1}^{m}{\mathrm{i}\left ( \mathcal{i}_{h}\overline{\psi}^{k}({\bm\pi}_{h}\overline{\mathbf{a}}^{k}-\overline{\mathbf{a}}^{k}_{h}),\ ; \nabla\theta_{\psi}^{k}-\nabla\theta_{\psi}^{k-1}\right)}}\\[2 mm ] { \displaystyle \quad\quad\quad\quad+\sum_{k=1}^{m}{\mathrm{i}\left(\nabla \mathcal{i}_{h}\overline{\psi}^{k}({\bm\pi}_{h}\overline{\mathbf{a}}^{k } -\overline{\mathbf{a}}^{k}_{h}),\;\theta_{\psi}^{k}-\theta_{\psi}^{k-1}\right)}}\\[2 mm ] { \displaystyle\quad\quad\quad : = t_1+t_2+t_3 . }\end{array}\ ] ] we decompose the term as follows . 
{ \displaystyle \quad = -\left(\mathcal{i}_{h}\overline{\psi}^{m}({\bm\pi}_{h}\overline{\mathbf{a}}^{m } + \overline{\mathbf{a}}^{m}_{h } ) \overline{\theta}_{\mathbf{a}}^{m},\;\theta_{\psi}^{m}\right ) + \left(\mathcal{i}_{h}\overline{\psi}^{0}({\bm\pi}_{h}\overline{\mathbf{a}}^{0}+\overline{\mathbf{a}}^{0}_{h } ) \overline{\theta}_{\mathbf{a}}^{0},\;\theta_{\psi}^{0}\right)}\\[2 mm ] { \displaystyle \quad + \sum_{k=1}^{m}\left(\mathcal{i}_{h}\overline{\psi}^{k}({\bm\pi}_{h}\overline{\mathbf{a}}^{k } + \overline{\mathbf{a}}^{k}_{h } ) \overline{\theta}_{\mathbf{a}}^{k}-\mathcal{i}_{h}\overline{\psi}^{k-1}({\bm\pi}_{h}\overline{\mathbf{a}}^{k-1 } + \overline{\mathbf{a}}^{k-1}_{h } ) \overline{\theta}_{\mathbf{a}}^{k-1},\;\theta_{\psi}^{k-1}\right ) . }\end{array}\ ] ] by applying the young s inequality and theorem [ thm3 - 1 ] , we can estimate the first two terms on the right side of ( [ eq:4 - 36 ] ) by { \displaystyle \quad\leq \frac{1}{16}d(\overline{\theta}_{\mathbf{a}}^{m},\overline{\theta}_{\mathbf{a}}^{m})+ c\|\theta_{\psi}^{m}\|_{\mathcal{l}^2}^{2}+ ch^{2r}. } \end{array}\ ] ] since { \displaystyle \quad= \tau\left(\mathcal{i}_{h}\overline{\psi}^{k}({\bm\pi}_{h}\overline{\mathbf{a}}^{k } + \overline{\mathbf{a}}^{k}_{h})\frac{\overline{\theta}_{\mathbf{a}}^{k}-\overline{\theta}_{\mathbf{a}}^{k-1}}{\tau},\;\theta_{\psi}^{k-1}\right)}\\[2 mm ] { \displaystyle \quad\quad+ \tau\left(\frac{\mathcal{i}_{h}\overline{\psi}^{k}-\mathcal{i}_{h}\overline{\psi}^{k-1}}{\tau}({\bm\pi}_{h}\overline{\mathbf{a}}^{k}+\overline{\mathbf{a}}^{k}_{h})\overline{\theta}_{\mathbf{a}}^{k-1},\;\theta_{\psi}^{k-1}\right)}\\[2 mm ] { \displaystyle \quad \quad+ \tau \left(\mathcal{i}_{h}\overline{\psi}^{k-1}\overline{\theta}_{\mathbf{a}}^{k-1}\big(\frac{{\bm\pi}_{h}\overline{\mathbf{a}}^{k } -{\bm\pi}_{h}\overline{\mathbf{a}}^{k-1}}{\tau}+\frac{\overline{\mathbf{a}}_{h}^{k}-\overline{\mathbf{a}}_{h}^{k-1}}{\tau}\big),\;\theta_{\psi}^{k-1}\right ) , } \end{array}\ ] ] we deduce { \displaystyle \quad\leq \tau \|\mathcal{i}_{h}\overline{\psi}^{k}\|_{\mathcal{l}^6}\| { \bm\pi}_{h}\overline{\mathbf{a}}^{k}+\overline{\mathbf{a}}^{k}_{h}\|_{\mathbf{l}^6 } \|\partial_{\tau}\overline{\theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}\|\theta_{\psi}^{k-1}\|_{\mathcal{l}^6}}\\[2 mm ] { \displaystyle \quad \quad+ \tau \|\partial_{\tau } \mathcal{i}_{h}\overline{\psi}^{k}\|_{\mathcal{l}^2}\| { \bm\pi}_{h}\overline{\mathbf{a}}^{k}+\overline{\mathbf{a}}^{k}_{h}\|_{\mathbf{l}^6 } \|\overline{\theta}_{\mathbf{a}}^{k-1}\|_{\mathbf{l}^6}\|\theta_{\psi}^{k-1}\|_{\mathcal{l}^6}}\\[2 mm ] { \displaystyle \quad\quad + \tau \|\mathcal{i}_{h}\overline{\psi}^{k-1}\|_{\mathcal{l}^6}\| \overline{\theta}_{\mathbf{a}}^{k-1}\|_{\mathbf{l}^6}\| \partial_{\tau } { \bm\pi}_{h}\overline{\mathbf{a}}^{k } + \partial_{\tau}\overline{\mathbf{a}}_{h}^{k}\|_{\mathbf{l}^2}\|\theta_{\psi}^{k-1}\|_{\mathcal{l}^6}}\\[2 mm ] { \displaystyle \quad\leq c\tau\left(\|\partial_{\tau } \overline{\theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2 } + \|\overline{\theta}_{\mathbf{a}}^{k-1}\|_{\mathbf{h}^1}\right)\|\theta_{\psi}^{k-1}\|_{\mathcal{h}^1}}\\[2 mm ] { \displaystyle \quad\leq c\tau\left(\|\partial_{\tau } \overline{\theta}_{\mathbf{a}}^{k}\|^{2}_{\mathbf{l}^2 } + d(\overline{\theta}_{\mathbf{a}}^{k-1},\overline{\theta}_{\mathbf{a}}^{k-1 } ) + \|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^2}^{2 } \right ) } \end{array}\ ] ] by applying theorem [ thm3 - 1 ] .hence we get the estimate of as follows . 
{ \displaystyle \qquad + c\tau \sum_{k=0}^{m}\left(\|\partial_{\tau } { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2 } + d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k})+ \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right).}\\[2 mm ] \end{array}\ ] ] by virtue of ( [ eq:4 - 2 ] ) and integrating by parts , we discover { \displaystyle \quad=\left(\nabla \mathcal{i}_{h}\overline{\psi}^{m}\overline{\theta}_{\mathbf{a}}^{m},\;\theta_{\psi}^{m}\right ) + \left(\mathcal{i}_{h}\overline{\psi}^{m}\nabla\cdot\overline{\theta}_{\mathbf{a}}^{m},\;\theta_{\psi}^{m}\right ) + \left(\mathcal{i}_{h}\overline{\psi}^{0}\overline{\theta}_{\mathbf{a}}^{0},\;\nabla\theta_{\psi}^{0}\right)}\\[2 mm ] { \displaystyle \quad \quad\quad + \sum_{k=1}^{m}\left(\mathcal{i}_{h}\overline{\psi}^{k}\overline{\theta}_{\mathbf{a}}^{k}- \mathcal{i}_{h}\overline{\psi}^{k-1}\overline{\theta}_{\mathbf{a}}^{k-1},\;\nabla\theta_{\psi}^{k-1}\right ) , } \end{array}\ ] ] by using young s inequality and ( [ eq:4 - 1 ] ) , we can estimate the first three terms on the right side of ( [ eq:4 - 41 ] ) as follows : { \displaystyle \quad\leq \|\nabla \mathcal{i}_{h}\overline{\psi}^{m}\|_{\mathbf{l}^3}\|\overline{\theta}_{\mathbf{a}}^{m}\|_{\mathbf{l}^6}\|\theta_{\psi}^{m}\|_{\mathcal{l}^2 } + \|\mathcal{i}_{h}\overline{\psi}^{m}\|_{\mathcal{l}^{\infty}}\|\nabla\cdot\overline{\theta}_{\mathbf{a}}^{m}\|_{{l}^2 } \|\theta_{\psi}^{m}\|_{\mathcal{l}^2}+c h^{2r } } \\[2 mm ] { \displaystyle \quad\leq c\|\overline{\theta}_{\mathbf{a}}^{m}\|_{\mathbf{h}^1}\|\theta_{\psi}^{m}\|_{\mathcal{l}^2 } + c h^{2r}\leq \frac{1}{16 } d(\overline{\theta}_{\mathbf{a}}^{m},\overline{\theta}_{\mathbf{a}}^{m } ) + c\|\theta_{\psi}^{m}\|^{2}_{\mathcal{l}^2}+c h^{2r}. } \\[2 mm ] \end{array}\ ] ] the last term at the right side of ( [ eq:4 - 41 ] ) satisfies the following estimate . { \displaystyle \leq \tau \sum_{k=1}^{m } { \big ( \|\partial_{\tau } \mathcal{i}_{h}\overline{\psi}^{k}\|_{\mathcal{l}^3}\|\overline{\theta}_{\mathbf{a}}^{k } \|_{\mathbf{l}^6}\|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^2 } } + \|\mathcal{i}_{h}\overline{\psi}^{k-1}\|_{\mathcal{l}^{\infty } } \|\partial_{\tau } \overline{\theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}\|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^2 } \big ) } \\[2 mm ] { \displaystyle \leq c\tau \sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial_{\tau } { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right ) . } \end{array}\ ] ] hence we get { \displaystyle \qquad + c\tau \sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial_{\tau } { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right ) . }\end{array}\ ] ] reasoning as before , we can estimate as follows . 
{ \displaystyle \leq \frac{1}{16 } d(\overline{\theta}_{\mathbf{a}}^{m},\overline{\theta}_{\mathbf{a}}^{m } ) + c\|\theta_{\psi}^{m}\|^{2}_{\mathcal{l}^2}+c h^{2r } } \\[2 mm ] { \displaystyle \qquad + c\tau \sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial_{\tau } { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right ) .} \end{array}\ ] ] combining ( [ eq:4 - 40 ] ) , ( [ eq:4 - 44 ] ) , and ( [ eq:4 - 45 ] ) implies { \displaystyle \quad\quad\quad + c\tau \sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial_{\tau } { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2 } + \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right ) , } \end{array}\ ] ] and thus { \displaystyle \quad \leq c\big(h^{2r}+\tau^{4}\big)+\frac{3}{16 } d(\overline{\theta}_{\mathbf{a}}^{m},\overline{\theta}_{\mathbf{a}}^{m } ) + \frac{1}{32 } \|\nabla\theta_{\psi}^{m}\|_{\mathbf{l}^2}^{2 } + c\|\theta_{\psi}^{m}\|_{\mathcal{l}^2}^{2 } } \\[2 mm ] { \displaystyle \qquad + c\tau \sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k},{\theta}_{\mathbf{a}}^{k } ) + \|\partial_{\tau } { \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}+\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2 } \right).}\\[2 mm ] \end{array}\ ] ] now substituting ( [ eq:4 - 19 ] ) , ( [ eq:4 - 22 ] ) , ( [ eq:4 - 23 - 0 ] ) , ( [ eq:4 - 27 ] ) , and ( [ eq:4 - 47 ] ) into ( [ eq:4 - 16 - 4 ] ) , we have { \displaystyle + c\|\theta_{\psi}^{m}\|_{\mathcal{l}^2}^{2 } + \tau\sum_{k=1}^{m}j_1^{k } + c \tau\sum_{k=0}^{m}\left ( \|\nabla \theta_{\psi}^{k}\|^{2}_{\mathbf{l}^{2 } } + \|\nabla \theta_{\phi}^{k}\|^{2}_{\mathbf{l}^{2}}+ d(\theta_{\mathbf{a}}^{k},\theta_{\mathbf{a}}^{k } ) + \|\partial_{\tau } \theta_{\mathbf{a}}^{k } \|^{2}_{\mathbf{l}^{2}}\right ) . } \end{array}\ ] ] arguing as in the proof of theorem [ thm6 - 0 ] , we have consequently , by inserting ( [ eq:4 - 49 ] ) into ( [ eq:4 - 48 ] ) , we find { \displaystyle + \tau\sum_{k=1}^{m}j_1^{k } + c \tau\sum_{k=0}^{m}\left ( \|\nabla \theta_{\psi}^{k}\|^{2}_{\mathbf{l}^{2 } } + \|\nabla \theta_{\phi}^{k}\|^{2}_{\mathbf{l}^{2}}+ d(\theta_{\mathbf{a}}^{k},\theta_{\mathbf{a}}^{k } ) + \|\partial_{\tau } \theta_{\mathbf{a}}^{k } \|^{2}_{\mathbf{l}^{2}}\right ) . }\end{array}\ ] ] by substituting ( [ eq:4 - 15 ] ) into ( [ eq:4 - 50 ] ) , we end up with { \displaystyle \qquad + c \tau\sum_{k=0}^{m}\left ( \|\nabla \theta_{\psi}^{k}\|^{2}_{\mathbf{l}^{2 } } + \|\nabla \theta_{\phi}^{k}\|^{2}_{\mathbf{l}^{2}}+ d(\theta_{\mathbf{a}}^{k},\theta_{\mathbf{a}}^{k } ) + \|\partial_{\tau } \theta_{\mathbf{a}}^{k } \|^{2}_{\mathbf{l}^{2}}\right ) . 
}\end{array}\ ] ] taking in ( [ eq:4 - 5 ] ) , we see that { \displaystyle \quad = \frac{1}{2\tau}\left(\|\partial_{\tau } \theta_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2 } -\|\partial_{\tau } \theta_{\mathbf{a}}^{k-1}\|_{\mathbf{l}^2}^{2}\right)+\frac{1}{4\tau}\left(d({\theta}_{\mathbf{a}}^{k } , { \theta}_{\mathbf{a}}^{k } ) -(d({\theta}_{\mathbf{a}}^{k-2 } , { \theta}_{\mathbf{a}}^{k-2})\right)}\\[2 mm ] { \displaystyle \quad = \sum_{i = 1}^{5}u_i^{k}(\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k } ) , } \end{array}\ ] ] which leads to { \displaystyle \quad= \frac{1}{2}\|\partial_{\tau } \theta_{\mathbf{a}}^{0}\|_{\mathbf{l}^2}^{2 } + \frac{1}{4}d({\theta}_{\mathbf{a}}^{0 } , { \theta}_{\mathbf{a}}^{0 } ) + \frac{1}{4}d({\theta}_{\mathbf{a}}^{-1 } , { \theta}_{\mathbf{a}}^{-1 } ) + { \tau}\sum_{k=1}^{m } \sum_{i = 1}^{5}u_i^{k}(\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k } ) } \\[2 mm ] { \displaystyle \quad \leq ch^{2r } + { \tau}\sum_{k=1}^{m } \sum_{i = 1}^{5}u_i^{k}(\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k } ) . }\\[2 mm ] \end{array}\ ] ] now we estimate , . under the regularity assumption of in ( [ eq:2 - 9 ] ), we have applying ( [ eq:4 - 2 ] ) , the regularity assumption , and the young s inequality , we can bound as follows . { \displaystyle \quad\quad+ \frac{1}{32 } d\left(\theta_{\mathbf{a}}^{m},\theta_{\mathbf{a}}^{m}\right ) + \frac{1}{32 } d\left(\theta_{\mathbf{a}}^{m-1},\theta_{\mathbf{a}}^{m-1}\right ) . } \end{array}\ ] ] since , we have from which we deduce { \displaystyle \qquad\qquad - \big(\frac{1}{2\tau}(\phi^{k } - \phi^{k-2 } ) - \frac{1}{2\tau}(i_{h}\phi^{k } - i_{h}\phi^{k-2 } ) , \ ; \nabla \cdot \overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k } \big ) . }\end{array}\ ] ] by applying ( [ eq:4 - 2 ] ) and reasoning as before , we have { \displaystyle \quad\quad+ \frac{1}{32 } d\left(\theta_{\mathbf{a}}^{m},\theta_{\mathbf{a}}^{m}\right ) + \frac{1}{32 } d\left(\theta_{\mathbf{a}}^{m-1},\theta_{\mathbf{a}}^{m-1}\right ) . }\end{array}\ ] ] by applying theorem [ thm3 - 1 ] and the regularity assumption , the terms can be estimated by a standard argument . to estimate , we first rewrite as follows . { \displaystyle \quad + \left(f(\mathcal{i}_{h}\psi^{k-1 } , \mathcal{i}_{h}\psi^{k-1})-f(\psi^{k-1}_{h},\psi^{k-1}_{h}),\;\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k}\right ) } \\[2 mm ] { \displaystyle \quad : = u_5^{k,1}(\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k})+u_5^{k,2}(\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k } ) . } \end{array}\ ] ] a simple calculation shows that { \displaystyle = -\frac{\mathrm{i}}{2}\left(\varphi^{\ast}\nabla(\varphi-\psi)-\varphi\nabla(\varphi-\psi)^{\ast}\right ) + \frac{\mathrm{i}}{2}\left((\varphi-\psi)\nabla\psi^{\ast}-(\varphi-\psi)^{\ast}\nabla\psi\right ) , } \end{array}\ ] ] and consequently , we have { \displaystyle\quad \leq c\left(h^{2r}+\|\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}\right ) } \end{array}\ ] ] by applying ( [ eq:4 - 0 ] ) and ( [ eq:4 - 1 ] ) . 
similarly , by employing ( [ eq:4 - 1 ] ), we deduce { \displaystyle \quad = -\frac{\mathrm{i}}{2}\left((\theta_{\psi}^{k-1})^{\ast}\nabla\theta_{\psi}^{k-1 } -\theta_{\psi}^{k-1}\nabla(\theta_{\psi}^{k-1})^{\ast},\;\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k}\right ) } \\[2 mm ] { \displaystyle \quad\quad-\frac{\mathrm{i}}{2}\left((\mathcal{i}_{h}\psi^{k-1})^{\ast}\nabla\theta_{\psi}^{k-1 } -\mathcal{i}_{h}\psi^{k-1}\nabla(\theta_{\psi}^{k-1})^{\ast},\;\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k}\right ) } \\[2 mm ] { \displaystyle\quad \quad + \frac{\mathrm{i}}{2}\left(\theta_{\psi}^{k-1}\nabla ( \mathcal{i}_{h}\psi^{k-1})^{\ast}-(\theta_{\psi}^{k-1})^{\ast}\nabla \mathcal{i}_{h}\psi^{k-1},\;\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k}\right)}\\[2 mm ] { \displaystyle \quad \leq -\left(f(\theta_{\psi}^{k-1},\theta_{\psi}^{k-1}),\;\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k}\right ) + c \| \mathcal{i}_{h}\psi^{k-1}\|_{\mathcal{l}^{\infty}}\|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^2}\|\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}}\\[2 mm ] { \displaystyle \quad\quad+c\|\nabla \mathcal{i}_{h}\psi^{k-1}\|_{\mathbf{l}^{3}}\|\theta_{\psi}^{k-1}\|_{\mathcal{l}^6 } \|\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}}\\[2 mm ] { \displaystyle \quad \leq -\left(f(\theta_{\psi}^{k-1},\theta_{\psi}^{k-1}),\;\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k}\right ) + c\left(\|\nabla\theta_{\psi}^{k-1}\|_{\mathbf{l}^2}^{2 } + \|\overline{\partial_{\tau } \theta}_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}\right ) . }\end{array}\ ] ] therefore , we have { \displaystyle \qquad + c\tau\sum_{k=0}^{m}\left(\|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2 } + \|\partial_{\tau } \theta_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2}\right ) . }\end{array}\ ] ] substituting ( [ eq:4 - 63 ] ) , ( [ eq:4 - 63 - 4 ] ) , ( [ eq:4 - 63 - 5 ] ) , ( [ eq:4 - 65 ] ) , and ( [ eq:4 - 73 ] ) into ( [ eq:4 - 62 - 1 ] ) and recalling the definition of in ( [ eq:4 - 16 - 3 ] ) , we obtain { \displaystyle \quad \leq c\big(h^{2r}+\tau^{4 } \big ) + c\tau\sum_{k=0}^{m}\left(d({\theta}_{\mathbf{a}}^{k } , { \theta}_{\mathbf{a}}^{k } ) + \|\partial_{\tau } \theta_{\mathbf{a}}^{k}\|_{\mathbf{l}^2}^{2 } + \|\nabla\theta_{\psi}^{k}\|_{\mathbf{l}^2}^{2}\right ) -\tau\sum_{k=1}^{m}j_1^{k}.}\\[2 mm ] \end{array}\ ] ] we can deduce the estimate of by a standard argument. from ( [ eq:4 - 6 ] ) we know that by taking in ( [ eq:4 - 75 ] ) and recalling the definition of in ( [ eq:4 - 26 - 0 ] ) , we find { \displaystyle\qquad \qquad + \frac{1}{\tau}\big(\vert\mathcal{i}_{h}\psi^{k}\vert^{2}-\vert\psi^{k}\vert^{2 } - \vert\mathcal{i}_{h}\psi^{k-1}\vert^{2}+\vert\psi^{k-1}\vert^{2},\ , \overline{\theta}_{\phi}^{k } \big ) , } \end{array}\ ] ] which implies that { \displaystyle \qquad \quad + \sum_{k=1}^{m } \big(\vert\mathcal{i}_{h}\psi^{k}\vert^{2}-\vert\psi^{k}\vert^{2 } - \vert\mathcal{i}_{h}\psi^{k-1}\vert^{2}+\vert\psi^{k-1}\vert^{2},\ , \overline{\theta}_{\phi}^{k } \big ) .} \end{array}\ ] ] employing the error estimates of interpolation operators and the regularity assumption , we obtain { \displaystyle \sum_{k=1}^{m } \big(\vert\mathcal{i}_{h}\psi^{k}\vert^{2}-\vert\psi^{k}\vert^{2 } - \vert\mathcal{i}_{h}\psi^{k-1}\vert^{2}+\vert\psi^{k-1}\vert^{2},\ , \overline{\theta}_{\phi}^{k } \big ) \leq ch^{2r } + c\tau\sum_{k=0}^{m } \vert \nabla { \theta}_{\phi}^{k } \vert_{\mathbf{l}^{2}}^{2}. 
} \end{array}\ ] ] it follows that by combining ( [ eq:4 - 51 ] ) , ( [ eq:4 - 74 ] ) , and ( [ eq:4 - 76 ] ) , we finally obtain { \displaystyle \quad \leq c\big(h^{2r } + \tau^{4}\big ) + c \tau\sum_{k=0}^{m}\left ( \|\nabla \theta_{\psi}^{k}\|^{2}_{\mathbf{l}^{2 } } + \|\nabla \theta_{\phi}^{k}\|^{2}_{\mathbf{l}^{2}}+ d(\theta_{\mathbf{a}}^{k},\theta_{\mathbf{a}}^{k } ) + \|\partial_{\tau } \theta_{\mathbf{a}}^{k } \|^{2}_{\mathbf{l}^{2}}\right ) , } \end{array}\ ] ] which yields the desired estimate ( [ eq:4 - 2 - 0 ] ) by using the discrete gronwall s inequality .in this section , we present two numerical examples to confirm our theoretical analysis .[ exam7 - 1 ] to verify the conservation of total charge and energy of our scheme , we consider the m - s - c system ( [ eq:1.8])-([eq:1.9 ] ) with the initial datas : { \mathbf{a}(\mathbf{x},0)=\mathbf{a}_0(\mathbf{x})=\big(5\sin(2\pi x_3)(1 - \cos(2\pi x_1))\sin(\pi x_2 ) , \;\;0 , } \\[2 mm ] { \displaystyle\qquad \qquad \quad 5\sin(2\pi x_1)(1 - \cos(2\pi x_3))\sin(\pi x_2)\big ) , } \\[2 mm ] { \displaystyle \mathbf{a}_{t}(\mathbf{x},0)=\mathbf{a}_1(\mathbf{x})=0 . }\end{array}\ ] ] the domain is partitioned into uniform tetrahedrals with nodes in each direction and elements in total .we solve the m - s - c system by the scheme ( [ eq:2 - 6 - 0 ] ) using mixed finite element method with .the evolutions of the total charge and energy of the discrete system are displayed in fig.1 , which clearly show that our algorithm almost exactly keeps the conservation of the total charge and energy of the system .: ( a ) the evolution of the total charge ; ( b ) the evolution of the total energy of the discrete system . , title="fig:",width=188,height=188 ] : ( a ) the evolution of the total charge ; ( b ) the evolution of the total energy of the discrete system . , title="fig:",width=188,height=188 ] [ exam7 - 2 ] we consider the following m - s - c system : { \displaystyle \frac{\partial ^{2}\mathbf{a}}{\partial t^{2}}+\nabla\times ( \nabla\times \mathbf{a } ) + \frac{\partial ( \nabla \phi)}{\partial t}+\frac{\mathrm{i}}{2}\big(\psi^{*}\nabla{\psi}-\psi\nabla{\psi}^{*}\big ) } \\[2 mm ] { \displaystyle \qquad\qquad+\vert\psi\vert^{2}\mathbf{a}=\mathbf{f}(\mathbf{x},t ) , \,\,\quad ( \mathbf{x},t)\in \omega\times(0,t),}\\[2 mm ] { \displaystyle \nabla \cdot \mathbf{a } = 0 , \quad -\delta \phi - \vert\psi\vert^{2 } = h(\mathbf{x},t),\,\ , ( \mathbf{x},t)\in \omega\times(0,t ) } \\[2 mm ] { \displaystyle \psi(\mathbf{x},t)=0,\quad \mathbf{a}(\mathbf{x},t)\times\mathbf{n}=0 , \quad \phi(\mathbf{x},t ) = 0 , \,\ , ( \mathbf{x},t)\in \partial \omega\times(0,t ) . }\end{array } \right.\ ] ] with the exact solution { \displaystyle \mathbf{a}(\mathbf{x},t)=\sin(\pi t)\big(\cos(\pi x_1)\sin(\pi x_2)\sin(\pi x_3),\ , \sin(\pi x_1)\cos(\pi x_2)\sin(\pi x_3),}\\[2 mm ] { \displaystyle \quad -2\sin(\pi x_1)\sin(\pi x_2)\cos(\pi x_3)\big ) + \cos(\pi t)\big(\sin(2\pi x_3)(1 - \cos(2\pi x_1))\sin(\pi x_2),}\\[2 mm ] { \displaystyle \qquad 0 , \,\,\sin(2\pi x_1)(1 - \cos(2\pi x_3))\sin(\pi x_2)\big),}\\[2 mm ] { \displaystyle \phi(\mathbf{x } , t ) = 4\sin(\pi t ) x_1 x_2 x_3(1-x_1)(1-x_2)(1-x_3 ) } \\[2 mm ] { \displaystyle \qquad \qquad + \;\;cos(\pi t)\sin(\pi x_1)\sin(\pi x_2)\sin(\pi x_3 ) . } \end{array}\ ] ] where and the right - hand side functions , and are determined by the exact solution . in this example, we set and . we take a uniform tetrahedral partition with nodes in each direction and elements in total as in the example [ exam7 - 1 ] . 
the m - s - c system ( [ eq:7 - 1 ] ) are solved by the proposed scheme ( [ eq:2 - 6 - 0 ] ) with , which are denoted by linear element method and quadratic element method , respectively .to test the convergence order of our scheme , we pick for the linear element method and for the quadratic element method respectively .we present numerical results for the linear element method and the quadratic element method at the final time in tables [ table7 - 1 ] and [ table7 - 2 ] , respectively .we can clearly see that the convergence rate of the quadratic element method agrees with our theoretical analysis while the linear element method has better convergence order than our theoretical analysis , which is partially because we use a quadratic element approximation of the vector potential .gao , h.d . , li , b.y . ,sun , w.w . :optimal error estimates of linearized crank nicolson galerkin fems for the time - dependent ginzburg landau equations in superconductivity .siam j. numer .* 52 * , 11831202 ( 2014 ) ohnuki , s. , et al . : coupled analysis of maxwell schrdinger equations by using the length gauge : harmonic model of a nanoplate subjected to a 2d electromagnetic field .j. of numer . model . : electronic networks , devices and fields * 26 * , 533544 ( 2013 ) pierantoni , l. , mencarelli , d. , rozzi , t. : a new 3-d transmission line matrix scheme for the combined schrdinger maxwell problem in the electronic / electromagnetic characterization of nanodevices .ieee transactions on microwave theory and techniques * 56 * , 654662 ( 2008 ) sui , w. , yang , j. , yun , x.h . , wang , c. : including quantum effects in electromagnetic system for fdtd solution to maxwell schrdinger equations .microwave symposium , ieee / mtt - s international , 19791982 ( 2007 ) turati , p. , hao , y. : a fdtd solution to the maxwell schrdinger coupled model at the microwave range . electromagnetics in advanced applications ( iceaa ) ,2012 international conference on ieee , 363366 ( 2012 )
|
in this paper , we consider the initial - boundary value problem for the time - dependent maxwell schrödinger equations in the coulomb gauge . we first prove the global existence of weak solutions to the equations . next we propose an energy - conserving fully discrete finite element scheme for the system and prove the existence and uniqueness of solutions to the discrete system . the optimal error estimates for the numerical scheme without any time - step restrictions are then derived . numerical results are provided to support our theoretical analysis .
|
studies on the emergence of collective and synchronized dynamics in large ensembles of coupled units have been carried out since the beginning of the nineties in different contexts and in a variety of fields , ranging from biology , ecology , and semiconductor lasers , to electronic circuits .collective synchronized dynamics has multiple applications in technology , and is a common framework to investigate the crucial features in the emergence of critical phenomena in natural systems . for instance , it is a relevant issue to fully understand some diseases that appear as the result of a sudden and undesirable synchronization of a large number of neuronal units .recently , synchronization phenomena have also been proved to be helpful outside the traditional fields where it applies , for instance , in sociology where it can be used to study the mechanisms leading to the formation of social collective behaviors . among the many models that have been proposed to address synchronization phenomena ,one of the most successful attempts to understand them is due to kuramoto , who capitalized on previous works by winfree , and proposed a model system of nearly identical weakly coupled limit - cycle oscillators .the kuramoto ( km ) mean field case corresponding to a uniform , all - to - all and sinusoidal coupling is described by the equations of motion , where the factor is incorporated in order to ensure a good behavior of the model in the thermodynamic limit , , stands for the natural frequencies of the oscillators , and is the coupling constant .moreover , the coherence of the population of oscillators is measured by the complex order parameter , where the modulus measures the phase coherence of the population and is the average phase . inwhat follows , we will focus on the synchronization of coupled oscillators described by the dynamics eq .( [ eq : kuramodel ] ) , because of its validity as an approximation for a large number of nonlinear equations and its ubiquity in the nonlinear literature . the km approach to synchronization was a great breakthrough for the understanding of the emergence of synchronization in large populations of oscillators , in particular it presents a second - order phase transition from incoherence to synchronization , in the order parameter eq.([eq : kuraorderparam ] ) for a critical value of the coupling constant .however , a large amount of real systems do not show a homogeneous pattern of interconnections among their parts where the original km assumptions apply .many real natural , social and technological systems conform as networks of nodes with connectivity patterns that diverge considerably from homogeneity , and are usually characterized by a scale - free degree distribution , ( the degree is the number of connections of a node ) .the study of processes taking place on top of scale - free networks has led to reconsider classical results obtained for regular lattices or random graphs due to the radical changes of the system s dynamics when the heterogeneity of the connectivity patterns can not be neglected . in this case one has to deal with two sources of complexity , the nonlinear character of the dynamics and the complex structures of the substrate , which are usually entangled . 
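as a concrete illustration of the definitions above , the mean - field dynamics of eq . ( [ eq : kuramodel ] ) and the order parameter of eq . ( [ eq : kuraorderparam ] ) can be integrated with a few lines of code . the sketch below uses a plain euler step and illustrative values for the population size , the coupling constant and the frequency distribution ; none of these numbers are taken from the text .

```python
import numpy as np

# minimal sketch of the mean-field kuramoto model and its order parameter r.
# n, k, dt, steps and the gaussian frequencies are illustrative choices.
rng = np.random.default_rng(1)
n, k, dt, steps = 1000, 2.0, 0.01, 5000

omega = rng.standard_normal(n)             # natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, n)   # random initial phases

def order_parameter(th):
    z = np.mean(np.exp(1j * th))           # z = r * exp(i * psi)
    return np.abs(z), np.angle(z)

for _ in range(steps):
    r, psi = order_parameter(theta)
    # mean-field identity: (1/n) * sum_j sin(theta_j - theta_i) = r * sin(psi - theta_i)
    theta += dt * (omega + k * r * np.sin(psi - theta))

r, _ = order_parameter(theta)
print(f"coupling k = {k}: stationary order parameter r = {r:.2f}")
```

sweeping the coupling and recording the stationary value of r traces out the second - order transition mentioned above : below the critical coupling r stays at its finite - size noise floor , and beyond it r grows continuously towards one .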
a contemporary effort to attack this entangled problem was due to watts and strogatz , who in 1998 , trying to understand the synchronization of cricket chirps , which show a high degree of coordination over long distances as though the insects were invisibly connected , ended up with a seminal paper about the small - world connectivity property . this work was the seed of the modern theory of complex networks . nevertheless , the understanding of the synchronization dynamics in complex networks still remains a challenge . in recent years , scientists have addressed the problem of synchronization on complex networks capitalizing on the master stability function ( msf ) formalism , which allows one to study the stability of the _ fully synchronized state _ . the msf is the result of a linear stability analysis for a completely synchronized system . while the msf approach is useful to get a first insight into what is going on in the system as far as the stability of the synchronized state is concerned , it tells nothing about how synchronization is attained and whether or not the system under study exhibits a transition similar to that of the original km . to this end , one must rely on numerical calculations and explore the _ entire phase diagram _ . surprisingly , there are only a few works that have dealt with the study of the whole synchronization dynamics in specific scenarios as compared with those where the msf is used , given that the onset of synchronization is richer in its behavioral repertoire than the state of complete synchronization . in a previous work , we have shown how , for fixed coupling strengths , local patterns of synchronization emerge differently in homogeneous and heterogeneous complex networks , driving the process towards a certain degree of global synchronization following different paths . in this paper , we extend the previous work to different topologies , even those with modular structure , and report more results supporting the previous claim . first , we extend the analysis carried out in our previous work to networks in which the degree of heterogeneity can be tuned between the two limits of random scale - free networks and random graphs with a poisson degree distribution . second , in order to get further insight about the role of the structural properties on the route towards complete synchronization , we study the same dynamics on top of networks with a non - random structure at the mesoscopic level , i.e. , networks with communities . the results support the usefulness of the tools developed and highlight the relevance of synchronization phenomena for studying in detail the relationship between structure and function in complex networks . let us now focus on the paradigmatic kuramoto model . in order to deal with the km on top of complex topologies , we reformulate eq . ( [ eq : kuramodel ] ) to the form \dot{\theta}_{i}=\omega_{i}+\sum_{j}\sigma_{ij}a_{ij}\sin(\theta_{j}-\theta_{i}) , where \sigma_{ij} is the coupling strength between pairs of connected oscillators and a_{ij} is the connectivity matrix ( a_{ij}=1 if i is linked to j and a_{ij}=0 otherwise ) . the original kuramoto model introduced above assumed mean - field interactions , so that a_{ij}=1 for every pair of oscillators ( all - to - all ) and the coupling strength is uniform . the first problem when dealing with the km in complex networks is the definition of the dynamics .
in the seminal paper by kuramoto , eq .( [ eq : kuramodel ] ) , the coupling term in the right hand side of eq .( [ ks ] ) is an intensive magnitude .the dependence on the number of oscillators is avoided by choosing .this prescription turns out to be essential for the analysis of the system in the thermodynamic limit .however , choosing the dynamics of the km in a complex network becomes dependent on .therefore , in the thermodynamic limit , the coupling term tends to zero except for those nodes with a degree that scales with .a second prescription consists of taking ( where is the degree of node ) so that is a weighted interaction factor that also makes intensive the right hand side of eq .( [ ks ] ) .this form has been used to solve the so - called _ paradox of heterogeneity _ that states that the heterogeneity in the degree distribution , which often reduces the average distance between nodes , may suppress synchronization in networks of oscillators coupled symmetrically with uniform coupling strength .one should consider this result carefully because it refers to the stability of the _ fully synchronized state _( see below ) not to the _ whole evolution _ of synchronization in the network .more important , the inclusion of weights in the interaction strongly affects the original km dynamics in complex networks because it imposes a dynamic homogeneity that could mask the real topological heterogeneity of the network .finally , the prescription , which may seem more appropriate , also presents some conceptual problems because the sum in the right hand side of eq .( [ ks ] ) could eventually diverge in the thermodynamic limit if synchronization is achieved .to our understanding , the most accurate interpretation of the km dynamics in complex networks should preserve the essential fact of treating the heterogeneity of the network independently of the interaction dynamics , and at the same time , should remain calculable in the thermodynamic limit . taking into account these factors ,the interaction in complex networks should be inversely proportional to the largest degree of the system keeping in this way the original formulation of the km valid in the thermodynamic limit ( in sf networks ) .in addition , the same order parameter , eq .( [ eq : kuraorderparam ] ) , can be used to describe the coherence of the synchronized state .since is constant for a given network , the physical meaning of this prescription is a re - scaling of the time units involved in the dynamics .note , however , that for a proper comparison of the synchronizability of different complex networks , the global and local measures of coherence should be represented according to their respective time scales .therefore , given two complex networks a and b with and respectively , the comparison between observable must be done for the same effective coupling . with this formulation in mind( [ ks ] ) reduces to independently of the specific topology of the network .this allow us to study the dynamics of eq .( [ eq : kscn ] ) over different topologies in order to compare the results and properly inspect the interplay between topology and dynamics in what concerns to synchronization .recent results have shed light on the influence of the local interactions topology on the route to synchronization . however , in these studies at least two parameters ( clustering and average path length ) vary along the studied family of networks . 
this paired evolution ,although yielding an interesting interplay between the two topological parameters , makes it difficult to distinguish what effects were due to one or other factors . here , we would like to address first what is the influence of heterogeneity , keeping the number of degrees of freedom to a minimum for the comparison to be meaningful .the family of networks used in the present section are comparable in their clustering , average distance and correlations so that the only difference relies on the degree distribution , ranging from a poissonian type to a scale - free distribution . later on in this paper, we will relax these constraints and study networks in which the main topological feature is given at the mesoscopic scale , i.e. , networks with community structure .therefore , let us first scrutinize and compare the synchronization patterns in erds - rnyi ( er ) and scale - free ( sf ) networks .for this purpose we make use of the model proposed in that allows a smooth interpolation between these two extremal topologies . besides , we introduce a new parameter to characterize the synchronization paths to unravel their differences .the results reveal that the synchronizability of these networks does depend on the coupling between units , and hence , that general statements about their synchronizability are eventually misleading .moreover , we show that even in the incoherent solution , , the system is self - organizing towards synchronization. we will analyze in detail how this self - organization is attained .the first numerical study about the onset of synchronization of kuramoto oscillators in sf networks revealed the great propense of sf networks to synchronization , which is revealed by a non - zero but very small critical value . besides, it was observed that at the synchronized state , , hubs are extremely robust to perturbations since the recovery time of a node as a function of its degree follows a power law with exponent .however , how do sf networks compare with homogeneous networks and what are the roots of the different behaviors observed ?we first concentrate on global synchronization for the kuramoto model eq .( [ eq : kscn ] ) . for thiswe follow the evolution of the order parameter , eq .( [ eq : kuraorderparam ] ) , as increases , to capture the global coherence of the synchronization in networks. we will perform this analysis on the family of networks generated with the model introduced in .this model generates a one - parameter family of networks labeled by ] accounts for the synchronization strength of the network link .the results point out that link synchronization depends on the organizational level they belong to .those connecting nodes belonging to the same first level community are the fastest ( in terms of the coupling strength ) to reach full synchronization . 
for larger values of full synchronizationis attained progressively for the subsequent organizational levels .then , one can conclude that the inner the link is the faster it gets synchronized in agreement with previous studies reported above .in this paper we have explored several issues about synchronization in complex networks of kuramoto phase oscillators .our main concern has been the study of the synchronization patterns that emerge as the coupling between non - identical oscillators increases .we have described the degree of synchronization between each pair of connected oscillators .the use of a new parameter , , allows to reconstruct the synchronization clusters from the dynamical data .we have studied how the underlying topology ( ranging from homogeneous to heterogeneous structures ) affects the evolution of synchronization patterns .the results reveal that the route towards full synchronization depends strongly on whether one deals with homogeneous or heterogenous topologies .in particular , it has been shown that a giant cluster of synchronization in heterogeneous networks comes out from a unique core formed by highly connected nodes ( hubs ) whereas for homogeneous networks several synchronization clusters of similar size can coexist . in the latter case , a coalescence of these clustersis observed in the synchronization path which is macroscopically manifested by the sudden growth of global coherence . another important effect of the underlying topology is manifested in an anticipated onset of global coherence for heterogeneous networks with respect to more homogeneous topologies .however , the latter reaches the state of full synchronization at lower values of the coupling strength , therefore showing that statements about synchronizability of complex networks are relative to the region of the phase diagram where they operate .additionally , we have shown that these systems are seen to organize towards synchronization even when no macroscopic signs of global coherence is observed .finally , the framework of structured networks has provided a useful benchmark for testing the validity of the new parameter and the information obtained from the computation of matrix .the results obtained by means of these quantities allow to conclude that for modular networks synchronization is first locally attained at the most internal level of organization and , as the coupling is increased , it progressively evolves toward outer shells of the network .the latter process is however achieved at the expense of partially readjusting some pairs of synchronized nodes between the inner and outer community levels . besides, we have obtained evidences that a high cohesion at the first level communities produce a high degree of local synchronization although it delays the appearance of the global coherent state .this study has extended the previous findings about the paths towards synchronization in complex networks , and provides a deeper understanding of phase synchronization phenomena on top of complex topologies . in general, the work supports the idea that in the absence of analytical tools to confront the resolution of non - linear dynamical models in complex networks , the introduction of new parameters to describe the statistical properties of the emergence of local patterns is needed as they give novel and useful information that might guide our comprehension of these phenomena . 
on more general grounds , this work adds to other recent findings about the topology emerging from dynamical processes .the evidences that are being accumulated point to a dynamical organization , both at the local and global scales , that is driven by the underlying topology .whether or not this intriguing regularity has something to do with the ubiquity of complex heterogeneous networks in nature is not clear yet .more works in this direction are needed , but we think that they may ultimately lead to uncover important universal relations between the structure and function of complex natural systems that form networks . another issue to explore in future works concerns the behavior of non - linear dynamical systems on top of directed networks , which will allow deeper insights into the behavior of natural systems .we thank j.a .acebrn , s. boccaletti , a. daz - guilera , c.j .prez - vicente and v. latora for helpful comments .j.g.g . and y.m .are supported by mec through a fpu grant and the ramn y cajal program , respectively .this work has been partially supported by the spanish dgicyt projects fis2006 - 13321-c02 - 02 , fis2006 - 12781-c02 - 01 and fis2005 - 00337 and by the european nest pathfinder project gaba under contract 043309 .note that this is only possible in networks with power - law degree distributions , but with a very small probability as with . in these cases , mean - field solutionsindependent on are recovered , with slight differences in the onset of synchronization of all - to - all and scale - free networks . a direct comparison with the all - to - all kuramoto model is difficult to stablish , since one system is extensive ( sf networks ) and the other depends on ( km ) so that the results does not corresponds to the same effective coupling .note that the main difference between both measures is that one refers to the degree of synchronization of nodes ( ) with respect to the average phase and the other ( ) to the degree of synchronization between every pair of connected nodes .
|
we investigate in depth the synchronization of coupled oscillators on top of complex networks with different degrees of heterogeneity within the context of the kuramoto model . in a previous paper [ phys . rev . lett . 98 , 034101 ( 2007 ) ] , we unveiled how for fixed coupling strengths local patterns of synchronization emerge differently in homogeneous and heterogeneous complex networks . here , we provide more evidence on this phenomenon extending the previous work to networks that interpolate between homogeneous and heterogeneous topologies . we also present new details on the path towards synchronization for the evolution of clustering in the synchronized patterns . finally , we investigate the synchronization of networks with modular structure and conclude that , in these cases , local synchronization is first attained at the most internal level of organization of modules , progressively evolving to the outer levels as the coupling constant is increased . the present work introduces new parameters that are proved to be useful for the characterization of synchronization phenomena in complex networks .
|
deep convolutional neural nets ( cnns ) , pioneered by lecun and collaborators , now produce state - of - the - art performance on many visual recognition tasks .an attractive property is that appear to serve as universal feature extractors , either as `` off - the - shelf '' features or through a small amount of `` fine tuning '' .cnns trained on particular tasks such as large - scale image classification _ transfer _ extraordinarily well to other tasks such as object detection , scene recognition , image retrieval , etc .* hierarchical chain models : * cnns are hierarchical feed - forward architectures that compute progressively invariant representations of the input image . however , the appropriate level of invariance might be task - dependent .distinguishing people and dogs requires a representation that is robust to large spatial deformations , since people and dogs can articulate .however , fine - grained categorization of car models ( or bird species ) requires fine - scale features that capture subtle shape cues .we argue that a universal architecture capable of both tasks must employ some form of multi - scale features for output prediction .* multi - scale representations : * multi - scale representations are a classic concept in computer vision , dating back to image pyramids , scale - space theory , and multiresolution models .though somewhat fundamental notions , they have not been tightly integrated with contemporary feed - forward approaches for recognition .we introduce multi - scale cnn architectures that use features at multiple scales for output prediction ( fig .[ fig : splash ] ) . from one perspective ,our architectures are quite simple .typical approaches train a output predictor ( e.g. , a linear svm ) using features extracted from a single output layer .instead , one can train an output predictor using features extracted from _ multiple _ layers .note that these features come `` for free '' ; they are already computed in a standard feed - forward pass . *spatial pooling : * one difficulty with multi - scale approaches is feature dimensionality - the total number of features across all layers can easily reach hundreds of thousands .this makes training even linear models difficult and prone to over - fitting .instead , we use marginal activations computed from sum ( or max ) pooling across spatial locations in a given activation layer . from this perspective ,our models are similar to those that compute multi - scale features with spatial pooling , including multi - scale templates , orderless models , spatial pyramids , and bag - of - words .our approach is most related to , who also use spatially pooled cnn features for scene classification .they do so by pooling together multiple cnn descriptors ( re)computed on various - sized patches within an image .instead , we perform a single cnn encoding of the entire image , extracting multiscale features `` for free '' .* end - to - end training : * our multi - scale model differs from such past work in another notable aspect . 
our entire model is still a feed - forward cnn that is no longer chain - structured , but a directed - acyclic graph ( dag ) .dag - structured cnns can still be discriminatively trained in an end - to - end fashion , allowing us to directly learn multi - scale representations .dag structures are relatively straightforward to implement given the flexibility of many deep learning toolboxes .our primary contribution is the demonstration that structures can capture multiscale features , which in turn allow for transfer learning between coarse and fine - grained classification tasks .* dag neural networks : * dag - structured neural nets were explored earlier in the context of recurrent neural nets .recurrent neural nets use feedback to capture dynamic states , and so typically can not be processed with feed - forward computations .more recently , networks have explored the use of `` skip '' connections between layers , similar to our multi - scale connections . show that such connections are useful for a single binary classification task , but we motivate multiscale connections through multitask learning : different visual classification tasks require features at different image scales .finally , the recent state - of - the - art model of make use of skip connections for training , but does not use them at test - time .this means that their final feedforward predictor is not a dag .our results suggest that adding multiscale connections at testtime might further improve their performance .* overview : * we motivate our multi - scale dag - cnn model in sec . [ sec : motivation ] , describe the full architecture in sec .[ sec : approach ] , and conclude with numerous benchmark results in sec .[ sec : exp ] .we evaluate multi - scale dag - structured variants of existing cnn architectures ( , caffe , deep19 ) on a variety of scene recognition benchmarks including sun397 , mit67 , scene15 .we observe a consistent improvement regardless of the underlying cnn architecture , producing state - of - the - art results on all 3 datasets .in this section , we motivate our multi - scale architecture with a series of empirical analysis . we carry out an analysis on existing cnn architectures , namely caffe and deep19 .caffe is a broadly used cnn toolbox .it includes a pre - trained model `` alexnet '' model , learned with millions of images from the imagenet dataset .this model has 6 conv .layers and 2 fully - connected ( fc ) layers .deep19 uses very small receptive fields , but an increased number of layers 19 layers ( 16 conv . and 3 fc layers ) .this model produced state - of - the - art performance in ilsvrc-2014 classification challenge .we evaluate both `` off - the - shelf '' pre - trained models on the heavily benchmarked mit indoor scene ( mit67 ) dataset , using 10-fold cross - validation .* image retrieval : * recent work has explored sparse reconstruction techniques for visualizing and analyzing features . inspired by such techniques , we use image retrieval to begin our exploration .we attempt to `` reconstruct '' a query image by finding closest images in terms of l2-distance , when computed with mean - pooled layer - specific activations .results are shown for two query images and two caffe layers in fig .[ fig : moti ] .the florist query image tends to produces better matches when using mid - level features that appear to capture _ objects _ and _ parts_. 
on the other hand , the church - inside query image tends to produce better matches when using high - level features that appear to capture more global _ scene _ statistics . * single - scale classification : * following past work , we train a linear svm classifier using features extracted from a particular layer . we specifically train 1-vs - all linear classifiers . we plot the performance of single - layer classifiers in fig . [ fig : layer_mit67 ] . the detailed parameter options for both models are described later in sec . [ sec : exp ] . as past work has pointed out , we see a general increase in performance as we use higher - level ( more invariant ) features . we do see a slight improvement at each nonlinear activation ( relu ) layer . this makes sense as this layer introduces a nonlinear rectification operation , while other layers ( such as convolutional or sum - pooling ) are linear operations that can be learned by a linear predictor . * scale - varying classification : * the above experiment required training 1-vs - all classifiers , where is the number of classes and is the number of layers . we can treat each of the classifiers as binary predictors , and score each with the number of correct detections of its target class . we plot these scores as a matrix in fig . [ fig : level_perf ] . we tend to see groups of classes that operate best with features computed from particular high - level or mid - level layers . most categories tend to do well with high - level features , but a significant fraction ( over a third ) do better with mid - level features . * spatial pooling : * in the next section , we will explore multi - scale features . one practical hurdle to including all features from all layers is the massive increase in dimensionality . here , we explore strategies for reducing dimensionality through pooled features . we consider various pooling strategies ( sum , average , and max ) , pooling neighborhoods , and normalization post - processing ( with and without l2 normalization ) . we saw good results with average pooling over all spatial locations , followed by l2 normalization . specifically , assume a particular layer is of size , where is the height , is the width , and is the number of filter channels . we compute a feature by averaging across spatial dimensions . we then normalize this feature to have unit norm . we compare this encoding versus the unpooled full - dimensional feature ( also normalized ) in fig . [ fig : full_marg ] . pooled features always do better , implying dimensionality reduction actually helps performance . we verified that this phenomenon was due to over - fitting ; the full features always performed better on training data , but performed worse on test data . this suggests that with additional training data , less - aggressive pooling ( that preserves some spatial information ) may perform better . * multiscale classification : * we now explore multi - scale predictors that process pooled features extracted from multiple layers .
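as a concrete illustration of the pooled encoding described in the spatial pooling paragraph above ( average over all spatial locations , then unit l2 norm ) , the following minimal numpy sketch pools one activation per layer and concatenates the results ; the layer shapes , function names and random inputs are illustrative assumptions rather than the actual caffe / deep19 dimensions .

```python
import numpy as np

def pooled_encoding(activation):
    """average-pool an activation of shape (h, w, c) over its spatial grid,
    then scale the resulting c-dimensional vector to unit l2 norm."""
    feat = activation.mean(axis=(0, 1))          # marginal (spatially pooled) activations
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat

def multiscale_encoding(activations):
    """concatenate pooled encodings from several layers of one forward pass."""
    return np.concatenate([pooled_encoding(a) for a in activations])

# toy usage: random arrays stand in for real conv / relu activations
rng = np.random.default_rng(0)
layers = [rng.random((27, 27, 256)), rng.random((13, 13, 384)), rng.random((1, 1, 4096))]
feature = multiscale_encoding(layers)            # here 256 + 384 + 4096 = 4736 dimensions
```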
as before , we analyze `` off - the - shelf '' pre - trained models . we evaluate performance as we iteratively add more layers . fig . [ fig : layer_mit67 ] suggests that the last relu layer is a good starting point due to its strong single - scale performance . fig . [ fig : add_back_caffe ] plots performance as we add previous layers to the classifier feature set . performance increases as we add intermediate layers , while lower layers prove less helpful ( and may even hurt performance , likely due to over - fitting ) . our observations suggest that high and mid - level features ( , _ parts _ and _ objects _ ) are more useful than low - level features based on _ edges _ or _ textures _ in scene classification . * multi - scale selection : * the previous results show that adding all layers may actually hurt performance . we verified that this was an over - fitting phenomenon ; additional layers always improved training performance , but could decrease test performance due to over - fitting . this appears especially true for multi - scale analysis , where nearby layers may encode redundant or correlated information ( that is susceptible to over - fitting ) . ideally , we would like to search for the `` optimal '' combination of relu layers that maximizes performance on validation data . since there exists an exponential number of combinations ( for relu layers ) , we find an approximate solution with a greedy forward - selection strategy . we greedily select the next - best layer ( among all remaining layers ) to add , until we observe no further performance improvement . as seen in fig . [ fig : forward_select_caffe ] , the optimal result of this greedy approach rejects the low - level features . this is congruent with the previous results in fig . [ fig : add_back_caffe ] . our analysis strongly suggests the importance ( and ease ) of incorporating multi - scale features for classification tasks . for our subsequent experiments , we use scales selected by the forward selection algorithm on mit67 data ( shown in fig . [ fig : forward_select_caffe ] ) . note that we use them for all our experimental benchmarks , demonstrating a degree of cross - dataset generalization in our approach . in this section , we show that the multi - scale model examined in fig . [ fig : forward_select_caffe ] can be written as a dag - structured , feed - forward cnn . importantly , this allows for end - to - end gradient - based learning . to do so , we use standard calculus constructions specifically the chain rule and partial derivatives to generalize back - propagation to layers that have multiple `` parents '' or inputs . though such dag structures have been previously introduced by prior work , we have not seen derivations for the corresponding gradient computations . we include them here for completeness , pointing out several opportunities for speedups given our particular structure . * model : * the run - time behavior of our multi - scale predictor from the previous section is equivalent to feed - forward processing of the dag - structured architecture in fig . [ fig : forward_select_caffe ] . note that we have swapped out a margin - based hinge - loss ( corresponding to an svm ) with a softmax function , as the latter is more amenable to training with current toolboxes . specifically , typical cnns are grouped into collections of four layers , , conv .
,relu , contrast normalization ( norm ) , pooling layers ( with the norm and pooling layers being optional ) .the final layer is usually a -way softmax function that predicts one of outputs .we visualize these layers as a chain - structured `` backbone '' in fig .[ fig : model ] .our dag - cnn simply links each relu layer to an average - pooling layer , followed by a l2 normalization layer , which feeds to a fully - connected ( fc ) layer that produces outputs ( represented formally as a matrix ) .these outputs are element - wise added together across all layers , and the resulting numbers are fed into the final softmax function .the weights of the fc layers are equivalent to the weights of the final multi - scale -way predictor ( which is a softmax predictor for a softmax loss output , and a svm for a hinge - loss output ) .note that all the required operations are standard modules except for the _add_. * training : * let be the cnn model parameters at -th layer , training data be ( ) , where is the -th input image and is the indicator vector of the class of . then we intend to solve the following optimization problem as is now commonplace , we make use of stochastic gradient descent to minimize the objective function . for a traditional _ chain _ model , the partial derivative of the output with respect to any one weight can be recursively computed by the chain rule , as described in the back - prop algorithm . * multi - output layers ( relu ) : * our dag - model is structurally different at the relu layers ( since they have multiple outputs ) and the _ add _ layer ( since it has multiple inputs ) .we can still compute partial derivatives by recursively applying the chain rule , but care needs to be taken at these points .let us consider the -th relu layer in fig .[ fig : backprop_eq ] .let be its input , be the output for its -th output branch ( its child in the dag ) , and let is the final output of the softmax layer .the gradient of with respect to the input of the -th relu layer can be computed as where for the example in fig .[ fig : backprop_eq ] .one can recover standard back - propagation equations from the above by setting : a single back - prop signal arrives at relu unit , is multiplied by the local gradient , and is passed on down to the next layer below . in our dag , _ multiple _ back - prop signals arrive from each branch , each is multiplied by an branch - specific gradient , and their total sum is passed on down to the next layer .-th relu . ] * multi - input layers ( add ) : * let represents the output of a layer with multiple inputs .we can compute the gradient along the layer by applying the chain rule as follows : one can similarly arrive at the standard back - propagation by setting . *special case ( relu ) : * our particular dag architecture can further simplify the above equations. 
firstly , it may be common for multiple - output layers to duplicate the same output for each child branch . this is true of our relu units ; they pass the same values to the next layer in the chain and the current - layer pooling operation . this means the output - specific gradients are identical for those outputs , which simplifies to this allows us to add together multiple back - prop signals before scaling them by the local gradient , reducing the number of multiplications by . we make use of this speed up to train our relu layers . * special case ( add ) : * similarly , our multi - input add layer reuses the same partial gradient for each input which simplifies even further in our case to . the resulting back - prop equations that simplify are given by implying that one can similarly save multiplications . the above equations have an intuitive implementation ; the standard chain - structured back - propagation signal is simply replicated along each of the parents of the add layer . * implementation : * we use the excellent matconvnet codebase to implement our modifications . we implemented a custom add layer and a custom dag data - structure to denote layer connectivity . training and testing are essentially as fast as the chain model . * vanishing gradients : * we point out an interesting property of our multiscale models that makes them easier to train . vanishing gradients refers to the phenomenon that gradient magnitudes decrease as they are propagated through layers , implying that lower layers can be difficult to learn because they receive too small a learning signal . in our dag - cnns , lower layers are _ directly _ connected to the output layer through multi - scale connections , ensuring they receive a strong gradient signal during learning . fig . [ fig : grad ] experimentally verifies this claim : gradient magnitudes at lower layers are larger , implying that they receive a stronger supervised signal from the target label during gradient - based learning . we explore dag - structured variants of two popular deep models , caffe and deep19 . we refer to these models as caffe - dag and deep19-dag . we evaluate these models on three benchmark scene datasets : sun397 , mit67 , and scene15 . in absolute terms , we achieve the best performance ever reported on all three benchmarks , sometimes by a significant margin . * feature dimensionality : * most existing methods that use cnns as feature extractors work with the last layer ( or the last fully connected layer ) , yielding a feature vector of size 4096 .
forward feature selection on caffe - dag selects layers , making the final multiscale feature 9216-dimensional .deep19-dag selects layers , for a final size of 6144 .we perform feature selection by cross - validating on mit67 , and use the same multiscale structure for all other datasets .dataset - dependant feature selection may further improve performance .our final multiscale dag features are _ only 2x larger _ than their single - scale counter part , making them practically easy to use and store .* training : * we follow the standard image pre - processing steps of fixing the input image size to by scaling and cropping , and subtracting out the mean rgb value ( computed on imagenet ) .we initialize filters and biases to their pre - trained values ( tuned on imagenet ) and initialize multi - scale fully - connected ( fc ) weights to small normally - distributed numbers .we perform 10 epochs of learning .* baselines : * we compare our dag models to published results , including two additional baselines .we evaluate the best single - scale `` off - the - shelf '' model , using both caffe and deep19 .we pass l2-normalized single - scale features to liblinear to train -way one - vs - all classifiers with default settings .finally , sec .[ sec : diag ] concludes with a detailed diagnostic analysis comparing off - the - shelf and fine - tuned versions of chain and dag structures .* sun397 : * we tabulate results for all our benchmark datasets in table [ table : all ] , and discuss each in turn .sun397 is a large scene recognition dataset with 100k images spanning 397 categories , provided with standard train - test splits .our dag models outperform their single - scale counterparts . in particular, deep19-dag achieves the highest accuracy .our results are particularly impressive given that the next - best method of ( with a score of ) makes use of a imagenet - trained cnn and a custom - trained cnn using a new 7-million image dataset with 400 scene categories . * mit67 : * mit67 consists of 15k images spanning 67 indoor scene classes , provided with standard train / test splits .indoor scenes are interesting for our analysis because some scenes are well characterized by high - level spatial geometry ( church and cloister ) , while others are better described by mid - level objects ( wine celler and operating room ) in various spatial configurations .we show qualitative results in fig .[ fig : more_eg ] .deep19-dag produces a classification accuracy of , reducing the best - previously reported error by * 23.9%*. interestingly also uses multi - scale cnn features , but do so by first extracting various - sized patches from an image , rescaling each to canonical size .single - scale cnn features extracted from these patches are then vector - quantized into a large - vocabulary codebook , followed by a projection step to reduce dimensionality .our multi - scale representation , while similar in spirit , is an end - to - end trainable model that is computed `` for free '' from a single ( dag ) cnn .* scene15 : * the scene15 includes both indoor scene ( , store and kitchen ) and outdoor scene ( , mountain and street ) .it is a relatively small dataset by contemporary standards ( 2985 test images ) , but we include here for completeness .performance is consistent with the results above .our multi - scale dag model , specifically deep19-dag , outperforms all prior work . for reference ,the next - best method of uses a new custom 7-million image scene dataset for training . 
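as a side note , the single - scale baseline used throughout these comparisons ( l2 - normalized features passed to a linear one - vs - all classifier with default settings ) is straightforward to reproduce ; the sketch below uses scikit - learn's linearsvc as a stand - in for liblinear , with random vectors standing in for real cnn encodings .

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_scene_classifier(features, labels):
    """l2-normalize each feature row, then fit a linear one-vs-all classifier
    (scikit-learn's LinearSVC, default settings, as a stand-in for liblinear)."""
    norms = np.maximum(np.linalg.norm(features, axis=1, keepdims=True), 1e-12)
    return LinearSVC().fit(features / norms, labels)

# toy usage: random vectors stand in for pooled cnn encodings of 15 scene classes
rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 512)), rng.integers(0, 15, size=300)
clf = train_scene_classifier(X, y)
print(clf.score(X / np.linalg.norm(X, axis=1, keepdims=True), y))
```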
in this section , we analyze `` off - the - shelf '' ( ots ) and `` fine - tuned '' ( ft ) versions of both single - scale chain and multi - scale dag models . we focus on the caffe model , as it is faster and easier for diagnostic analysis . * chain : * chain - ots uses single - scale features extracted from cnns pre - trained on imagenet . these are the baseline caffe results presented in the previous subsections . chain - ft trains a model on the target dataset , using the pre - trained model as an initialization . this can be done with standard software packages . to ensure consistency of analysis , in both cases features are passed to a k - way multi - class svm to learn the final predictor . * dag : * dag - ots is obtained by fixing all internal filters and biases to their pre - trained values , and only learning the multiscale fully - connected ( fc ) weights . because this final stage learning is a convex problem , this can be done by simply passing off - the - shelf multi - scale features to a convex linear classification package ( e.g. , svm ) . we compare this model to a fine - tuned version that is trained end - to - end , making use of the modified backprop equation from sec . [ sec : approach ] . * comparison : * fig . [ fig : comp_otf ] compares off - the - shelf and fine - tuned variants of chain and dag models . we see two dominant trends . first , as perhaps expected , fine - tuned ( ft ) models consistently outperform their off - the - shelf ( ots ) counterparts . even more striking is the large improvement from chain to dag models , indicating the power of multi - scale feature encodings . * dag - ots : * perhaps most impressive is the strong performance of dag - ots . from a theoretical perspective , this validates our underlying hypothesis that multi - scale features allow for better transfer between recognition tasks in this case , imagenet and scene classification . an interesting question is whether multi - scale features , when trained with gradient - based dag - learning on imagenet , will allow for even more transfer . we are currently exploring this . however , even with current cnn architectures , our results suggest that _ any system making use of off - the - shelf cnn features should explore multi - scale variants as a `` cheap '' baseline . _ compared to their single - scale counterpart , multiscale features require no additional time to compute , are only a factor of 2 larger to store , and consistently provide a noticeable improvement . * conclusion : * we have introduced multi - scale cnns for image classification . such models encode scale - specific features that can be effectively shared across both coarse and fine - grained classification tasks . importantly , such models can be viewed as dag - structured feedforward predictors , allowing for end - to - end training . while fine - tuning helps performance , we empirically demonstrate that even `` off - the - shelf '' multiscale features perform quite well . we present extensive analysis and demonstrate state - of - the - art classification performance on three standard scene benchmarks , sometimes improving upon prior art by a significant margin .
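for concreteness , the multi - scale head and the add - layer backward rule discussed in sec . [ sec : approach ] can be sketched in a few lines of numpy ; this is a schematic illustration only ( the published implementation is built on a matconvnet dag , and the dimensions , initial weights and loss gradient used below are placeholders rather than the authors' code ) .

```python
import numpy as np

def dag_forward(pooled_feats, weights, biases):
    """multi-scale head: each pooled, normalized encoding feeds its own
    fully-connected layer; the k-dimensional outputs are element-wise added
    and passed through a softmax."""
    scores = sum(W @ f + b for f, W, b in zip(pooled_feats, weights, biases))
    e = np.exp(scores - scores.max())
    return e / e.sum()

def add_layer_backward(grad_out, n_parents):
    """backward rule for the add layer: the incoming gradient is simply
    replicated along each parent branch."""
    return [grad_out.copy() for _ in range(n_parents)]

# toy usage: three scales, 10 output classes
rng = np.random.default_rng(1)
dims = (256, 384, 4096)
feats = [rng.normal(size=d) for d in dims]
weights = [rng.normal(scale=0.01, size=(10, d)) for d in dims]
biases = [np.zeros(10) for _ in dims]
probs = dag_forward(feats, weights, biases)
grad_scores = probs - np.eye(10)[3]              # softmax-loss gradient for true class 3
per_branch = add_layer_backward(grad_scores, n_parents=len(dims))
```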
|
we explore multi - scale convolutional neural nets ( cnns ) for image classification . contemporary approaches extract features from a single output layer . by extracting features from multiple layers , one can simultaneously reason about high , mid , and low - level features during classification . the resulting multi - scale architecture can itself be seen as a feed - forward model that is structured as a directed acyclic graph ( dag - cnns ) . we use dag - cnns to learn a set of multiscale features that can be effectively shared between coarse and fine - grained classification tasks . while fine - tuning such models helps performance , we show that even `` off - the - shelf '' multiscale features perform quite well . we present extensive analysis and demonstrate state - of - the - art classification performance on three standard scene benchmarks ( sun397 , mit67 , and scene15 ) . in terms of the heavily benchmarked mit67 and scene15 datasets , our results reduce the lowest previously - reported error by * 23.9% * and * 9.5% * , respectively .
|
in time series analysis , stationarity requires that dependence structure be sustained over time , and thus we can borrow information from one time period to study model dynamics over another period ; see fan and yao for nonparametric treatments and lahiri for various resampling and block bootstrap methods . in practice , however , many climatic , economic and financial time series are non - stationary and therefore challenging to analyze .first , since dependence structure varies over time , information is more localized .second , non - stationary processes often require extra parameters to account for time - varying structure .one way to overcome these issues is to impose certain local stationarity ; see , for example , dahlhaus and adak for spectral representation frameworks and dahlhaus and polonik for a time domain approach . in this articlewe study a class of modulated stationary processes ( see adak ) where are stationary time series with zero mean , and are unknown constants adjusting for time - dependent variances . then oscillates around the constant mean , whereas its variance changes over time in an unknown manner . in the special case of , ( [ eq : xinons ] ) reduces to stationary case .if for a lipschitz continuous function on ] , the following uniform approximations hold : & & \max_{cn \le j\le n } |\underline{v}^2_j - \sigma^2_j |= \mathrm{o}_\mathrm { p}\{(r^2_n\delta^2_n + \sigma^2_n)/n + \sigma^{*2}_n+r^*_n \delta _ n\}.\label{eq : thm0b}\vspace*{-2pt}\end{aligned}\ ] ] theorem [ thm:0 ] provides quite general results under ( [ eq : sip ] ) .we now discuss sufficient conditions for ( [ eq : sip ] ) .shao obtained sufficient mixing conditions for ( [ eq : sip ] ) . in this article , we briefly introduce the framework in wu .assume that has the causal representation , where are i.i.d .innovations , and is a measurable function such that is well defined .let be an independent copy of .assume proposition [ pro:1 ] below follows from corollary 4 in wu .[ pro:1 ] assume that ( [ eq : pro1con ] ) holds .then ( [ eq : sip ] ) holds with , the optimal rate up to a logarithm factor . for linear process with and , . if , then ( [ eq : sip ] ) holds with . for many nonlinear time series, decays exponentially fast and hence ( [ eq : pro1con ] ) holds ; see section 3.1 of wu . from now on we assume ( [ eq : sip ] ) holds with .[ rmk : moment ] if are i.i.d . with and for some , the celebrated `` hungarian embedding '' asserts that satisfies a strong invariance principle with the optimal rate .thus , it is necessary to have the moment assumption in proposition [ pro:1 ] in order to ensure strong invariance principles for both and in ( [ eq : snsn ] ) with approximation rate . on the other hand, one can relax the moment assumption by loosening the approximation rate .for example , by corollary 4 in wu , assume for some and , then ( [ eq : sip ] ) holds with .as shown in examples [ exmp:3][exmp:4 ] below , and in ( [ eq : volwei ] ) often have tractable bounds . [ exmp:3 ]if is non - decreasing in , then and .if is non - increasing in , then and .if are piecewise constants with finitely many pieces , then . [ exmp:6 ]let for ] ; if , we have the infinite time window as , which may be more reasonable for data with a long time horizon .[ exmp:4 ] if for a slowly varying function such that as for all . then we can show or and or , depending on whether or .for the boundary case , assume uniformly , then .similarly , . 
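to make the modulated stationary model concrete , the following sketch simulates from ( [ eq : xinons ] ) with a stationary ar(1) error sequence and a smooth , time - varying scale ; both the ar(1) recursion and the particular cosine - shaped scale are illustrative choices for this sketch , not specifications taken from the paper .

```python
import numpy as np

def simulate_modulated(n, mu=0.0, rho=0.5, seed=0):
    """draw x_i = mu + sigma_i * e_i, i = 1..n, with a stationary ar(1)
    error sequence and a smooth, time-dependent scale sigma_i."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    e = np.zeros(n)
    for i in range(1, n):                         # e_i = rho * e_{i-1} + eps_i
        e[i] = rho * e[i - 1] + eps[i]
    t = np.arange(1, n + 1) / n
    sigma = 1.0 + 0.5 * np.cos(2 * np.pi * t)     # illustrative time-varying scale
    return mu + sigma * e

x = simulate_modulated(1000)
```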
in this section we establish a self - normalized clt for the sample average . to understand how non - stationarity makes this problem difficult, elementary calculation shows where . in the stationary case , under condition , , the long - run variance in ( [ eq : fanyao ] ) . for non - constant variances , it is difficult to deal with directly , due to the large number of unknown parameters and complicated structure .see de jong and davidson for a kernel estimator of under a near - epoch dependent mixing framework . to attenuate the aforementioned issue, we apply the uniform approximations in theorem [ thm:0 ] .assume that ( [ eq : thmtestcon ] ) below holds .note that the increments of standard brownian motions are i.i.d .standard normal random variables . by ( [ eq : thm0a ] ) , is equivalent to in distribution . by ( [ eq : thm0b ] ) , in probability . by slutskys theorem , we have proposition [ cor:1 ] .[ cor:1 ] let ( [ eq : sip ] ) hold with . for in ( [ eq : volwei])([eq : omegan ] ) , assume recall in ( [ eq : fj ] ) .then as , . consequently , a asymptotic confidence interval for is , where is a consistent estimate of ( section [ sec : lrv ] below ) , and is standard normal quantile .proposition [ cor:1 ] is an extension of the classical clt for i.i.d .data or stationary processes to modulated stationary processes . if are i.i.d . , then . in proposition[ cor:1 ] , can be viewed as the variance inflation factor due to the dependence of . for stationary data ,the sample variance is a consistent estimate of the population variance . here , for non - constant variances case ( [ eq :xinons ] ) , by ( [ eq : thm0b ] ) in theorem [ thm:0 ] , can be viewed as an estimate of the time - average `` population variance ''so , we can interpret the clt in proposition [ cor:1 ] as a self - normalized clt for modulated stationary processes with the self - normalizing term , adjusting for non - stationarity due to and , accounting for dependence of .clearly , parameters are canceled out through self - normalization . finally , condition ( [ eq : thmtestcon ] ) is satisfied in example [ exmp:6 ] with and example [ exmp:4 ] with . in classical statistics ,the width of confidence intervals usually shrinks as sample size increases . by proposition [ cor:1 ] and theorem [ thm:0 ] , the width of the constructed confidence interval for is proportional to or , equivalently , .thus , a necessary and sufficient condition for shrinking confidence interval is , which is satisfied if .an intuitive explanation is as follows .for i.i.d .data , sample mean converges at a rate of . in ( [ eq : xinons ] ) ,if grows faster than , the contribution of a new observation is negligible relative to its noise level .[ exmp : ci ] if with , the length of confidence interval is proportional to .in particular , if for some positive constants and , then achieves the optimal rate .if , then .the same idea can be extended to linear combinations of means over multiple time periods .suppose we have observations from consecutive time periods , each of the form ( [ eq : xinons ] ) with different means , denoted by , and each having time - dependent variances .let for given coefficients .for example , if we are interested in mean change from to , we can take ; if we are interested in whether the increase from to is larger than that from to , we can let .proposition [ thm : lcm ] below extends proposition [ cor:1 ] to multiple means .[ thm : lcm ]let . 
for ,denote its sample size and its sample average .assume that ( [ eq : thmtestcon ] ) holds for each individual time period and , for simplicity , that are of the same order .then ^ 2 \biggr\}.\ ] ] recall in ( [ eq : xinons ] ) .suppose we are interested in the self - normalized statistic for problems with small sample sizes , it is natural to use bootstrap distribution instead of the convergence in proposition [ cor:1 ] .wu and liu have pioneered the work on the wild bootstrap for independent data with non - identical distributions .we shall extend their wild bootstrap procedure to the modulated stationary process ( [ eq : xinons ] ) .let be i.i.d .random variables independent of satisfying .define the self - normalized statistic based on the following new data : clearly , inherits the non - stationarity structure of by writing with . on the other hand , for the new error process , and for .thus , is a white noise sequence with long - run variance one . by proposition [ cor:1 ] ,the scaled version is robust against the dependence structure of , so we expect that should be close to in distribution .[ thm : bootstrap ] let the conditions in proposition [ cor:1 ] hold .further assume let be a consistent estimate of .denote by the conditional law given .then theorem [ thm : bootstrap ] asserts that , behaves like the scaled version , with the scaling factor coming from the dependence of . herewe use the sample mean in ( [ eq : xinons ] ) to illustrate a wild bootstrap procedure to obtain the distribution of in proposition [ cor:1 ] .a. apply the method in section [ sec : lrv ] to to obtain a consistent estimate of .b. subtract the sample mean from data to obtain .c. generate i.i.d .random variables satisfying .d. based on in ( ii ) and in ( iii ) , generate bootstrap data , and compute where is a long - run variance estimate ( see section [ sec : lrv ] ) for bootstrap data .e. repeat ( iii)(iv ) many times and use the empirical distribution of those realizations of as the distribution of .the proposed wild bootstrap is an extension of that in liu for independent data to modulated stationary case , and it has two appealing features .first , the scaling factor makes the statistic independent of the dependence structure .second , the bootstrap data - generating mechanism is adaptive to unknown time - dependent variances . for the distribution of in step ( iii ) ,we use , which has some desirable properties .for example , it preserves the magnitude and range of the data .as shown by davidson and flachaire , for certain hypothesis testing problems in linear regression models with symmetrically distributed errors , the bootstrap distribution is exactly equal to that of the test statistic ; see theorem 1 therein . for the purpose of comparison, we briefly introduce the widely used block bootstrap for a stationary time series with mean . by ( [ eq : fanyao ] ) ,suppose that we want to bootstrap the distribution of .let be defined as in section [ sec : lrv ] below .the non - overlapping block bootstrap works in the following way : a. take a simple random sample of size with replacement from the blocks , and form the bootstrap data by pooling together for which the index is within those selected blocks .b. let be the sample average of .compute , where is the conditional expectation of given . c. repeat ( i)(ii ) many times and use the empirical distribution of s as the distribution of . 
in step ( ii ), another choice is the studentized version , where is a consistent estimate of based on bootstrap data . assuming stationarity and , the blocks are asymptotically independent and share the same model dynamics as the whole data , which validates the above block bootstrap .we refer the reader to lahiri for detailed discussions . for a non - stationary process , block bootstrap is no longer valid , because individual blocks are not representative of the whole data .by contrast , the proposed wild bootstrap is adaptive to unknown dependence and the non - constant variances structure . to test a change point in the mean of a process , two popular cusum - type tests ( see section 3 of robbins _ et al . _ for a review and related references ) are where is a consistent estimate of the long - run variance of , and here ( in our simulation studies ) is a small number to avoid the boundary issue . for i.i.d .data , is proportional to the variance of , so is a studentized version of . for i.i.d .gaussian data , is equivalent to likelihood ratio test ; see csrg and horvth .assume that , under null hypothesis , \biggr\ } _{ 0\le t\le1 } \rightarrow\tau\{b_t\}_{0\le t\le1},\qquad \mbox{in the skorohod space}\ ] ] for a standard brownian motion .the above convergence requires finite - dimensional convergence and tightness ; see billingsley . by the continuous mapping theorem , and .for the modulated stationary case ( [ eq : null ] ) , ( [ eq : cusuma ] ) is no longer valid .moreover , since and do not take into account the time - dependent variances , an abrupt change in variances may lead to a false rejection of when the mean remains constant .for example , our simulation study in section [ sec : power ] shows that the empirical false rejection probability for and is about for nominal level . to alleviate the issue of non - constant variances, we adopt the self - normalization approach as in previous sections .recall and in ( [ eq : fj ] ) . for each fixed , by theorem [ thm:0 ] and slutsky s theorem , in distribution ,assuming the negligibility of the approximation errors .therefore , the self - normalization term can remove the time - dependent variances . in light of this, we can simultaneously self - normalize the two terms and in ( [ eq : sxj ] ) and propose the self - normalized test statistic here , is defined as in ( [ eq : fj ] ) , with .[ thm : test ] assume ( [ eq : sip ] ) holds .let be as in ( [ eq : thmtestcon ] ) . under , we have where by theorem [ thm : test ] , under , is asymptotically equivalent to . due to the self - normalization , for each , the time - dependent variances are removed and has a standard normal distribution .however , and are correlated for .therefore , is a non - stationary gaussian process with a standard normal marginal density . due to the large number of unknown parameters , it is infeasible to obtain the null distribution directly . on the other hand ,theorem [ thm : test ] establishes the fact that , asymptotically , the distribution of in ( [ eq : tstar ] ) depends only on and is robust against the dependence structure of , which motivates us to use the wild bootstrap method in section [ sec : wild ] to find the critical value of .a. compute and find .b. divide the data into two blocks and . within each block ,subtract the sample mean from the observations therein to obtain centered data .pool all centered data together and denote them by . c. based on ,obtain an estimate of .see section [ sec : lrv ] below .d. 
compute the test statistic in ( [ eq : tstar ] ) .e. based on in ( ii ) , use the wild bootstrap method in section [ sec : wild ] to generate synthetic data , and use ( i)(iv ) to compute the bootstrap test statistic based on the bootstrap data .f. repeat ( v ) many times and find quantile of those . as argued in section [ sec : wild ] , the synthetic data - generating scheme ( v ) inherits the time - varying non - stationarity structure of the original data . also , the statistic is robust against the dependence structure , which justifies the proposed bootstrap method .if is rejected , the change point is then estimated by .if there is no evidence to reject , we briefly discuss how to apply the same methodology to test , that is , whether there is a change point in the variances . by ( [ eq : xinons ] ), we have , where has mean zero .therefore , testing a change point in the variances of is equivalent to testing a change point in the mean of the new data . to apply the results in sections [ sec : cltx][sec : cusum ] , we need a consistent estimate of the long - run variance .most existing works deal with stationary time series through various block bootstrap and subsampling approaches ; see lahiri and references therein . assuming a near - epoch dependent mixing condition , de jong and davidson established the consistency of a kernel estimator of , and their result can be used to estimate in ( [ eq : lrvnon ] ) for the clt of .however , for the change point problem in section [ sec : cusum ] , we need an estimator of the long - run variance of the unobservable process , so the method in de jong and davidson is not directly applicable .to attenuate the non - stationarity issue , we extend the idea in section [ sec : cltx ] to blockwise self - normalization .let be the block length .denote by the largest integer not exceeding .ignore the boundary and divide into blocks recall the overall sample mean .for each block , define the self - normalized statistic }{v(j ) } , \qquad\mbox{where } \bar{x}(j)=\frac{1}{k_n } \sum_{i\in\mathcal{i}_j } x_i , v^2(j ) = \sum_{i\in\mathcal{i}_j } [ x_i-\bar{x}(j)]^2.\ ] ] by proposition [ cor:1 ] , the self - normalized statistics are asymptotically i.i.d .thus , we propose estimating by as in ( [ eq : volwei])([eq : omegan ] ) , we define the quantities on block [ thm:3 ] let ( [ eq : sip ] ) hold with .recall in ( [ eq : volwei])([eq : omegan ] ) .define assume that and then .consequently , is a consistent estimate of .consider example [ exmp:6 ] with .then . for , it can be shown that the optimal rate is when .in example [ exmp:4 ] with for some , elementary but tedious calculations show that the optimal rate is ,\vspace*{5pt}\cr n^{{(\beta-1)}/{(5 - 4\beta)}}\{\log(n)\}^{{(8(1-\beta))}/{(5 - 4\beta ) } } , \qquad k_n\asymp n^{{(4.5 - 4\beta)}/{(5 - 4\beta)}}\{\log(n)\}^{{4}/{(5 - 4\beta ) } } , \vspace*{2pt}\cr \quad \beta\in(3/4,1).}\ ] ] the self - normalization approaches in sections [ sec : cltx][sec : lrv ] can be extended to linear regression model ( [ eq : lr ] ) with modulated stationary time series errors .the approach in phillips , sun and jin is not applicable here due to non - stationarity . 
for simplicity, we consider the simple case that and .hansen studied a similar setting for martingale difference errors .denote by and the simple linear regression estimates of and given by then simple algebra shows that the latter expressions are linear combinations of .thus , by the same argument in proposition [ cor:1 ] and theorem [ thm:0 ] , we have self - normalized clts for and .[ thm : lr ] let and .assume that and satisfy condition ( [ eq : thmtestcon ] ) .then as , \frac{n^2(\hat\beta_1-\beta_1)}{6 v_{n,1 } } & \rightarrow & n(0,\tau^2 ) , \qquad\mbox{where } v_{n,1}^2 = \sum^n_{i=1 } ( 2i - n-1)^2 ( x_i-\hat\beta_0-\hat\beta_1 i / n)^2.\end{aligned}\ ] ] the long - run variance can be estimated using the idea of blockwise self - normalization in section [ sec : lrv ] .let and be defined as in section [ sec : lrv ] .then we propose here , are asymptotically i.i.d .normal random variables with mean zero and variance .consistency can be established under similar conditions as in theorem [ thm:3 ] . for the general linear regression model ( [ eq : lr ] ) , the linearly weighted average structure of linear regression estimates allows us to obtain self - normalized clts as in theorem [ thm : lr ] under more complicated conditions . also , it is possible to extend the proposed method to the nonparametric regression model with time - varying variances where is a nonparametric time trend of interest .nonparametric estimates , for example , the nadaraya watson estimate , are usually based on locally weighted observations .the latter feature allows us to derive similar self - normalized clt .however , the change point problem for ( [ eq : lr ] ) and ( [ eq : npm ] ) will be more challenging , and aue _ et al . _ studied ( [ eq : lr ] ) for uncorrelated errors with constant variance . also , it is more difficult to address the bandwidth selection issues ; see altman for related contribution when .it remains a direction of future research to investigate ( [ eq : lr ] ) and ( [ eq : npm ] ) .recall that in ( [ eq : lrvest ] ) are asymptotically i.i.d .normal random variables . to get a sensible choice of the block length parameter , we propose a simulation - based method by minimizing the empirical mean squared error ( mse ) : a. simulate i.i.d .standard normal random variables .b. based on , obtain with block length .c. repeat ( i)(ii ) many times , compute empirical as the average of realizations of , and find the optimal by minimizing .we find that the optimal block length is about 12 for , about 15 for , about 20 for and about 25 for .let sample size .recall and in ( [ eq : xinons ] ) . for , consider four choices : where is the standard normal density , and is the indicator function .the sequences a1a4 exhibit different patterns , with a piecewise constancy for a1 , a cosine shape for a2 , a sharp change around time for a3 and a gradual downtrend for a4 .let be i.i.d .n(0 , 1 ) . for ,we consider both linear and nonlinear processes . for b1 , by wu , ( [ eq : pro1con ] ) holds . by andel , netuka and svara , and . to examine how the strength of dependence affects the performance , we consider , representing independence , intermediate and strong dependence , respectively . for b2 with , ( [ eq : sip ] ) holds with , and we consider three cases . to assess the effect of block length , three choices are usedthus , we consider all 72 combinations of . 
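before turning to the coverage results , the blockwise self - normalized variance estimate of sec . [ sec : lrv ] and the simulation - based choice of the block length described above can be sketched as follows ; since the displayed formulas were stripped from this text , the exact normalization used below ( block statistics of the form k times the centered block mean divided by v(j ) , with the estimate taken as the average of their squares ) is an assumption consistent with the surrounding description rather than the authors' exact definition .

```python
import numpy as np

def tau2_hat(x, k):
    """blockwise self-normalized long-run variance estimate (assumed form:
    t_j = k * (block mean - overall mean) / v(j), estimate = mean of t_j**2)."""
    n = len(x)
    blocks = n // k
    xbar = x[: blocks * k].mean()
    stats = []
    for j in range(blocks):
        blk = x[j * k:(j + 1) * k]
        v = np.sqrt(np.sum((blk - blk.mean()) ** 2))
        stats.append(k * (blk.mean() - xbar) / v)
    return float(np.mean(np.square(stats)))

def select_block_length(n, candidates, reps=200, seed=0):
    """simulation-based choice of the block length: for i.i.d. n(0,1) data the
    target long-run variance is 1, so pick the k with smallest empirical mse."""
    rng = np.random.default_rng(seed)
    mse = {k: np.mean([(tau2_hat(rng.standard_normal(n), k) - 1.0) ** 2
                       for _ in range(reps)])
           for k in candidates}
    return min(mse, key=mse.get), mse

best_k, _ = select_block_length(400, candidates=[8, 10, 12, 15, 20, 25])
```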
without loss of generality we examine coverage probabilitiesbased on realized confidence intervals for in ( [ eq : xinons ] ) .we compare our self - normalization - based confidence intervals to some stationarity - based methods . for ( [ eq : xinons ] ) , if we pretend that the error process is stationary , then we can use ( [ eq : fanyao ] ) to construct an asymptotic confidence interval for . under stationarity, the long - run variance of can be similarly estimated through the block method in section [ sec : lrv ] by using the non - normalized version ] for , ] .note that the latter confidence interval for does not cover zero , which provides further evidence for and the existence of a change point at year 1880 .the data set consists of quarterly u.s . gross national product ( gnp ) growth rates from the first quarter of 1947 to the third quarter of 2002 ; see section 3.8 in shumway and stoffer for a stationary autoregressive model approach .however , the plot in figure [ fig : gnp ] suggests a non - stationary pattern : the variation becomes smaller after year 1985 whereas the mean level remains constant .moreover , the stationarity test in kwiatkowski _et al . _ provides fairly strong evidence for non - stationarity with a p - value of 0.088 . with the block length , we obtain the corresponding p - values , and hence there is no evidence to reject the null hypothesis of a constant mean . based on ,the wild bootstrap confidence interval for is $ ] . to test whether there is a change point in the variance , by the discussion in the last paragraph of section [ sec : cusum ] , we consider . with ,the corresponding p - values are , indicating strong evidence for a change point in the variance at year 1984 . in summary , we conclude that there is no change point in the mean level , but there is a change point in the variance at year 1984 .proof of theorem [ thm:0 ] let . by the triangle inequality , we have .recall in ( [ eq : sip ] ) . by the summation by parts formula , ( [ eq : thm0a ] ) follows via by kolmogorov s maximal inequality for independent random variables , for , =\delta^{-2}.\quad\ ] ] thus , by ( [ eq : tnsip ] ) , .observe that by ( [ eq : sip ] ) , the same argument in ( [ eq : tnsip ] ) and ( [ eq : maxin ] ) shows , uniformly .the desired result then follows via ( [ eq : p1a ] ) .proof of theorem [ thm : bootstrap ] denote by the standard normal distribution function . by proposition [ cor:1 ] and slutsky s theorem , for each fixed . since is a continuous distribution , .it remains to prove , in probability .notice that , conditioning on , are independent random variables with zero mean . by the berry essen bound in bentkus , bloznelis and gtze , there exists a finite constant such that where denotes conditional expectations given .clearly , and .thus , under the assumption , we have .meanwhile , by the proof of theorem [ thm:0 ] , .therefore , the desired result follows from ( [ eq : pfbta ] ) in view of ( [ eq : bootcon ] ) .proof of theorem [ thm : test ] for , . for in ( [ eq : sxj ] ) , by ( [ eq : thm0a ] ) , we have , where by ( [ eq : thm0b ] ) , , where for , .thus , condition ( [ eq : thmtestcon ] ) implies and .therefore , uniformly over , by ( [ eq : maxin ] ) , .thus , the result follows in view of .proof of theorem [ thm:3 ] condition implies . by ( [ eq : thm0b ] ) , define .clearly , are independent standard normal random variables .thus , . by ( [ eq : thm0a ] ) ,recall the definition of in ( [ eq : dj ] ) . 
by the same argument in ( [ eq : thm0a ] ) , using as , we have \{1+\mathrm{o}(\omega_j)\ } + \mathrm{o}_\mathrm { p}\biggl\{\frac{k_n\sigma_n}{n\sigma(j ) } \biggr\}\\ & = & \tau u_j + \mathrm{o}_\mathrm { p}\biggl\ { \sqrt{\log(n ) } m_n + \frac{\sigma_ n}{\ell_n \sigma(j ) } \biggr\}.\end{aligned}\ ] ] by the latter expression and , we can easily verify .we are grateful to the associate editor and three anonymous referees for their insightful comments that have significantly improved this paper .we also thank amanda applegate for help on improving the presentation and kyung - ja ha for providing us the seoul precipitation data .zhao s research was partially supported by nida grant p50-da10075 - 15 .the content is solely the responsibility of the authors and does not necessarily represent the official views of the nida or the nih .
|
we study statistical inferences for a class of modulated stationary processes with time - dependent variances . due to non - stationarity and the large number of unknown parameters , existing methods for stationary , or locally stationary , time series are not applicable . based on a self - normalization technique , we address several inference problems , including a self - normalized central limit theorem , a self - normalized cumulative sum test for the change - point problem , a long - run variance estimation through blockwise self - normalization , and a self - normalization - based wild bootstrap . monte carlo simulation studies show that the proposed self - normalization - based methods outperform stationarity - based alternatives . we demonstrate the proposed methodology using two real data sets : annual mean precipitation rates in seoul from 17712000 , and quarterly u.s . gross national product growth rates from 19472002 .
|
the security of the infrastructure in modern society is of great importance .systems like internet , power grids , transportation and fuel distribution networks need to be robust and capable of surviving from random failures or intentional attacks .many processes taking place on networks might be significantly influenced and showing low degree of tolerance to damage on their structures .examples of such processes in nature and society include epidemic spreading , synchronization , random walks , traffic and opinion formation .therefore , the robustness for different network structures was intensively studied in the past decade .it is also revealed that the shortest path and graph spectrum can be employed to estimate the network robustness .moreover , interdependent network is proposed to model the catastrophic cascade of failures in real systems . in a recent work , a new measure for network robustness under malicious attack on nodesis proposed .this measurement , which we call node - robustness in this paper , considers the size of the largest component during all possible malicious attacks , namely , where is the number of nodes in the network and is the fraction of nodes in the largest connected cluster after removing nodes .the normalization factor makes robustness of networks with different sizes comparable .a robust network is generally corresponding to a large value . with this measurement , a greedy algorithmis designed to enhance the node - robustness in real systems and large improvement is observed even a small number of links are modified .moreover , the optimal structure for node - robustness is found to be an onion structure in which high - degree nodes are highly connected with rings of nodes with decreasing degree surrounding . lately , some simple methods were also proposed to generate such robust onion networks . however , the analysis in ref . is only based on the targeted attacks on nodes . in reality, failures can happen in connections between nodes as well .for example , the power cables can be dysfunctional and some airlines can be blocked due to the terrible weather or terrorist attacks . in this paper , we propose a link - robustness index ( ) to measure the ability of network to resist link failures .we find that solely enhancing can not always improve and the network structure for optimal is far different from the onion network . in order to design robust network resistant to different kinds of malicious attacks ,we propose a greedy algorithm aiming for both and improvement . to validate the robustness of the resultant networks , we examined them against more realistic attack strategy which combines both nodes and links failure .since the manipulation of real network always confront certain economical constraints , we finally took these requirements into consideration in our method and some significant improvement in both and are still obtained .since a robust network should be able to resist the most destructive attack , we begin our analysis by comparing the harmfulness caused by different malicious attack strategies on links . the most destructive attack is supposed to destroy the most important " links in the networks . like ref . 
, we monitor the size of giant component to estimate how the network gets destroyed after these important " links are removed step by step .there are many methods to measure the importance " of links , here we mainly consider three indexes to identify the most important link to delete .the indexes include edge - betweenness , link clustering coefficient and degree product .we also use the random link removal as a benchmark for comparison . in order to simulate a more harmful strategy, we apply a dynamical approach in which the importance " of the links ( i.e. edge - betweenness , link clustering coefficient and degree product ) are recalculated during the attack .fig . 1 reports how the relative size of the giant component changes with the fraction of links removed by different strategies in a barabasi - albert ( ba ) network model .obviously , the most destructive strategy is the one based on the edge - betweenness .this is because the links with high betweeness are with many shortest paths passing through .if they are cut , many nodes can not communicate with each other and the networks are likely to break into pieces . specifically , though the degree - based node attack strategy can make a severe damage to the network , cutting the links connecting high degree nodes leads to even less harmful effect than the random removal method to the network connectivity .this is reasonable because the hubs can be strongly connected with each other , and this is well known as the rich - club phenomenon . with the fraction of links removed by different strategies in barabasi - albert ( ba ) networks .the original ba network is with and .the results are averaged over independent realizations.,width=264 ] of ba networks with and , the corresponding -optimized and -optimized networks .the results are averaged over independent realizations.,width=264 ] according to the analysis above , we propose a link - robustness index ( ) based on the highest edge - betweenness attack strategy as where is the total number of links .this measure captures the network response to any fraction of link removal . apparently ,if a network is robust against link attack , its should be relatively large . in ref . , it is found that the most robust structure for node attack is the onion - like network , which is corresponding to the topology with maximum .however , it is still unclear whether this structure is tolerant to the link attack as well . in principle, a robust network should have both large and since both the nodes and links can fail due to some unexpected accidents .we therefore report the in ba networks and the corresponding onion networks in fig.2 .interestingly , despite the onion networks is resistant to malicious node attack , it is weaker than the original ba networks with respect to the intentional link attack .more specifically , the in onion network is lower than the ba model ( for detail value , see table i ) .therefore , it is necessary to design a structural manipulating method to enhance the link - robustness for networks . since changing the degree of a nodeis commonly assumed to be particular more expensive than changing the connections , we keep invariant the degree of each node in our algorithm . starting from an original network, we swap the connections of two randomly chosen edges , i.e. 
, we randomly select two edges and ( which connect node with node , and node with node , respectively ) , then change them to and only if .we then repeat this procedure with another randomly chosen pair of edges until no further substantial improvement is achieved for a given large number of consecutive swapping trials ( here , we set it as ) . in fig .2 , we can clearly see that the can be significantly improved by the algorithm .compared to the original ba network , can be increased by ( see table i for detail value ) .[ cols="<,<,<,<,<,<,<,<",options="header " , ] when different networks are attacked by the mixed strategy .the original network is usair and the fraction of node failure is set as .the results are averaged over independent realizations ., width=264 ] when different networks are attacked by the mixed strategy .the original network is grid and the fraction of node failure is set as .the results are averaged over independent realizations.,width=264 ] in real systems , the failures can actually happen in not only nodes but also links .for example , heavy snow can break some power cables and aircraft mechanical problem can block certain airlines .therefore , when designing the robust networks , we should take both and into account . in order to achieve this objective ,we propose a hybrid greedy algorithm to manipulate the network structure for better robustness .different from the process in the previous section , we swap the connections of two randomly chosen edges only if both and are improved .the swapping process stops if there is no improvement in trials . besides the ba network model, we further consider two real systems : ( 1 ) usair : the network of us air transportation system , which contains airports and airlines .( 2 ) grid : an electrical power grid of western europe ( mainly portugal and spain ) , with nodes representing generators , and links corresponding to the high - voltage transmission lines between them .this network contains nodes and links .both real networks are well connected and without any isolated component . for each network mentioned above, we obtained the corresponding -optimized , -optimized and hybrid - optimized networks by the greedy algorithm and the related results are given in table i. as we can see from the ba model and usair network , optimizing can not guarantee the improvement of and optimizing can not always increase either .however , the hybrid method can improve both and from the original networks . more specifically , the and the are increased respectively by and in the usair network . in the grid network ,the improvement of is and the increment of can reach even . 
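to illustrate the procedure , the two robustness scores and the hybrid swap step described above can be sketched with networkx as follows ; this is a slow , schematic version ( the stopping count , tie - breaking and any efficiency tricks used in the actual study are not reproduced , and the parameter values are placeholders ) .

```python
import random
import networkx as nx

def node_robustness(G):
    """node-robustness: repeatedly delete the currently highest-degree node
    (recalculated), sum the giant-component fractions over the removals and
    normalize by n (the final removal leaves an empty graph and adds zero)."""
    H, n, total = G.copy(), G.number_of_nodes(), 0.0
    for _ in range(n - 1):
        v = max(H.degree, key=lambda kv: kv[1])[0]
        H.remove_node(v)
        total += max(len(c) for c in nx.connected_components(H)) / n
    return total / n

def link_robustness(G):
    """link-robustness: repeatedly delete the edge with the highest
    (recomputed) edge betweenness and average the giant-component fraction."""
    H, n, m, total = G.copy(), G.number_of_nodes(), G.number_of_edges(), 0.0
    for _ in range(m):
        eb = nx.edge_betweenness_centrality(H)
        H.remove_edge(*max(eb, key=eb.get))
        total += max(len(c) for c in nx.connected_components(H)) / n
    return total / m

def hybrid_optimize(G, max_failures=20, seed=0):
    """degree-preserving edge swaps, accepted only when both scores improve."""
    rng = random.Random(seed)
    rn, rl, failures = node_robustness(G), link_robustness(G), 0
    while failures < max_failures:
        (a, b), (c, d) = rng.sample(list(G.edges()), 2)
        if len({a, b, c, d}) < 4 or G.has_edge(a, d) or G.has_edge(c, b):
            failures += 1
            continue
        G.remove_edges_from([(a, b), (c, d)])
        G.add_edges_from([(a, d), (c, b)])
        rn_new, rl_new = node_robustness(G), link_robustness(G)
        if rn_new > rn and rl_new > rl:
            rn, rl, failures = rn_new, rl_new, 0
        else:                                    # revert the swap otherwise
            G.remove_edges_from([(a, d), (c, b)])
            G.add_edges_from([(a, b), (c, d)])
            failures += 1
    return G, rn, rl

G, rn, rl = hybrid_optimize(nx.barabasi_albert_graph(30, 2, seed=1))
```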
compared with the -optimized and -optimized networks , the hybrid - optimized networks do not have the advantage in any single aspect of robustness , but they keep a reasonable balance between and . as we mentioned in the introduction , the network robustness was formerly characterized by the spectrum of the adjacency matrix ( ) ; we show , however , that the spectrum index has a certain positive correlation with but does not have an obvious relation to . therefore , it actually only represents the node - robustness and can not reflect the network robustness against link attack . the topology properties of the resultant networks are also analyzed . the result in table i shows that the hybrid - optimized networks usually have larger assortativity , smaller average shortest path length and lower clustering coefficient than the original networks . it has been revealed that the optimal structure for is the onion structure in which nodes with almost the same degree are connected , so the most significant feature of the -optimized network is the large assortativity . for the aspect of , since the most destructive attack strategy is based on the highest load ( edge - betweenness ) , the less significant the community structure is , the higher will be . consequently , a network robust against link attack should have a small average shortest path length and a low clustering coefficient . unlike the onion network , the -optimized networks usually do not have a large assortativity , which explains why the onion network does not have a high . the resultant networks from the hybrid algorithm finally carry these topology properties from both the -optimized and -optimized networks . ( figure caption : value of different networks when changes from to ; the original network is usair ; the results are averaged over independent realizations . ) ( figure caption : value of different networks when changes from to ; the original network is grid ; the results are averaged over independent realizations . ) since the failures in nodes and links can happen simultaneously , a robust network should be able to resist the attack from both ways . one interesting aspect to consider is to see how the networks in table i react to the attack combining node failures and link failures . accordingly , we design a mixed attack strategy in which the nodes will be removed with probability and the links will be cut with probability in each step . the procedure goes on until the size of the giant component reaches . we first set as an example and report in figs . 3 and 4 the performance of the networks in table i. the results show that the hybrid - optimized networks preserve the giant component most effectively . we then consider the mixed attack process with varying from to . when , the process is just the pure highest load ( edge - betweenness ) attack on links . when , it returns to the largest degree attack on nodes . here , we are mainly interested in the situation where . in order to estimate in which range of the hybrid - optimized network has an advantage , we generalize the definition of robustness to a quantity in the mixed attack process , where is the total number of steps to entirely destroy the network connectivity . measures how tolerant a network is against the malicious attack ( which can be node attack , link attack or mixed ) .
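a schematic sketch of this mixed attack and the generalized score follows ( again with networkx ; the probability , the stopping threshold and the normalization are treated as assumptions , since the corresponding symbols are missing from this text ) .

```python
import random
import networkx as nx

def mixed_robustness(G, p, stop_size=1, seed=0):
    """mixed attack: at every step delete the highest-degree node with
    probability p, otherwise cut the highest-betweenness edge; stop once the
    giant component is no larger than stop_size, and average the giant-component
    fraction over the steps taken (an assumed reading of eq. (2))."""
    rng = random.Random(seed)
    H, n, sizes = G.copy(), G.number_of_nodes(), []
    while True:
        if rng.random() < p or H.number_of_edges() == 0:
            v = max(H.degree, key=lambda kv: kv[1])[0]
            H.remove_node(v)
        else:
            eb = nx.edge_betweenness_centrality(H)
            H.remove_edge(*max(eb, key=eb.get))
        giant = max((len(c) for c in nx.connected_components(H)), default=0)
        sizes.append(giant / n)
        if giant <= stop_size:
            break
    return sum(sizes) / len(sizes)

score = mixed_robustness(nx.barabasi_albert_graph(30, 2, seed=1), p=0.5)
```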
according to eq .( 2 ) , when the and when .the value of the networks in table i under different are reported in fig .obviously , the original networks performs worst under any .the -optimized networks can indeed improve the value when is large .however , they do nt have too much advantage when is small . more specifically , in the usair network ( see fig .5 ) , the -optimized network has almost the same when is smaller than .the -optimized network can significant improve the value when is small , but drops nearly back to the original network level when is large .the similar trend can be seen also in the grid network ( fig .these phenomena indicate that the -optimized network is very sensitive to link attack while the -optimized network is fragile when attacked by nodes .the hybrid - optimized networks , however , perform very stable under different attack situations ( i.e. , different ) , which suggests that the hybrid - optimized network is a much more reliable structure in reality , especially when the fraction of node and link failure is unknown .moreover , compared to the -optimized and -optimized networks , the hybrid - optimized network can even enjoy a higher value in certain range of ( in the usair network and in the grid network ) .in other words , when the network is attacked by both links and nodes , the hybrid - optimized network seems to be the most robust structure . finally , we consider some economical constraint on improving the robustness in the real system .first of all , the total length ( geographically calculated ) of links can not be exceedingly large .secondly , the number of changes of links should be relatively small .therefore , for reconstructing the real networks like usair and grid , we add two more constraints to the greedy algorithm : the swap of two links is only accepted if the total geographic length of edges does not increase , and both and are increased more than certain values ( denoted as and ) . with the strong constraints , and of real networks can still be significantly improved . specifically , with only links changed , the and of the usair network can be respectively increased by and ( : from to . : from to ) . in the grid network, the can be improved by ( from to ) and the can be improved by ( from to ) with only links changed .how to enhance the robustness of networks is an important topic , which is related to protecting the real system from random failures and malicious attacks . in the former literatures, most of the works focused on proposing methods to improve the network robustness for the attack on nodes .however , the connections between nodes can be also damaged due to some unexpected accidents , which requires us to take the link failure into account when designing robust networks . in this paper , based on the highest load attack strategy , we propose the link - robustness index to estimate how the network can resist to the most destructive targeted attack on links .moreover , we designed a hybrid greedy algorithm to enhance both node - robustness and link - robustness . 
when attacked by the strategy combining node and link failure ,the resultant networks from the hybrid method outperform the networks from solely improving either or .finally , some economical constraints are considered when enhancing the robustness of real networks and some significant improvement are observed .although the hybrid method can obtain a reliable network which is generally robust to the attack mixed with node failures and link failures , there are still some further improvement can be achieved . in real system , the probability of the node failure and link failure can be different from one system to another . in the paper ,we accept the swap of links only if both and are increased .however , one can sacrifice a little bit for larger improvement in or the other way around since these two robustness aspects ask for different structure properties. provided knowing the fraction of node failure and link failure from the analysis of the historical data , more effective greedy algorithm can be designed to generate suitable network structures for some specific real systems .we would like to thank yi - cheng zhang and xiao - pu han for helpful suggestions .this work is supported by the swiss national science foundation ( no .200020 - 132253 ) .99 r. albert , h. jeong , a .-barabasi , nature * 406 * , 378 ( 2000 ) .r. guimera and m. sales - pardo , proc .usa * 106 * , 22073 ( 2009 ) .a. zeng and g. cimini , phys .e * 85 * , 036101 ( 2012 ) .r. pastor - satorras and a. vespignani , phys .* 86 * , 3200 ( 2001 ) .m. kitsak , l. k. gallos , s. havlin , f. liljeros , l. muchnik , h. e. stanley and h. a. makse , nature phys .* 6 * , 888 ( 2010 ) .a. arenas , a. diaz - guilera , j. kurths , y. moreno , and c .- s .zhou , phys . rep . * 469 * , 93 ( 2008 ) .a. zeng , s .- w .son , c. h. yeung , y. fan and z. di , phys . rev .e * 83 * , 045101(r ) ( 2011 ) .a. zeng , y. hu and z. di , europhys .87 * , 48002 ( 2009 ) .j. d. noh and h. rieger , phys .* 92 * , 118701 ( 2004 ) .m. rosvall and c. t. bergstrom , proc .105 * , 1118 ( 2008 ) .g. li , s. d. s. reis , a. a. moreira , s. havlin , h. e. stanley and j. s. andrade , jr .lett . * 104 * , 018701 ( 2010 ) .h. yang , y. nie , a. zeng , y. fan , y. hu and z. di , europhys . lett . *89 * , 58002 ( 2010 ) .m. bartolozzi , d. b. leinweber and a. w. thomas , phys .e * 72 * , 046113 ( 2005 ) . c. castellano , s. fortunato and v. loreto , rev .* 81 * , 591 ( 2009 ) .d. s. callaway , m. e. j. newman , s. h. strogatz and d. j. watts , phys .lett . * 85 * , 5468 ( 2000 ) .r. cohen , k. erez , d. ben - avraham and s. havlin , phys .* 85 * , 4626 ( 2000 ) .b. shargel , h. sayama , i. r. epstein and y. bar - yam , phys .lett . * 90 * , 068701 ( 2003 ) .e. estrada , eur .j. b * 52 * , 563 ( 2006 ) .a. a. moreira , j. s. andrade jr ., h. j. herrmann and j. o. indekeu , phys .* 102 * , 018701 ( 2009 ) .v. latora and m. marchiori , phys .lett . * 87 * , 198701 ( 2005 ) .m. fiedler , czech math .j. * 23 * , 298 ( 1973 ) .s. v. buldyrev , r. parshani , g. paul , h. e. stanley and s. havlin , nature * 464 * , 1025 ( 2010 ) .x. huang , j. gao , s. v. buldyrev , s. havlin , and h. e. stanley , phys .e * 83 * , 065101(r ) ( 2011 ) . c. m. schneider , a. a. moreira , j. s. andrade , jr . , s. havlin and h. j. herrmann , proc .sci . u.s.a . * 108 * , 3838 ( 2011 ) .z .- x . wu and p. holme , phys .e * 84 * , 026106 ( 2011 ) . v. colizza , a. flammini , m. a. serrano and a. vespignani , nature phys .* 2 * , 110 ( 2006 ) .v. batageli and a. 
mrvar , pajek datasets , available at http://vlado.fmf.uni-lj.si/pub/networks/data/default.htm q. zhou , j. w. bialek , ieee t. power syst . *20 * , 782 ( 2005 ) . in the usair network , and . in grid network , and
|
in a recent work [ proc . natl . acad . sci . usa 108 , 3838 ( 2011 ) ] , the authors proposed a simple measure for network robustness under malicious attacks on nodes . with a greedy algorithm , they found the optimal structure with respect to this quantity is an onion structure in which high - degree nodes form a core surrounded by rings of nodes with decreasing degree . however , in real networks the failure can also occur in links such as dysfunctional power cables and blocked airlines . accordingly , complementary to the node - robustness measurement ( ) , we propose a link - robustness index ( ) . we show that solely enhancing can not guarantee the improvement of . moreover , the structure of -optimized network is found to be entirely different from that of onion network . in order to design robust networks resistant to more realistic attack condition , we propose a hybrid greedy algorithm which takes both the and into account . we validate the robustness of our generated networks against malicious attacks mixed with both nodes and links failure . finally , some economical constraints for swapping the links in real networks are considered and significant improvement in both aspects of robustness are still achieved .
|
complex networked systems represent better than any other the idea of complexity . when looking at their large scale topological properties , real networks are far more complex than classical random graphs , showing emerging properties not obvious at the level of their elementary constituents the small - world effect , scale - free connectivity , clustering , degree correlations , etc . when looking at the dynamical processes that take place on top of them , these large scale topological properties have striking consequences on the behavior of the system absence of epidemic threshold , resilience to damage , etc .the understanding that many real world systems of interacting elements can be mapped into graphs or networks has led to a surge of interest in the field of complex networks and to the development of a theoretical frameworks able to properly analyze them . under this approach, the elements of the system are mapped into vertices whereas interactions among these elements are represented as edges , or links , among vertices of the network . in an attempt to bring nearer theory and reality ,many researchers working on the new rapidly evolving science of complex networks have recently shifted focus from unweighted graphs to weighted networks .commonly , interactions between elements in network - representable complex real systems -may they be communication systems , such as the internet , or transportation infrastructures , social communities , biological or biochemical systems- are not of the same magnitude .it seems natural that the first more simple representations , where edges between pairs of vertices are quantified just as present or absent , give way to more complex ones , where edges are no longer in binary states but may stand for connections of different strength . in an unweighted network ,all the topological properties can be expressed as a function of the adjacency matrix , , whose elements take the value when vertices and are connected by an edge and otherwise . using this matrix ,the degree of a vertex the number of neighbors or connections it has becomes the distribution of , , is called the degree distribution of the network , and it arises as the most fundamental topological characteristic .surprisingly , in a vast majority of cases , real networks show degree distributions following power laws of the form for and .this implies that the second moment of the degree distribution , , diverges in the thermodynamic limit , which causes the loss of any characteristic degree scale .for this reason , these class of networks are called scale - free ( sf ) .this feature of the degree distribution is , eventually , the responsible for the surprising behavior of dynamical processes that run on top of these networks . for weighted networks ,the adjacency matrix is no longer the fundamental quantity ruling the properties of the network .instead , we have to consider the matrix , which measures the weight between the pair of vertices and , that is , the magnitude of the connection between and . 
the analogous quantity to the vertex degree is now the vertex strength , defined as this quantity measures the total strength of vertex as the sum of the weights of all its connections .if the weights of a given vertex are not correlated with the vertex degree , the average strength of a vertex of degree is simply given by where is the average weight of the edges of the network .in this situation , strength and degree are proportional and weights do not incorporate more information to the network than that already present in the adjacency matrix .real networks show , however , a very different scaling , with a non trivial relation between strength and degree of the form with ( typically ) .this anomalous scaling reveals the presence of correlations between strength and degree and , thus , the need to model network formation in an weighted basis , where the evolution of the network is linked to the evolution of the weights assigned to connections . in this paper , we present a model of weighted network based on the configuration model . we shall show that , when the expected degree sequence follows a power law with exponent smaller than 2 , the resulting network is weighted and shows a non trivial scaling between strength and degree .the configuration model was first introduced as an algorithm to generate uncorrelated random networks with a given degree distribution , .the model operates in two steps : * we start assigning to each vertex , out of a set of , a number of `` stubs '' , , drawn from the distribution , under the constraint that the sum is even .* pairs of these stubs are chosen uniformly at random and the corresponding vertices are connected by an undirected edge . given the random nature of the edge assignment , this algorithm generates networks with the expected degree distribution and no correlations between the degrees of connected vertices .the model , as described above , allows the formation of multiple or self connections among vertices . nevertheless , when the expected degree distribution has bounded fluctuations , the number of such `` pathological '' connections is small an can be neglected in the thermodynamic limit . in this case , one can add an extra constraint in step two of the algorithm avoiding multiple or self connections without modifying the resulting network .the picture changes drastically when the expected degree distribution has unbounded fluctuations .this is precisely the case of sf distributions with exponent $ ] . in this situation ,vertices with expected degree satisfying can not avoid to form multiple connections .indeed , as it was proved in , the number of multiple connections in this case scales as .yet the situation is not so dramatic since the overall number of connections is much larger that the number of multiple connections .however , one has to be careful because the `` hubs '' of the network those in the tail of the distribution are precisely the ones more prone to hold multiple connections and , therefore , it could alter the results of dynamical processes evaluated on top of these class of networks .there are two strategies one can follow in order to avoid multiple connections : * one can introduce a constraint in step two prohibiting multiple connections .this has the side effect of introducing strong disassortative degree correlations among connected vertices .* alternatively , one can introduce a cut - off in the distribution scaling as . 
by doing so, one recovers uncorrelated networks but with a smaller cut - off than the natural one .the configuration model with expected degree distribution with has not been studied so forth .this is an extreme situation in which the number of multiple connections can not be neglected .we can take advantage of this fact to construct a weighted network , where the weight between a pair of vertices is the number of multiple connections they have .now , the expected quantity of a given vertex is no longer its degree but its strength. therefore , let us change notation and denote by , , the distribution of expected strengths , that is , the number of expected connections of vertices . at the level of strength , vertices are uncorrelated and the average weight of edges linking nodes of strengths and can be written as where is the total number of connections regardless they are multiple or not in the network , that is , the average degree of a vertex can be obtained as where is the heaviside step function .the first term in this equation is the contribution of those connections that , on average , are smaller than 1 .the second term represents connections with an average number larger than 1 and , thus , holding multiple connections . in this case , the contribution to the degree is just 1 and not . using the expression for the weights , eq .( [ eq:5 ] ) , in the continuum limit we can write that , in the thermodynamic limit goes as it is worth noticing that there are finite size effects that depend on the specific form of the strength distribution .the model generates a non trivial weighted network with an exponent larger than 1 , as it is observed in real weighted networks .the tail of the degree distribution can be obtained from this scaling relation , yielding an exponent .thus , even though the expected strength distribution has an exponent smaller than two , the resulting network has exponent equal to two and a number of connections growing as . , used in the simulations .the solid lines correspond to the theoretical curves given by eq .( [ eq:11 ] ) ], , as a function of , for , , and .the size of the network is . in all cases ,the strength distribution is given by eq .( [ eq:10 ] ) with .solid lines correspond to the best fit estimates ( numbers in parenthesis into the legend box ) of the theoretical behavior . ] to test the results of our analysis , we have performed numerical simulations of the configuration model , generating expected strength distributions of the form the value of modulates the probability to find strengths of value larger than .we choose this particular form because , by an appropriate choice of , the finite size effects are minimized . in all the simulations ,the size of the network is , the values of are , , and and .we first show , in fig .[ fig1 ] , the cumulative strength distributions used in the simulations as compared to the theoretical ones , computed using the continuum approximation . fig .[ fig2 ] shows the scaling relation between and the strength .as it can be seen , the theoretical prediction is well satisfied , with the following estimates for the exponent : for , for , and for .finally , the resulting cumulative degree distribution is plotted in fig .[ fig3 ] , showing that this function goes , for large degrees , as independent of , as predicted by our analysis . , generated by de model .the solid line is power law of the form corresponding to an exponent . 
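the construction is easy to simulate . the sketch below is our own illustration : it draws heavy - tailed expected strengths with exponent below two from a generic pareto - type sampler rather than from eq . ( [ eq:10 ] ) , imposes a crude cut - off only to keep the stub list in memory , pairs stubs uniformly at random , and interprets edge multiplicities as weights ; comparing degree and strength for the largest hubs should then exhibit the sublinear scaling discussed above .

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def weighted_configuration(strengths):
    """pair stubs uniformly at random; the multiplicity of a repeated
    connection is interpreted as the weight of that edge."""
    stubs = np.repeat(np.arange(len(strengths)), strengths)
    rng.shuffle(stubs)
    weights = Counter()
    for u, v in zip(stubs[0::2], stubs[1::2]):
        if u != v:                                  # self-loops ignored in this sketch
            weights[(min(u, v), max(u, v))] += 1
    return weights

n, gamma = 10**4, 1.5                               # strength exponent below two
s = np.floor(rng.pareto(gamma - 1.0, n) + 1.0).astype(np.int64)
s = np.minimum(s, n)                                # crude cut-off, only to keep memory finite
if s.sum() % 2:                                     # the total number of stubs must be even
    s[0] += 1

w = weighted_configuration(s)
degree, strength = Counter(), Counter()
for (u, v), multiplicity in w.items():
    degree[u] += 1; degree[v] += 1                  # degree: distinct neighbours only
    strength[u] += multiplicity; strength[v] += multiplicity
# for the hubs, degree grows much more slowly than strength (sublinear k(s) scaling)
```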
in summary , we have analyzed the behavior of the classical configuration model in the case of expected strength distributions following a power law form of exponent smaller than two . we have shown that , in this case , the resulting network is weighted , where the weight stands for the number of multiple connections among vertices . the model presents a non trivial scaling relation between strength and degree and a degree distribution with exponent , independent of the exponent of the strength distribution , . these results highlight the subtleties that may arise when dealing with sf networks even in the most simplified models . this work has been partially supported by dges of the spanish government , fis2004 - 05923-co2 - 02 , and ec - fet open project cosin ist-2001 - 33555 . m. b. acknowledges financial support from the mcyt ( spain ) .
|
the configuration model is one of the most successful models for generating uncorrelated random networks . we analyze its behavior when the expected degree sequence follows a power law with exponent smaller than two . in this situation , the resulting network can be viewed as a weighted network with non trivial correlations between strength and degree . our results are tested against large scale numerical simulations , finding excellent agreement . address = departament de física fonamental , universitat de barcelona , martí i franquès 1 , 08028 barcelona , spain address = departament de física fonamental , universitat de barcelona , martí i franquès 1 , 08028 barcelona , spain
|
consider the set of all simple graphs on vertices ( `` simple '' means undirected , with no loops or multiple edges ) . by a -parameter family of exponential random graphswe mean a family of probability measures on defined by , for , ,\ ] ] where are real parameters , are pre - chosen finite simple graphs ( and we take to be a single edge ) , is the density of graph homomorphisms ( the probability that a random vertex map is edge - preserving ) , and is the normalization constant , .\ ] ] sometimes , other than homomorphism densities , we also consider more general bounded continuous functions on the graph space ( a notion to be made precise later ) , for example the degree sequence or the eigenvalues of the adjacency matrix .these exponential models are commonly used to model real - world networks as they are able to capture a wide variety of common _ network tendencies _ by representing a complex global structure through a set of tractable local features .intuitively , we can think of the parameters as tuning parameters that allow one to adjust the influence of different subgraphs of on the probability distribution , whose asymptotics will be our main interest since networks are often very large in size .our main results are ( theorem [ main1 ] ) a variational principle for the normalization constant ( partition function ) for graphons with constrained edge density , and an associated concentration of measure ( theorem [ main2 ] ) indicating that almost all large constrained graphs lie near the maximizing set .we then specialize to the edge - triangle model , and show the existence of ( first - order ) phase transitions in the edge - density constrained models ._ acknowledgements ._ we thank charles radin , kui ren , and lorenzo sadun for helpful conversations .we begin by reviewing some notation and results concerning the theory of graph limits and its use in exponential random graph models . following the earlier work of aldous and hoover , lovsz and coauthors ( v.t .ss , b. szegedy , c. borgs , j. chayes , k. vesztergombi , ... ) have constructed an elegant theory of graph limits in a sequence of papers .see also the recent book for a comprehensive account and references .this sheds light on various topics such as graph testing and extremal graph theory , and has found applications in statistics and related areas ( see for instance ) .though their theory has been developed for dense graphs ( number of edges comparable to the square of number of vertices ) , serious attempts have been made at formulating parallel results for sparse graphs .here are the basics of this beautiful theory .any simple graph , irrespective of the number of vertices , may be represented as an element of a single abstract space that consists of all symmetric measurable functions from ^ 2 ] , by defining a sequence of graphs is said to converge to a function ( referred to as a `` graph limit '' or `` graphon '' ) if for every finite simple graph with vertex set =\{1, ... 
,k\} ] represents a `` continuum '' of vertices , and denotes the probability of putting an edge between and .for example , for the erds - rnyi random graph , the `` graphon '' is represented by the function that is identically equal to on ^ 2 ] ( and up to sets of lebesgue measure zero ) , which in the context of finite graphs may be thought of as vertex relabeling .to tackle this issue , an equivalence relation is introduced in .we say that if for some measure preserving bijection of ] to as and then extended to in the usual manner : ^ 2}i(h(x , y))\,dx\,dy,\ ] ] where is any representative element of the equivalence class .it was shown in that is well defined and lower semi - continuous on .let be the subset of where is maximized . by the compactness of , the continuity of and the lower semi - continuity of , is a nonempty compact set .the set encodes important information about the exponential model ( [ pmf2 ] ) and helps to predict the behavior of a typical random graph sampled from this model . the second theorem ( theorem 3.2 in )states that in the large limit , the quotient image of a random graph drawn from ( [ pmf2 ] ) must lie close to with high probability , since the limiting normalization constant is the generating function for the limiting expectations of other random variables on the graph space such as expectations and correlations of homomorphism densities , a phase transition occurs when is non - analytic or when is not a singleton set .the exponential family of random graphs introduced above assumes no prior knowledge of the graph before sampling .but in many situations partial information of the graph is already known beforehand .for example , practitioners might be told that the edge density of the graph is close to or the triangle density is close to or the adjacency matrix of the graph obeys a certain form . a natural question to ask then is what would be a typical random graph drawn from an exponential model subject to these constraints ? or perhaps more importantlywill there be a similar phase transition phenomenon as in the unconstrained exponential model ?the following theorems [ main1 ] and [ main2 ] give a detailed answer to these questions .not surprisingly , the proofs follow a similar line of reasoning as in theorems 3.1 and 3.2 of . however , due to the imposed constraints , instead of working with probability measure and normalization constant as in , we are working with conditional probability measure and conditional normalization constant , so the argument is more involved .the proof of theorem [ main1 ] also incorporates some ideas from theorem 3.1 of . for clarity ,we assume that the edge density of the graph is approximately known , though the proof runs through without much modification for more general constraints .we make precise the notion of `` approximately known '' below .we still assign a probability measure as in ( [ pmf2 ] ) on , but we will consider a conditional normalization constant and also define a conditional probability measure . 
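before turning to the constrained theorems , we note that the homomorphism densities used throughout are easy to estimate numerically for an explicit graphon . the monte carlo sketch below is only an illustration of ours ; the bipartite - type graphon is chosen purely for convenience .

```python
import numpy as np

rng = np.random.default_rng(0)

def graphon_density(W, edges, k, samples=200000):
    """monte carlo estimate of t(H, W): draw x_1..x_k uniformly on [0, 1]
    and average the product of W over the edges of H."""
    x = rng.random((samples, k))
    val = np.ones(samples)
    for r, s in edges:
        val *= W(x[:, r], x[:, s])
    return val.mean()

# bipartite-type test graphon: an edge is present iff exactly one endpoint lies below 1/2
W = lambda x, y: ((x < 0.5) ^ (y < 0.5)).astype(float)
print(graphon_density(W, [(0, 1)], 2))                    # edge density, about 1/2
print(graphon_density(W, [(0, 1), (1, 2), (0, 2)], 3))    # triangle density, exactly 0
```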
let ] .let be defined as above .let ( [ cpmf ] ) be the conditional probability measure on .then for any and sufficiently small there exist such that for all large enough , we check that the conditional probability measure is well defined for all large enough .it suffices to show that is finite .but from ( [ chain ] ) , is trapped between and , which are clearly both finite .recall that is the set of reduced graphons with .take any .let be the subset of consisting of reduced graphons that are at least -distance away from , it is easy to see that is a closed set .without loss of generality we assume that is nonempty for every , since otherwise our claim trivially follows . under this nonemptiness assumptionwe can find a sequence of reduced graphons converging to a reduced graphon , which shows that is nonempty as well . by the compactness of and , and the upper semi - continuity of , it follows that from the proof of theorem [ main1 ] we see that similarly , we have this implies that for sufficiently small , choose and define and as in the proof of theorem [ main1 ] .let .then while bounding the last term above , it may be assumed without loss of generality that is nonempty for each . similarly as in the proof of theorem [ main1 ] , the above inequality gives each satisfies .consequently , substituting this in ( [ supsup ] ) gives this completes the proof .theorems [ main1 ] and [ main2 ] in the previous section illustrate the importance of finding the maximizing graphons for subject to certain constraints .they aid us in understanding the limiting conditional probability distribution and the global structure of a random graph drawn from the constrained exponential model .indeed knowledge of such graphons would help us understand the limiting probability distribution and the global structure of a random graph drawn from the unconstrained exponential model as well , since we can always carry out the unconstrained optimization in steps : first consider a constrained optimization ( referred to as `` micro analysis '' ) , then take into consideration of all possible constraints ( referred to as `` macro analysis '' ) . however , as straight - forward as it sounds , due to the myriad of structural possibilities of graphons , both the unconstrained ( [ setmax ] ) and constrained ( [ csetmax ] ) optimization problems are not always explicitly solvable .so far major simplification has only been achieved in the `` attractive '' case and for -star models , whereas a complete analysis of either ( [ setmax ] ) or ( [ csetmax ] ) in the `` repulsive '' region has proved to be very difficult . this section will provide some phase transition results on the constrained `` repulsive '' edge - triangle exponential random graph model and discuss their possible generalizations . as we will see , they come in accordance with their unconstrained counterparts .we make these notions precise in the following .the unconstrained edge - triangle model is a -parameter exponential random graph model obtained by taking to be a single edge and to be a triangle in ( [ pmf ] ) .more explicitly , in the edge - triangle model , the probability measure is where are real parameters , and are the edge and triangle densities of , and is the normalization constant . as before , we assume that the ideal edge density is fixed .the limiting construction described at the beginning of section [ 2 ] will then yield the asymptotic conditional normalization constant . 
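for readers who prefer a finite picture , the edge - triangle measure can also be explored by direct simulation . the sketch below is our own illustration : it uses the normalizations t ( edge ) = hom / n^2 and t ( triangle ) = hom / n^3 , which may differ from the conventions above by harmless constants , and performs standard metropolis edge toggles for the unconstrained measure ; all parameter values are arbitrary .

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_density(A):
    return A.sum() / len(A)**2                 # hom(edge, G) / n^2

def triangle_density(A):
    return np.trace(A @ A @ A) / len(A)**3     # hom(triangle, G) / n^3

def metropolis_edge_triangle(n, beta1, beta2, sweeps=200):
    """metropolis edge toggles for P(G) ~ exp(n^2 [beta1 t_edge + beta2 t_triangle])."""
    A = (rng.random((n, n)) < 0.5).astype(int)
    A = np.triu(A, 1)
    A = A + A.T                                # symmetric adjacency matrix, zero diagonal
    for _ in range(sweeps * n * n):
        i, j = rng.integers(n, size=2)
        if i == j:
            continue
        common = int(A[i] @ A[j])              # triangles created or destroyed by toggling (i, j)
        sign = -1 if A[i, j] else 1
        delta = sign * (2 * beta1 + 6 * beta2 * common / n)
        if np.log(rng.random()) < delta:
            A[i, j] = A[j, i] = 1 - A[i, j]
    return A

A = metropolis_edge_triangle(n=60, beta1=0.0, beta2=-1.0)  # repulsive: negative triangle parameter
print(edge_density(A), triangle_density(A))
```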
from ( [ csetmax ] ) we see that depends on both parameters and , however the dependence is linear : is equal to plus a function independent of .in particular plays no role in the maximization problem , so we can consider it fixed at value .the only relevant parameters then are and . to highlight this parameter dependence , in the following we will write as instead .we are particularly interested in the asymptotics of when is negative , the so - called repulsive region .naturally , varying allows one to adjust the influence of the triangle density of the graph on the probability distribution .the more negative the , the more unlikely that graphs with a large number of triangles will be observed .when approaches negative infinity , the most probable graph would likely be triangle free . at the other extreme ,when is zero , the edge - triangle model reduces to the well - studied erds - rnyi model , where edges between different vertex pairs are independently included .the structure of triangle free graphs and disordered erds - rnyi graphs are apparently quite different , and thus a phase transition is expected as decays from to .in fact , it is believed that , quite generally , repulsive models exhibit a transition qualitatively like the solid / fluid transition , in that a region of parameter space depicting emergent multipartite structure , which is in imitation of the structure of solids , is separated by a phase transition from a region of disordered graphs , which resemble fluids .the existence of such a transition in unconstrained -parameter models whose subgraph has chromatic number at least has been proved by aristoff and radin based on a symmetry breaking result from .theorem [ csep ] below gives a corresponding result in the constrained edge - triangle model .its proof though is quite different from the parallel result in and relies instead on some analysis arguments .we also remark that , using the same arguments , it is possible to establish the phase transition as grows from to , i.e. , in the `` attractive '' region of the parameter space .there , combined with simulation results , we could conclude that a typical graph consists of one big clique and some isolated vertices as gets sufficiently close to infinity .[ csep ] consider the constrained repulsive edge - triangle exponential random graph model as described above .let be arbitrary but fixed .let vary from to .then is not analytic at at least one value of .we first consider the case ; the case is similar , see the comments at the end of the proof .let be the edge density of a reduced graphon and be the triangle density , obtained by taking to be a triangle in ( [ tt ] ) . by ( [ csetmax ] ) , where for notational convenience , we denote by the maximum value of overall reduced graphons with and .we examine ( [ etsetmax ] ) at the two extreme values of first . since is convex , when , by jensen s inequality , and the equality is attained only when , the associated graphon for an erds - rnyi graph with edge formation probability .this also ensures that when we take , any maximizing graphon for ( [ etsetmax ] ) will satisfy . forthe other extreme , take an arbitrary sequence , and let be a maximizing reduced graphon for each .let be a limit point of in ( its existence is guaranteed by the compactness of ) .suppose . 
then by the continuity of and the boundedness of , .but this is impossible since is uniformly bounded below , as can be seen by considering as a test function .thus .the rest of the proof will utilize the following useful features of derived in radin and sadun : for , and this maximum is achieved only at the reduced graphon ( [ h ] ) , corresponding to a complete bipartite graph with fraction of edges randomly deleted .moreover , for any ] , ^|v(h)\texttt{\char92}\{r , s\}|}\prod_{(r',s')\in e(h):(r',s')\neq ( r , s)}h(x_{r'},x_{s'})\prod_{v\in v(h):v\neq r , s}dx_v.\ ] ] for example , in the edge - triangle model where is an edge and is a triangle , and . in the constrained case , we could likewise derive the euler - lagrange equation by resorting to the method of lagrange multipliers , which will turn the constrained maximization into an unconstrained one , but we provide an alternative bare - hands approach here .the following theorem may also be formulated in terms of reduced graphons .consider the constrained -parameter exponential random graph model ( [ 2pmf ] ) .let and be arbitrary but fixed homomorphism densities .suppose the graphon maximizes subject to and .if is bounded away from and , then there must exist constants and such that satisfies ( [ lagrange ] ) for almost all ^ 2 ] so they are almost everywhere continuous .let for be three points of ^ 2 ] .we recognize this requirement is equivalent to ( [ lagrange ] ) .suppose we are looking for a graphon that maximizes subject to only .then following the same `` perturbation '' idea , we should examine since the determinant is zero , must be a constant .this is the same conclusion obtained by applying jensen s inequality to the convex function . on the other hand, we may also consider maximizing subject to ( instead of ) constraints for , in which case we would perturb the values of the graphon at points and form a matrix .borgs , c. , chayes , j. , lovsz , l. , ss , v.t . , vesztergombi , k. : counting graph homomorphisms . in : klazar ,m. , kratochvil , j. , loebl , m. , thomas , r. , valtr , p. ( eds . ) topics in discrete mathematics , volume 26 , pp .315 - 371 .springer , berlin ( 2006 ) hoover , d. : row - column exchangeability and a generalized model for probability . in : koch , g. , spizzichino , f. ( eds . ) exchangeability in probability and statistics , pp .281 - 291 .north - holland , amsterdam ( 1982 )
|
the unconstrained exponential family of random graphs assumes no prior knowledge of the graph before sampling , but in many situations partial information of the graph is already known beforehand . a natural question to ask is what would be a typical random graph drawn from an exponential model subject to certain constraints ? in particular , will there be a similar phase transition phenomenon as that which occurs in the unconstrained exponential model ? we present some general results for the constrained model and then apply them to get concrete answers in the edge - triangle model .
|
database search is an elementary computational task with wide - ranging applications .its efficiency is measured in terms of the number of queries one has to make to the database in order to find the desired item . in the conventional formulation of the problem ,the query is a binary oracle ( i.e. a yes / no question ) . for an unsorted database of items ,starting from an unbiased state and using classical boolean logic , one requires on the average queries to locate the desired item .lov grover discovered a search algorithm that , using superposition of states , reduces the number of required queries to .the algorithm was originally proposed in the context of quantum computation , but its applicability has since been widely expanded by realising that the algorithm is an amplitude amplification process that can be executed by a coupled set of wave modes .it has also been proved that the algorithm is optimal for unsorted database search .grover s algorithm starts with a superposition state , where each item has an equal probability to get picked , and evolves it to a target state where only the desired item can get picked .following dirac s notation , the starting and target state satisfy ( index labels the items ) , the algorithm evolves towards , by discrete rotations in the two - dimensional space formed by and .the rotations are performed as an alternating sequence of the two reflection operators , is the binary oracle which flips the sign of the target state amplitude , while performs the reflection - in - the - average operation .solution to eq.(3 ) determines the number of queries as ( in practice , must be an integer , while eq.(4 ) may not have an integer solution .in such cases , the algorithm is stopped when the state has evolved sufficiently close to , although not exactly equal to , .then one finds the desired item with a high probability . 
) the steps of the algorithm for the simplest case , and , are illustrated in fig.1 .

[ fig . 1 : each algorithmic step listed as amplitudes , algorithmic step and physical implementation . ( 1 ) uniform distribution ( equilibrium configuration ) ; binary oracle ( yes / no query ) ; ( 2 ) amplitude of desired state flipped in sign ( sudden impulse ) ; reflection about average ( overrelaxation ) ; ( 3 ) desired state reached ( opposite end of oscillation ) ; ( 4 ) projection , algorithm is stopped ( measurement ) . ]

the algorithm relies on superposition of and interference amongst a set of states , which are generic features of wave dynamics . it can be executed by any system of coupled wave modes , provided :
( 1 ) the superposition of modes maintains phase coherence during evolution .
( 2 ) the two reflection operations ( phase changes of for the appropriate mode ) can be efficiently implemented .
( note that the states can be encoded using bits , but have to be realised as distinct wave modes . )
otherwise , the algorithm is fairly robust , and succeeds even when :
( a ) the wave modes are anharmonic though symmetric .
( b ) the initial state is somewhat randomised .
( c ) the phase changes in the reflection operations are slightly different from ,
( d ) the wave modes are weakly damped .
the interpretation of amplitude amplification occurring in the algorithm depends on the physical context . in the quantum version , gives the probability of a state , and the algorithm solves the database search problem . in the classical wave version , gives the energy of a mode , and the algorithm provides the fastest method for energy redistribution through the phenomenon of beats . the quantum version of the algorithm involves highly fragile entanglement , and hence very short coherence times . it also needs to be implemented at the atomic scale , which is not at all easy . on the other hand , the classical wave version uses only superposition , which is much more stable , and hence it is straightforward to design demonstration models .
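before moving to the mechanical model , the two reflections can be checked with a few lines of arithmetic on the amplitude vector . the sketch below is our own illustration ; it reproduces the fact that a single query suffices for a four - item database and the roughly ( pi / 4 ) sqrt ( n ) query count for a larger one .

```python
import numpy as np

def grover(N, target, steps):
    """iterate the two reflections on a real amplitude vector of length N."""
    a = np.full(N, 1 / np.sqrt(N))        # uniform starting superposition
    for _ in range(steps):
        a[target] *= -1                   # binary oracle: flip the sign of the target amplitude
        a = 2 * a.mean() - a              # reflection about the average amplitude
    return a[target] ** 2                 # probability of finding the desired item

print(grover(4, target=2, steps=1))       # a single query gives probability 1 for N = 4
steps = int(round(np.pi / 4 * np.sqrt(64)))
print(steps, grover(64, target=5, steps=steps))   # about 6 queries suffice for N = 64
```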
in the following ,i describe implementation of grover s algorithm in a simple mechanical setting , using four harmonic oscillators coupled via the centre - of - mass mode .a system of coupled harmonic oscillators is frequently studied in physics .it involves only quadratic forms , and can be solved exactly in both classical and quantum domains .let the items in the database be represented by identical harmonic oscillators . while they are oscillating in a specific manner , one of them is tapped " ( i.e. elastically reflected ) .the task is to identify which of the oscillators has been tapped , without looking at the tapping .the optimisation criterion is to design the system of oscillators , and their initial state , so as to make the identification as quickly as possible .= 9truecm grover s algorithm requires identical coupling between any pair of oscillators .that is arranged by coupling all the small oscillators to a big oscillator , as shown in fig.2 .the big oscillator then is coupled to the centre - of - mass mode , and becomes an intermediary between any pair of oscillators with the same coupling .indeed , the centre - of - mass mode plays the role of the average state " . in thissetting , elastic reflection of an oscillator implements the binary oracle in velocity space ( see fig.3 ) , and time evolution of the whole system by half an oscillation period carries out the reflection about average operation .the equations of motion are : the non - target ( ) oscillators influence the dynamics of the target oscillator ( ) only through the centre - of - mass position .they make up linearly independent modes , of the form , which decouple from and .effective dynamics is thus in the 3-variable space , with the equations of motion : the last equation is easily solved , with the angular frequency .the first two equations are coupled , and their eigenmodes are of the form .we can find them by requiring that therefore , let the dimensionless ratios for the spring constants and the masses be , and . then the sinusoidal solutions have the angular frequency there are two solutions to these equations , coefficients and the corresponding frequencies .they satisfy the general solution to the dynamical equations is : = 9truecm we constructed the simplest system , with .the binary oracle ( i.e. ) is the elastic reflection when .the reflection about average operation imposes the constraint that the whole system must evolve by half an oscillation period between successive oracles . from the many possibilities , as our design parameters ,we selected the convenient frequency ratios then the time period for the whole system . time evolution for half the period reverses , while leaving unchanged , i.e. it implements the operator in the velocity space . thus grover s algorithm is realised by tapping " the target oscillator at every time interval .the above resonance criterion corresponds to : in a situation where all the non - target oscillators move uniformly together ( i.e. 
all equal , all equal ) , the displacements are our experimental parameters differed slightly from these ideal values because of various imperfections discussed later .the uniform superposition state corresponds to the initial conditions : in this case , the big oscillator returns to its initial rest state after every half a period , and the first binary oracle is applied at .the starting phases of the solution vanish , .the amplitudes are before the binary oracle , and after the binary oracle .the resultant time evolution of the oscillators in the position and velocity spaces is illustrated in fig.4 .it is observed that grover s algorithm provides position amplification of and velocity amplification of .[ t ] = 9truecm = 9truecm in actual experiment , it is much easier to start with the initial conditions : in this case , the velocities reach their maximum value after a quarter period , and the first binary oracle is applied at .the starting phases of the solution are , .the amplitudes are before the binary oracle , and after the binary oracle . the resultant time evolution ( as a function of ) of the oscillators in the position and velocity spaces is illustrated in fig.5 .it is found that grover s algorithm provides position amplification of and velocity amplification of . in both these cases ,maximal velocity amplification is achieved .on the other hand , position amplification is substantial but not maximal .the reason is that the binary oracle is implemented in the velocity space , and it is physically not possible to implement it in the position space .= 9truecm = 9truecm _ gravity : _ a uniform gravitational field shifts the equilibrium positions of the vertically hanging springs , but apart from that it has no effect on their dynamics .this is obvious from the expression for the potential energy , _ imperfect synchronization of the initial state : _ when the initial velocities are arbitrary instead of uniform , the energy amplification of the target oscillator provided by the algorithm is not four - fold .it is instead limited to the initial energy present in the modes , {t=0 } ~ , \label{maxgain}\ ] ] which is still substantial for the generic situation where the initial and do not differ by a large amount . _ imprecise reflection operations : _ in practice , the reflection operations may not exactly implement phase changes of .also , the measurement operation terminating the algorithm may not take place at the precise instant of maximum amplification .the energy amplification depends only quadratically on such phase errors from the ideal values , e.g. if the reflection phase change is , then the loss in energy amplification is . _inelastic reflections : _ when the binary oracle produces an inelastic reflection of the target oscillator , with a coefficient of restitution ( i.e. ) , the energy amplification decreases by the factor . 
even in the extreme case of ,the energy amplification is a sizeable -fold ._ spring masses : _ real springs are not massless .to a good approximation , the energy taken up by the springs can be estimated by assuming a constant velocity gradient along the springs , and then absorbing the spring energy by altering the masses of the objects attached to the springs .the dominant correction is to add one - third of the spring mass to the objects at either end .the remainder , in our set up and on the average , amounts to adding one - twelfth of the mass of the small springs to the big mass .this prescription allows tuning of the masses of the oscillators after measuring the masses of the springs , in order to maximise the energy amplification ._ damping : _ for a weakly damped oscillator ( damping force ) , its amplitude changes linearly with the damping coefficient , while its frequency changes quadratically . thus small external disturbances reduce the energy amplification , by a factor , but have little effect on the all important phase coherence amongst the oscillators that governs interference of the modes .overall , we find that most deviations of the parameters from their ideal values affect the performance of the algorithm only quadratically , and can be easily taken care of .damping provides the only linear perturbation , which should be controlled to the best possible extent .a variety of coupled vibrational systems with small damping can be made easily .they can provide either fast focusing or fast dispersal of energy , and can therefore be an important component in processes sensitive to energy availability .efficient concentration of the total energy of a coupled oscillator system into a specific oscillator can be used as a trigger or a sensor , where an external disturbance becomes the cause for the reflection oracle .the focusing of energy could also be useful in nanomechanical systems where the component concerned can not be directly controlled , and a possibility of using coupled cantilever beams as a switch is pointed out in fig.6 .there exist many processes that need crossing of an energy threshold for completion .their rates are typically governed by the boltzmann factor for the energy barrier , .energy amplification can speed up the rates of such processes by large factors , leading to catalysis .grover s algorithm is fully reversible .the reflection operators and are inverses of themselves .so the algorithm can be run backwards as . that disperses large initial energy in the target oscillator to a uniform distribution among its partners . in the coupled oscillator model , the initial condition would be and . after waiting for , and then reversing produces .this behaviour can be useful in quickly reducing localised perturbations by redistributing its energy throughout the system . instead of damping a single perturbed oscillator ,it is much more efficient to disperse the energy into several oscillators while damping all of them together . 
to illustrate the concept ,consider the situation where all the oscillators have the same damping coefficient .the normal modes in the space then separate the same way as in eq.(10 ) , and the relations in eqs.(12,13 ) are retained .damping shifts the oscillation frequencies according to , and the general solution becomes : the coupled oscillator dynamics of grover s algorithm is maintained by keeping the frequency ratios unchanged , if the initial conditions are chosen as then after half an oscillation period , , this results show that the distribution of energy among the coupled oscillators suppresses the energy of the target oscillator by an extra factor of 4 , in addition to the usual damping factor for a stand - alone oscillator .it is indeed the maximum possible reduction in energy , combining both the mechanisms . with this choice , in time ,the energy of the target oscillator is reduced to of its initial value .although the energy loss is more compared to a stand - alone oscillator , it is less localized because it is distributed among the target oscillator , its partners and the big oscillator .the corresponding spring constant and mass ratios are : a hierarchical system of coupled oscillators ( see fig.7 ) can be even more efficient in dispersal of energy , by implementing the above mechanism simultaneously at multiple scales .the simplest choice would be to couple four small oscillators to a big one at every level , with appropriate mass , spring and damping parameters .it is also possible to combine dispersal and concentration operations to to transfer energy from one oscillator to another via the centre - of - mass mode .for example , in case of four coupled oscillators , initial energy in oscillator can be transferred to oscillator by in this manner , a local signal received by a large detector can be first dispersed over the whole system and then extracted at a specific location .
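the focusing and dispersal cycles described in this section can be checked by directly integrating the equations of motion . the sketch below is our own illustration and does not reproduce the parameters of the actual experiment : we pick the frequency ratio 2 : 3 : 4 for the two centre - of - mass modes and the relative modes ( one consistent way to satisfy the half - period resonance condition ) , start from equal velocities with the big oscillator at rest , and model the binary oracle as an instantaneous reversal of the target velocity .

```python
import numpy as np

N, m, k = 4, 1.0, 1.0                     # four unit-mass oscillators with unit springs
omega0 = np.sqrt(k / m)                   # frequency of the relative (difference) modes
mu, Omega2 = 35.0 / 81.0, 64.0 / 81.0     # places the centre-of-mass modes at (2/3, 4/3) omega0
M = N * k / mu                            # big-oscillator mass for the 2 : 3 : 4 frequency ratio
K = Omega2 * M                            # big-oscillator wall spring

def accel(x, X):
    ax = -(k / m) * (x - X)
    aX = (-K * X + k * np.sum(x - X)) / M
    return ax, aX

def evolve(x, v, X, V, t, dt=1e-4):
    """velocity-verlet integration of the coupled system for a time t."""
    ax, aX = accel(x, X)
    for _ in range(int(round(t / dt))):
        x += v * dt + 0.5 * ax * dt**2
        X += V * dt + 0.5 * aX * dt**2
        ax2, aX2 = accel(x, X)
        v += 0.5 * (ax + ax2) * dt
        V += 0.5 * (aX + aX2) * dt
        ax, aX = ax2, aX2
    return x, v, X, V

x, v, X, V = np.zeros(N), np.ones(N), 0.0, 0.0   # equal velocities: the uniform state
target = 2
v[target] *= -1                                  # tap: elastic reflection of the target
x, v, X, V = evolve(x, v, X, V, 3 * np.pi / omega0)   # half the common period of the system
print(np.round(v, 3))                            # focusing: roughly [0, 0, 2, 0]

x, v, X, V = evolve(x, v, X, V, 3 * np.pi / omega0)
v[target] *= -1                                  # evolve first, then tap: the reverse cycle
print(np.round(v, 3))                            # dispersal: back to nearly equal velocities
```

the first tap - and - evolve cycle concentrates the velocity ( and hence four times the energy ) in the tapped oscillator , and running the cycle in reverse order spreads it back out , as in the dispersal protocol discussed above .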
|
grover s database search algorithm is the optimal algorithm for finding a desired object from an unsorted collection of items . although it was discovered in the context of quantum computation , it is simple and versatile enough to be implemented using any physical system that allows superposition of states , and several proposals have been made in the literature . i study a mechanical realisation of the algorithm using coupled simple harmonic oscillators , and construct its physical model for the simplest case of four identical oscillators . the identification oracle is implemented as an elastic reflection of the desired oscillator , and the overrelaxation operation is realised as evolution of the system by half an oscillation period . i derive the equations of motion , and solve them both analytically and by computer simulation . i extend the ideal case analysis and explore the sensitivity of the algorithm to changes in the initial conditions , masses of springs and damping . the amplitude amplification provided by the algorithm enhances the energy of the desired oscillator , while running the algorithm backwards spreads out the energy of the perturbed oscillator among its partners . the former ( efficient focusing of energy into a specific oscillator ) can have interesting applications in processes that need crossing of an energy threshold for completion , and can be useful in nanotechnological devices and catalysis . the latter ( efficient redistribution of energy ) can be useful in processes requiring rapid dissipation of energy , such as shock - absorbers and vibrational shielding . i present some tentative proposals .
|
the theory of measurements in quantum physics has a long tradition .first , it requires a good understanding of the measurement devices . in quantum optics ,the foundations for the photodetector have been established in the pioneering works of mandel , kelley and kleiner as well as glauber . on this basis ,more elaborate measurement techniques have been developed , such as balanced homodyne detection , unbalanced homodyne detection and eight - port homodyne detection schemes .all mentioned schemes provide the possibility to collect information to completely characterize an arbitrary quantum state of light .once , the experimental techniques are well - known , one needs suitable tomographic methods to convert the raw experimental data into a convenient representation of the quantum state . in case of balanced homodyne measurements , in which one records phase dependent quadratures, there are several approaches which have been proposed . in , the so - called inverse radon transform is applied to estimate the wigner function of the quantum state .this approach can also be generalized to obtain different quasiprobability distributions , such as the glauber - sudarshan function . in order to calculate density matrices , one may choose between fourier techniques , direct sampling schemes or more involved maximum likelihood methods .however , there has only been little interest in the statistical properties of quantum state estimation .the theoretical foundations have already been considered in , but in particular for quantum state tomography , the statistical uncertainties of the estimates are not examined deeply .first examinations have been performed in .the advantage of the sampling approach is that it directly gives an simple estimate of the statistical uncertainty of the estimated quantity , which we will shortly review below . in case of the maximum likelihood approach, statistical uncertainties can be evaluated by the inverse of the so - called fisher information matrix , which requires some numerical effort .alternative methods for the estimation of uncertainties can also be found in . in the present work ,we compare the statistical uncertainty , which arises in the direct sampling approach , to the quantum mechanical variance , which provides a lower bound set by the quantum nature of light . in sec .ii , we consider functions of a single observable , which can be directly measured , and show that the variance of the corresponding pattern function equals to the quantum mechanical one . in sec .iii , we examine observables for which complete quantum state tomography is required .explicitly , we show that sampling from balanced homodyne detection data does not operate on the quantum mechanical level of uncertainty , and find that the statistical independence of quadratures at different phases is the reason .we also discuss the estimation of phase - space distributions in sec .iv , and show that the unbalanced homodyne detection scheme provides estimates with quantum mechanical uncertainties .section v is dedicated to an example in quantum state tomography to illustrate the impact of the results .as a first step , let us assume that we observe a single physical quantity and estimate the expectation value of a function of this observable , .as is a hermitian operator , it can be written in its spectral decomposition where is an eigenvector of with the eigenvalue .the set is the set of all eigenvalues . 
in case of discrete eigenvalues ,the integration has to be replaced by the corresponding sum .the eigenvectors can be chosen orthogonal , where the right hand side has to be interpreted as the kronecker symbol in the case of discrete eigenvalues .furthermore , the eigenstates provide a resolution of identity , let us now consider an experiment which records a set of eigenvalues from as outcomes .the underlying quantum state shall be denoted by .then , the _ empirical expectation value _ of some function can be estimated as the tilde indicates that this quantity is a random number , since it is obtained from measured ( and therefore random ) values , whose probabilities shall be denoted by .therefore , the expectation value of this random number is given by as above , the integral has to be seen as a sum over the probabilities if the set of eigenvalues is discrete .the key point is now that eq .provides a good estimate for the _ quantum mechanical expectation value _ of the operator .the probabilities of the outcomes are connected to the underlying quantum state by inserting this into eq . and applying , we find the eqs . , and form the basis of the sampling technique . in order to find an unbiased estimate of the expectation value of , one simply has to insert his measurement outcomes into the so - called pattern function and calculate the empirical mean of the values according to eq . . for the ability to make justified statements ,one still needs a measure of the uncertainty of the estimate .the _ empirical variance of the sampling points _ is given by this number quantifies the spreading of the points .the factor guarantees that the estimate is unbiased , i.e. ^ 2 da.\ ] ] practically , the estimation of the empirical variance requires the calculation of the second moment of , we easily see that this is exactly the sampling equation , just with the square of the function . therefore , and due to the orthogonality of the eigenstates , its expectation value again equals to the quantum mechanical one , the same holds for arbitrary moments of the function . as a consequence , also the _ quantum mechanical variance _ of , ^ 2\ } - [ { \rm tr}\{\hat{\rho}\hat{f}(\hat{a})\}]^2\ ] ] can be estimated without bias from the set of sampling points : finally , we are interested in the statistical uncertainty on the estimate . from estimation theory , this is just the empirical variance of the sampling points , divided by the number of measurements : the factor guarantees that for an increasing number of data points , the uncertainty of the empirical mean value is decreasing . for ,the latter approaches stochastically the quantum mechanical expectation value . in conclusion, we may estimate the quantum mechanical expectation value of the operator by the sampling equation , and the variance of is exactly expected to match the quantum mechanical variance , divided by the number of data points . in this sense , we may state that the determination of can be done on the quantum mechanical level of uncertainty .there are no other sources of noise contributing to the uncertainty , and it is not possible to achieve less fluctuations with classical statistical means .we emphasize that so far we only considered a function of the directly measurable operator , making this result possible .after briefly discussing some examples , we show that the situation becomes completely different when we require several non - commuting observables to estimate the expectation value of an operator . 
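the statements of this section are easy to verify with simulated data . as an illustration of ours , consider photon - number outcomes of a coherent state , which are poisson distributed ; the empirical variance of the sampled values then reproduces the quantum mechanical variance , and the uncertainty of the mean shrinks with the number of measurements .

```python
import numpy as np

rng = np.random.default_rng(7)

alpha2 = 4.0                        # mean photon number of the coherent state
N = 10**5
n = rng.poisson(alpha2, N)          # simulated photon-number outcomes (poissonian statistics)

f = n.astype(float)                 # estimate <n>, i.e. the function f(a) = a
mean = f.mean()                     # empirical expectation value
var_points = f.var(ddof=1)          # empirical variance of the sampling points
uncertainty = np.sqrt(var_points / N)

print(mean, "+/-", uncertainty)                       # reproduces <n> = 4 within the error bar
print(var_points, "vs quantum variance", alpha2)      # for a coherent state both equal |alpha|^2
```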
as a first example , let us consider the measurement of the quadrature operator , which is frequently done in balanced homodyne measurements . in this case , the set of eigenvalues is given by the continuous spectrum .according to the above calculations , any function of a single quadrature can be estimated at the quantum mechanical level of uncertainty .this includes all kinds of moments of the quadrature , for instance normally ordered ones .however , note that we only may use a quadrature at a single phase .if we consider functions of quadratures of different phases , the situation will become completely different , as we will show below .photon number resolving detectors can record the outcomes of the photon number operator . here, the set of eigenvalues is the discrete spectrum .again , we can state that the expectation value of any function of the photon number operator can be estimated at the quantum mechanical level of uncertainty .we still note that in practice one can only record a finite number of measurements , leading to some uncertainty on photon number probabilities with very low values , which typically occur for very large photon numbers .however , this problem can be minimized by increasing the number of measurements .measurements of a single operator , such as the quadrature or the photon number , can not characterize a quantum state completely .for some operators , one needs more information about the state in order to estimate some expectation value .for instance , measurements of the quadrature distributions for all phases in are informationally complete , and we may calculate any expectation value from this measurement outcomes . on the other hand , it has been shown that so - called quasiprobability representations of quantum states can be retrieved by photon number resolved measurements , when one displaces the quantum state in phase space before the measurement . in the following , we consider these methods more in detail . for having a meaningful reference ,we start with the calculation of the quantum mechanical variance of some operator . here , we express it in terms of the characteristic functions of the wigner function of the density operator of the state and of the observable . in general , the characteristic function of the wigner function is defined as with being the well - known displacement operator . for hermitian operators , we have the relation . moreover , if is the density operator of a state , we will omit the index of throughout the paper .conversely , the operator may be retrieved from its characteristic function by with these quantities at hand , we may calculate the expectation value of with respect to the state by let us now express the second moment of in terms of characteristic functions .inserting eq .for the operator , we obtain by applying the equality and writing both integrals in polar coordinates , we find the final expression in our notation , the integrals over the angles range from to , while the integration over covers the full real line .the variance can then be easily calculated as ^ 2.\label{eq : var : f}\ ] ] equation will be the reference for comparison with the variance arising in sampling methods .let us assume that the state is sent to a balanced homodyne detector , recording quadrature values at different phases . 
here, we assume that the phase values are uniformly distributed within this interval .then , the quadrature follows the quadrature distribution , which is conditioned on the value of .the joint probability distribution is given by .sampling is an established technique to estimate the expectation value of an hermitian operator from this set of data by an empirical mean of a suitable pattern function , analogously to eq ., this number is a random variable , whose expectation value is given by the pattern function has to be designed such that this expectation value equals to the quantum mechanical one , let us now find such a suitable pattern function belonging to the operator .the characteristic function of the state is connected to the quadrature distribution as inserting this equation into eq . and writing the integration over in polar coordinates , we obtain from this relation , we easily see that the pattern function is given by by construction , the expectation value of this pattern function with respect to the joint quadrature distributions always equals to the quantum mechanical expectation value , see eq .. moreover , we are interested in the empirical variance of the single data points , which can be estimated from the experimental data by being completely analogous to eq . .consequently , the expectation value of this variance is given by ^ 2 } - \overline{f(x,\varphi)}^2.\label{eq : pattern : variance}\ ] ] finally , the variance of the estimate for the expectation value of the operator can be obtained from this number quantifies the uncertainty of the estimate .let us now examine if we can expect the empirical variance eq . to match the quantum mechanical variance of the operator .contrarily to the procedure above , the operator now depends on quadratures at different phases , whose operators do not commute . for getting a deeper understanding, we concentrate on the second moment of . by using the inverse relation of eq ., we find ^ 2\\ & = & \frac{1}{2\pi^2 } \int dx\ , d\varphi\ , db\ , db'\ , db''\,|b'||b''| e^{i(b'+b''-b)x}\nonumber\\&&\times \phi(i b e^{i\varphi})\phi^*_{\hat{f}}(i b ' e^{i\varphi } ) \phi^*_{\hat{f}}(i b ''e^{i\varphi}).\label{eq : expect : f2:1}\end{aligned}\ ] ] we substitute in order to remove the imaginary unit in the arguments of the characteristic function .a careful analysis shows that we do not have to change the integration domain due to the periodicity of the integrand .moreover , the integral over can be evaluated as which can be used to evaluate another integral in : together with eq ., we find the theoretically expected variance of the sampling method .we stress that this equation is not directly evaluated in practice , since the underlying quadrature distribution is unknown , and we only have a sample of quadrature measurements .instead , we use the empirical variance given in eq . .however , the theoretical expectation is given by eq .in combination with eq . 
and and the basis for all following considerations .let us compare the quantum mechanical variance with the variance expected from the sampling method .since the first moments of the sampling method and quantum mechanics are equal by construction , it is sufficient to examine the second moments of and .the expressions from the sampling method and the quantum mechanical calculation look quite similar , but they are different : in , one integration over is missing .a closer look reveals that if one replaces in the quantum mechanical expectation , one finds the expression for the pattern function .obviously , since we only observe the quadrature distribution at a fixed phase , and all measured quadratures for different are stochastically independent , the `` correlations '' of the quadrature distributions between different phases and are not taken into consideration .this is due to the fact that is not the joint distribution of all quadratures , whose marginals are the observed quadrature distributions .the definition of the joint distribution suffers from the problem of the non - commutativity of the corresponding quadrature operators and is closely related to the different phase - space distributions . as a consequence ,the quantum mechanical variance is not equal to the empirical variance of the pattern function . in this sense ,the statistics of the balanced homodyne measurements is not at the quantum mechanical level , since the expectation value of can not be estimated with quantum mechanical uncertainty .the definition of a joint probability of two quadratures suffers from the problem of the non - commutativity of two quadratures for different phases . in this context thereappear similar problems as the well known ambiguity of the definition of phase - space distributions .we also note that the statement does not change when the examined operator is phase - insensitive , i.e. in this case , it is sufficient to record the completely phase - diffused quadrature distribution which can be easily done by choosing a uniform phase distribution of the local oscillator coherent state .this scheme may be practically more simple , since one does not have to record the phase values .however , it is still necessary to guarantee the uniform phase distribution in .moreover , it does not bring any statistical benefit , since the variance of the pattern function remains the same .therefore , the same number of data points is required to obtain the same statistical precision .finally , we might ask if it is possible to experimentally construct a bipartite state described by the characteristic function , which appears in eq . .the arguments and are assigned to each of the two subsystems .if this was possible , one could use it for estimating the quantum mechanical variance from eq . .practically , we would need to measure joint quadrature distributions which seems to require joint balanced homodyne measurements on the bipartite state .the problem is just that does not refer to a physical state .this becomes clear when we examine the covariance matrix .the required moments can be obtained by taking the derivatives of at .we note that all moments of the bipartite state can be expressed in moments of the state , since therefore , if the state described by has a quadrature covariance matrix the bipartite state can be characterized by the matrix in order to describe a physical state , this matrix has to satisfy the nonnegativity condition where however , the minors of are always negative , e.g. 
therefore , eq . is violated by the covariance matrix of the bipartite characteristic function , and can never correspond to a physical quantum state , which could be generated in an experiment . as a consequence, it seems unfeasible to estimate the variance on the quantum mechanical level .next , we consider a more general setting .first , we assume that we look at a family of observables , being constructed by a coherent displacement of some initial operator : the characteristic function of these operators is given by with a coherent state . ]furthermore , we also apply the displacement to the state , but we look at a suitable experimental realization .let us send the initial state , described by , to a beamsplitter with real transmissivity and reflectivity , satisfying ( see fig . [fig : setup ] ) . at the second input of the beamsplitter , we irradiate a coherent state with amplitude , described by its characteristic function the resulting output state has the characteristic function we have chosen the amplitude of the coherent state such that the reflectivity does not appear in all calculations .note that the transmissivity can also be used for taking the detector quantum efficiency into account .it is well known that an imperfect detector with efficiency can be modelled by first mixing the input state with a fraction of of vacuum and subsequently performing the measurement with an ideal detector .the transmissivity of the corresponding beamsplitter is given by . by applying eq .( [ eq : mix : input : with : coh ] ) on the state with transmissivity and coherent state amplitude , we find therefore , an imperfect detector can be simply taken into account by replacing the transmissivity of the beam splitter by the effective transmissivity . in consequence , we may consider only ideal detectors .of course , the expectation value of the pattern function corresponding to with respect to the new output state is exactly the quantum mechanical expectation again .let us look at the second moment of the pattern function , by inserting eq . and into : focus on the following aspect : we want to estimate the quantum mechanical expectation value of the operator with respect to the initial state . for this purpose, we have two possibilities : 1 . we do not displace the initial state and omit the beamsplitter .mathematically , this is given by and .the expectation value of is obtained by choosing the pattern function corresponding to , i.e. by suitable calculations after the balanced homodyne measurement .we displace the state by .the calculations after the measurement only require the pattern function for , i.e. we set in eq . .the expectation value of the sampling procedure is in both cases the same , namely the quantum mechanical expectation .however , the uncertainties differ : in the second case , the initial state enters as , which is the state after exposition to losses with .therefore , we may expect a worse result than in the first scheme , which only depends on the perfect state .this finding also holds when we consider imperfect detection : if the quantum efficiency of the balanced homodyne detector equals to , we had to replace in the first case and in the second case .therefore , the beamsplitter which displaces the initial state introduces unavoidable losses .this has consequences for the discussion of quantum state tomography .in the case of quantum state tomography , one is interested to find a complete representation of the quantum state . 
here, we discuss the representation with means of quasiprobabilities . in many cases ,these quasiprobabilities can be represented as the expectation value of a displaced operator , which is phase - insensitive : the coefficients in the fock basis expansion of , determine a specific quasiprobability .for instance , the wigner function is obtained by choosing , while the function arises from .more generally , the coefficients of -ordered quasiprobabilities can be written as with being the laguerre - polynomial .for the family of -parameterized quasiprobabilities , one chooses , for nonclassicality quasiprobabilities one uses a suitable nonclassicality filter .we have discussed different techniques for estimating expectation values of the form .first , we notice that the so - called cascaded balanced homodyning technique does not provide any advantage over the standard balanced homodyne detection .the former corresponds to the method 2 described in the previous section , where one displaces the state and estimates for , while the latter is realized by the method 1 .obviously , the former suffers a reduction of the quantum efficiency by the transmissivity of the first beamsplitter .this is not affected by the fact that is phase - independent , and recording of the phase values is not required in the cascaded measurement .one can only improve the situation by choosing a beamsplitter with high transmissivity .second , we note that balanced homodyne measurements followed by sampling of quasiprobabilities does not work on the quantum mechanical level of uncertainty .therefore , we can not expect that this scheme is optimal for this purpose .indeed , if one is interested in quasiprobabilities , there is a better alternative : the unbalanced homodyne detection technique is based on a different interpretation of eq . :first , the quantum state is displaced in the same way described in sec .[ sec : displacement : and : operators ] .afterwards , the expectation value of the phase - independent operator is sampled from photon number measurements . since thisonly requires the measurement of a single observable , namely the photon number , the estimation of the quasiprobability at a specific point is performed on the quantum mechanical level of uncertainty .provided that the balanced homodyne detector and the photon number resolved detector had the same quantum efficiency , we recommend to choose the latter one , since we expect it to give better results .finally , we emphasize that the unbalanced scheme is optimal for the estimation of the phase - space representation at fixed points , but does not cover correlations between different points . therefore ,if one wants to estimate quantities which require the knowledge of the quasiprobability at different , one can not expect to achieve this at the quantum mechanical level of uncertainty as well .for instance , quadrature distributions are better measured in balanced homodyne detection schemes . in this sense, the unbalanced measurement is optimal for the estimation of quasiprobabilities , but not always the best in different cases .to demonstrate the difference of the statistical uncertainty in balanced homodyne and photon - number resolved detection , let us consider the determination of a nonclassicality quasiprobability of a squeezed state .its variances are and , whereas the variance of the vacuum state shall be . 
we will apply as a filter , with .the prefactor guarantees that , and the filter width is fixed with .moreover , the pattern function is defined by choosing . for the balanced homodyne detection scheme, we generate a set of data points , each consisting of a pair .the phase values are uniformly distributed in the interval , whereas follows the quadrature distribution conditioned on the value of . from this simulated set of data, we sample the nonclassicality quasiprobability together with its statistical uncertainty . for reasons of simplicity ,we assume to have an ideal detector , i.e. . in case of the unbalanced homodyne detection scheme, we calculate the photon - number distribution together with its variance theoretically , the maximum photon number is restricted to .then we derive the statistical uncertainty from the result by means of linear error propagation . and .] . ] . ]figure [ fig : pomega ] shows the nonclassicality quasiprobability .we observe negativities for some real , being signatures of the nonclassicality of the squeezed state , the minimum is achieved at with .figures [ fig : stddev : homo ] and [ fig : stddev : unbalanced ] show the standard deviations and , which are obtained from balanced or unbalanced homodyne measurements respectively .they are calculated from eq . and ,each divided by the number of measurements .obviously , they show a completely different behavior .the uncertainty from balanced homodyne detection is more than a factor of larger than the one from the unbalanced technique , the exact difference depends on the point in phase space and on the examined state . in particular , at , the homodyne measurement provides , leading to an insufficient significance of the negativity of standard deviations .contrarily , we have in the unbalanced case , leading to a significance of about standard deviations. therefore , in case of equal quantum efficiency , the unbalanced scheme proves to be much better than the balanced one .[ fig : comparison ] illustrates this conclusion clearly .we compared different approaches for estimating the expectation value of some physical quantity with respect to its statistical uncertainty .first , we showed that whenever one can estimate a quantity of interest as a function of a single operator , which can be directly measured , then the estimate is on the quantum mechanical level of uncertainty , i.e. the empirical variance equals to the quantum - mechanical one . in practice , this works for quadratures and photon number measurements , for instance . however , for many operators a direct measurement is not known , and techniques for quantum tomography have to be applied .we considered sampling methods , which are applied to phase - dependent quadrature data from balanced homodyne measurements .we show that the cascaded balanced homodyne measurement has no statistical advantage over the standard technique , although the first method only requires a phase - randomized local oscillator .moreover , both methods do not operate on the quantum mechanical level of uncertainty . on the contrary, the unbalanced measurement technique can achieve the quantum mechanical level of uncertainty .therefore , the latter shall be generally the best of the considered methods .we also identified the main reason for the difference between the variance from balanced homodyne measurement and the quantum mechanical variance : it is due to the fact that quadratures at different phases are always measured stochastically independently . 
therefore , there seems to be a lack of `` phase correlations '' in the experimental data .the severe question is how this problem affects other quantum state reconstruction methods , like maximum likelihood techniques . in particular , it is unclear if the latter method is able to perform the estimation on the quantum mechanical level .our results have direct implications to the different approaches for reconstructing quasiprobabilities or density matrices of states .we showed the advantage of the unbalanced measurement with the example of a nonclassicality quasiprobability of a squeezed state . therefore ,if photon - counting devices with quantum efficiencies comparable to balanced homodyne detectors will be available in the future , our findings suggest to prefer the unbalanced scheme for quantum state estimation .the author gratefully thanks w. vogel for fruitful discussions and his assistance .this work was supported by the deutsche forschungsgemeinschaft through sfb 652 .99 l. mandel , progress in optics * 2 * , 181 ( 1963 ) .p. l. kelley and w. h. kleiner , phys .rev . * 136 * , a316 ( 1964 ) .r. j. glauber , in quantum optics and electronics , edited by c. de witt , a. blandin , and c. cohen - tannoudji ( gordon and breach , new york , 1965 ) .h. yuen and v. w. s. chan , opt . lett . * 8 * , 177 ( 1983 ) .w. vogel and j. grabow , phys .rev . a * 47 * , 4227 ( 1993 ) .s. wallentowitz and w. vogel , phys .a * 53 * , 4528 , ( 1996 ) .j. w. noh , a. fougeres , and l. mandel , phys .lett . * 67 * , 1426 ( 1991 ) .m. freyberger and w. schleich , phys .a * 47 * , r30 ( 1993 ) .k. vogel and h. risken , phys .a * 40 * , 2847(1989 ) .t. kiesel , w. vogel , v. parigi , a. zavatta , and m. bellini , phys .a * 78 * , 021804 ( 2008 ) .h. khn , d.g .welsch , w. vogel , j. mod . opt . * 41 * , 1607 ( 1994 ) .g. m. dariano , c. macchiavello , and m. g. a. paris , phys .a * 50 * , 4298 ( 1994 ) .g. m. dariano , u. leonhardt , and h. paul , phys .rev . a * 52 * , r1801 ( 1995 ) .a. zucchetti , w. vogel , m. tasche , and d.g .welsch , phys .a * 54 * , 1678 ( 1996 ) .z. hradil , phys .rev . a * 55 * , r1561 ( 1997 ) .k. banaszek , g. m. dariano , m. g. a. paris , m. f. sacchi , phys .a * 61 * , 010304 ( 1999 ) .a. s. holevo , _ probabilistic and statistical aspects of quantum theory _ , north - holland amsterdam ( 1982 ) .k. banaszek , j. mod . opt . * 46 * , 675 ( 1999 ) .g. m. dariano , quantum semiclass . opt . * 7 * , 693 ( 1995 ) .j. ehek , d. mogilevtsev , and z. hradil , new j. phys .* 10 * , 043022 ( 2008 ) .k. m. r. audenaert , s. scheel , new j. phys . * 11 * , 023028 ( 2009 ) .j. diguglielmo , c. messenger , j. fiurasek , b. hage , a. samblowski , t. schmidt , r. schnabel , phys .a * 79 * , 032114 ( 2009 ) .e. prugoveki , int .* 16 * , 321 ( 1977 ) .p. busch , int .. phys . * 30 * , 1217 ( 1991 ) .a. perelomov , _ generalized coherent states and their applications _ , springer berlin ( 1986 ) .r. simon , n. mukunda , and b. dutta , phys .a * 49 * , 1567 ( 1994 ) . g. s. agarwal and e. wolf , phys .d * 2 * , 2161 ( 1970 ) .t. kiesel and w. vogel , phys .rev . a * 82 * , 032107 ( 2010 ) . t. kiesel and w. vogel , submitted for publication . z. kis , t. kiss , j. janszky , p. adam , s. wallentowitz , and w. vogel , phys . rev .a * 59 * , r39 ( 1999 ) .t. kiesel , w. vogel , b. hage , and r. schnabel , phys .lett * 107 * , 113604 ( 2011 ) .
|
in quantum physics, all measured observables are subject to statistical uncertainties, which arise from the quantum nature of the measured system as well as from the experimental technique. we consider the statistical uncertainty of the so-called sampling method, in which one estimates the expectation value of a given observable by empirical means of suitable pattern functions. we show that if the observable can be written as a function of a single, directly measurable operator, the variance of the estimate obtained with the sampling method equals the quantum mechanical one. in this sense, we say that the estimate is on the quantum mechanical level of uncertainty. in contrast, if the observable depends on non-commuting operators, e.g. quadratures at different phases, the quantum mechanical level of uncertainty is not achieved. the impact of these results on quantum tomography is discussed, and different approaches to quantum tomographic measurements are compared. it is shown explicitly, for the estimation of quasiprobabilities of a quantum state, that balanced homodyne tomography does not operate on the quantum mechanical level of uncertainty, while unbalanced homodyne detection does.
|
since the pioneer work done by grover and sahai , wireless information and power transfer ( wipt ) has spurred considerable interests from academia and industry , especially in the area of wireless communications .nowadays , most wireless devices are powered via power cables or battery replacement , which limits the scalability , sustainability , and mobility of wireless communications . in practice ,wireline charging and battery replacement may be infeasible or incur a high cost under some conditions .for instance , it is impossible to replace the battery of implanted medical devices in human bodies . besides , wireline charging and battery renewal shortens working period of wireless mobile devices . as a result , radio frequency ( rf )signal based wipt was proposed as a complementary technology to prolong the lifetime of power - limited nodes or networks in a relatively simple and reliable way . in wipt systems , information and power signalsare carried by the same rf - wave .then , information is recovered at information receivers , and electromagnetic energy is harvested and converted into electric energy at power receivers .however , due to the open nature of wireless channels , information signals are also received by power receivers or other unintended receivers , resulting in potential information leakage .traditionally , upper - layer encryption techniques are utilized to guarantee secure information transmission .however , conventionally cryptography techniques may consume a large amount of energy due to associated high computational complexity .it may lead to a low system energy efficiency and become a high burden on power transfer to support energy for encryption and decryption .recently , physical layer security ( phy - security ) was proved to be an effective alternative to provide secure communications by exploiting the characteristics of wireless channels , such as fading , noise , and interferences . especially in wipt systems , a power signal , dedicated for transferring wireless power , can be exploited to confuse the eavesdroppers , so as to enhance wireless security . meanwhile , the information signal can also act as a power source to increase the amount of harvested energy at the power receivers .thus , phy - security techniques can be naturally applied to secrecy wipt ( swipt ) . from an information - theoretic viewpoint ,the essence of phy - security is to maximize the secrecy rate , which is defined as a rate difference between the main channel from the transmitter to the legitimate receiver , and the wiretap channel from the transmitter to the eavesdropper .hence , it is necessary to enhance the signal received at the legitimate information receiver and to impair the signal received at the eavesdroppers simultaneously .however , the design of phy - security techniques is a non - trivial task in swipt systems , since it also needs to achieve another important objective , namely the wireless power transfer efficiency .in other words , the design of swipt systems can be naturally formulated as a dual - objective optimization problem , one for secure information transmission , the other for efficient power transfer . in general, these two objectives may conflict with each other .let us take a look at a simple example . 
if a power receiver is a potential eavesdropper , then the effort in improving the power transfer efficiency via increasing the power of information signal may result in a loss of secrecy rate .thus , it is imperative to strike a good balance between information transmission security and power transfer efficiency . in , the tradeoff between information transmission security and power transfer efficiencyis formulated into two different problems : the first problem maximizes the secrecy rate subject to individual minimum harvested energy requirement , while the second problem maximizes the weighted sum of harvested energy subject to a minimum required secrecy rate constraint .the two problems are solved jointly by designing spatial beamformers for information and power signals at a multiple - antenna base station .it is generally known that channel state information ( csi ) at a transmitter has a great impact on the performance of multiple - antenna systems. however , the csi may be imperfect due to channel estimation errors or limited csi feedback .thus , the authors in designed a robust beamforming scheme for multiple - antenna swipt systems in the presence of channel uncertainty of information and power receivers . to further improve the performance of swipt , the authors in proposed to use both artificial noise ( an ) and power signal to confuse eavesdroppers and increase the amount of harvested energy simultaneously through proper spatial beamforming .moreover , swipt in relaying systems , multicarrier systems , and cognitive radio networks were also investigated , respectively . in general, it is impossible to find a universal solution for guaranteed performance of swipt , since there are a variety of scenarios with different information transmission , power transfer , and signal eavesdropping schemes .moreover , for enabling phy - security , there are a lot of viable techniques .thus , it is necessary to adopt different phy - security techniques according to different systems . in this article, we first investigate the security issues in various wipt scenarios , and point out the fundamental challenges for enabling communication security .then , we provide a survey of several effective physical - layer techniques to realize swipt , with an emphasis on revealing their performance advantages and limitations .furthermore , we propose to use massive mimo techniques to enhance information transmission security and to improve power transfer efficiency simultaneously .finally , we discuss some future research directions .security is a common problem in wireless communication networks due to the broadcast nature of wireless channels . to be more specific, the information sent to a legitimate receiver is also received by unintended receivers , namely eavesdroppers . from the perspective of phy - security , in order to guarantee communication security , it is necessary to enhance the signal received at the legitimate receiver and to impair the signal seen at the eavesdroppers simultaneously .however , in swipt systems , information transmission security and power transfer efficiency are equally important .the dual system objectives introduce a paradigm shift and bring in new challenging issues to the design of phy - security in swipt systems .in particular , information signals and power signals may compete with each other for the limited system resources . besides , the objectives of secure information transmission and efficient power transfer may not align or even conflict with each other . 
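to make this tension concrete, the following small sketch (a toy miso example with assumed channel realizations and a simple mrt-style power split between an information beam and an energy beam; it is not one of the optimized designs from the works cited above) traces the secrecy rate against the harvested power as the split is varied.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy miso downlink (all channels and numbers are illustrative assumptions):
# one information receiver (channel h) and one power receiver (channel g) that
# is treated as a potential eavesdropper and sits closer to the transmitter.
M, P, sigma2, eta = 4, 1.0, 1e-2, 0.5
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
g = 2.0 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

def operating_point(rho):
    """rho*P goes into an information beam matched to h, (1-rho)*P into an
    energy beam matched to g.  the energy waveform is assumed to be a known
    (pseudo-random) signal that the legitimate receiver can cancel, while it
    acts as interference at the eavesdropping power receiver."""
    w_i = np.sqrt(rho * P) * h / np.linalg.norm(h)
    w_e = np.sqrt((1.0 - rho) * P) * g / np.linalg.norm(g)
    r_legit = np.log2(1.0 + abs(h.conj() @ w_i) ** 2 / sigma2)
    r_eve   = np.log2(1.0 + abs(g.conj() @ w_i) ** 2 / (sigma2 + abs(g.conj() @ w_e) ** 2))
    harvested = eta * (abs(g.conj() @ w_i) ** 2 + abs(g.conj() @ w_e) ** 2)
    return max(0.0, r_legit - r_eve), harvested

for rho in (0.1, 0.3, 0.5, 0.7, 0.9):
    r_s, q = operating_point(rho)
    print(f"info-power fraction {rho:.1f}:  secrecy rate {r_s:5.2f} bit/s/Hz,  harvested power {q:.3f}")
```

sweeping the fraction rho traces out achievable (secrecy rate, harvested power) pairs: spending more power on the information beam raises the secrecy rate but lowers the energy delivered to the power receiver, which is exactly the conflict between the two design objectives discussed above.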
in what follows, we give an overview of security issues in different swipt systems .in swipt scenario , a central node plays the role as both an information and a power transmitter , which broadcasts information and power signals to the corresponding receivers , as shown in fig .however , the power receivers also receive the information signal .if the power receivers are malicious , eavesdropping may take place .thus , they should be treated as potential eavesdroppers for guaranteeing swipt .the challenges of swipt based on phy - security in such a scenario are mainly three - fold : 1 .short distance interception : in practice , information and power receiving circuits have very different power sensitivities .typically , the minimum requirement for received power at an information receiver is -60 dbm , while that of a power receiver is -10 dbm . in order to satisfy the sensitivity requirements ,a power receiver is usually located closer to the transmitter than an information receiver . in other words , the power receiver , acting as an potential eavesdropper , may have a shorter signal propagation distance than the legitimate information receiver . as a well known fact ,the receive signal quality is a decreasing function of propagation distance .thus , the signal received at a power receiver ( potential eavesdropper ) may be much stronger than that received at the legitimate receiver , resulting in a high risk of information interception .cooperative eavesdropping : in swipt , a power receiver may be a potential eavesdropper , and thus it is unlikely to impair the signal at the eavesdropper as much as possible in order to fulfill the requirement of energy harvesting . in this context , malicious power receivers may eavesdrop information cooperatively . for instance, they might cooperate with each other to perform joint signal detection .thus , the quality of intercepted signal may be much better than that of the signal received at the information receiver , leading to potential information leakage .inter - user interference : the transmitter broadcasts multiple information and power signals simultaneously .then , the information receiver suffers strong interference from not only undesired information signals , but also the power signals .however , for conventional interference mitigation schemes , mitigating inter - user interference may lead to a weak rf signal at the power receiver , which are not applicable to the swipt in broadcasting channels .although swipt in broadcasting channels faces several challenging issues , there are some opportunities to realize secure transmission . for example, the transmitter may acquire the channel state information ( csi ) of the power receivers ( potential eavesdroppers ) , since they are the legitimate users for harvesting power of the swipt systems .then , the transmitter can design suitable transmission schemes according to the instantaneous csi , so as to optimize the secrecy performance while satisfying the qos requirements of energy harvesting at the power receivers .cooperative relaying can shorten transmission distance and provide a diversity gain , and thus it is a commonly used performance - enhancing technique in conventional wireless communication networks .similarly , cooperative relaying can also be adopted to improve the performance of swipt , as shown in fig .[ fig2 ] . 
in swipt systems, there are two fundamental relaying modes .the first mode is wireless powered relaying communication , where a relay without self power supply splits up the received signal sent by the source into two components , one for information relaying , the other for energy harvesting .then , the relay forwards the information signal to the destination with harvested energy .the second mode is that the relay with self power supply forwards the information signal from the source received previously to the information receiver and sends the power signal to the energy receiver concurrently . for both relaying modes , there are some common challenging issues to ensure security in wipt networks as follows . 1. untrusted relay : in a untrusted relay model , although the relay is a cooperative node , it may be a potential eavesdropper or a malicious node . in the swipt , a untrusted relay may increase interception probability significantly .for instance , if the relay is powered by received signal from the source , it may legally receive the signal and decode it .even if the relay only forwards the signal without energy harvesting , it may pollute the information signal with a power signal intentionally , and thus weaken the signal quality at the information receiver .2 . vulnerable transmission : cooperative relaying transmissionusually requires two orthogonal time slots to complete information transmission .thus , an external eavesdropper can receive two copies of the information signal , which can be combined to improve the signal - to - noise - ratio ( snr ) . especially in swipt , the power receiver , as a potential eavesdropper ,may receive two strong copies of signals in order to satisfy the requirement on receiver sensitivity .thus , the signal quality at the eavesdropper might be better than that at the information receiver . in other words ,the transmission is susceptible to eavesdropping . nevertheless , there are still some powerful relaying techniques to enhance both information transmission security and power transfer efficiency . in particular, it is possible to perform multiple - relay cooperative transmission .the relays can cooperative with each other to create a virtual multiple - input multiple - output ( mimo ) .for example , some relays can share their antennas to perform information beamforming to a legitimate information receiver , while the others can adopt power beamforming to transfer wireless power to the power receivers . in an interference network ,multiple information and power transceivers communicate with each other over the same channel , as shown in fig .[ fig3 ] . for a power receiver ,it can receive multiple signal streams from all the transmitters , which can increase the amount of harvested energy .however , the concurrent information and power transmissions will generate severe co - channel interferences at an information receiver , which results in a low received signal - to - interference - plus - noise ratio ( sinr ) and a high risk of information leakage .next , we discuss the following challenging issues for swipt in interference networks . 1 . 
uncoordinated transmission : to simultaneously enhance information transmission security and improve power transfer efficiency , it is better to coordinate the transmissions between information and power transmitters .however , information and power transmitters are geographically separated in heterogeneous networks , and it is difficult for them to exchange information and coordinate transmissions between each other .2 . unavailability of csi : according to the theory of wireless power transfer , power receivers convert the harvested rf signals into electric energy . in other words , the power receivers are not necessarily equipped with baseband circuits for signal processing .then , the power receivers may not be able to perform channel estimation to get the csi .as a result , the transmitters can not adaptively choose secure schemes according to instantaneous csi , since the csi is usually fed back from the receivers .conflicting objectives : for swipt in interference networks , interference has a detrimental impact on information transmission , but is beneficial for power transfer .it may reduce the amount of harvested energy at a power receiver , while mitigating the interference at an information receiver for improving the secrecy rate .it is a challenging task to balance the two conflicting objectives for the design of swipt in interference networks .note that the co - channel interference in interference networks can also be used to enhance information transmission security and power transfer efficiency , if the transmission scheme is properly designed .first , the multiple signal streams can be exploited to increase the amount of harvested energy at the power receivers .second , the undesired signals can act as artificial noise to confuse the eavesdroppers to improve the secrecy performance . in wireless powered communication networks ,a power receiver uses harvested power from rf signals to send messages to an information receiver , as shown in fig .[ fig4 ] . a practical application for such a network is medical care .specifically , the implanted medical devices transmit the information to the instrument outside with the harvested energy .with respect to other swipt scenarios , swipt in wireless powered communication networks has some special challenging issues , listed as follows .1 . difficulty in performing joint resource allocation : wireless powered communication combines information and power transfer more closely than general wipt . in particular , the harvested power at a power receiver ( also as an information transmitter ) affects the performance of information transmission directly . for phy - security , the secrecy rate is not an increasing function of transmit power .thus , it is necessary to allocate the resources between power transfer and information transmission to optimize the secrecy performance .for example , a time slot should be partitioned into two orthogonal sub - time slots .the first sub - time slot is used for power transfer and the second is exploited for information transmission .in fact , the optimal partition of a time slot is coupled with transmit power at the power transmitter .thus , it makes sense to allocate these resources jointly . 
yet , the resulting optimization problem is generally non - convex , which does not facilitate the design of computational efficient resource allocation algorithms .weak anti - eavesdropper capability : for swipt in wireless powered communication networks , the transmit power at an information transmitter is obtained through rf energy harvesting , and thus there is only a limited available power . in this case , it is unlikely to adopt sophisticated anti - eavesdropper schemes at the information transmitter , resulting in a weak anti - eavesdropping capability . as a simple example , if there is no enough power , the power of artificial noise is too low to interfere with the eavesdropper effectively . for enabling swipt in wireless powered communication networks ,it is better to give the task of resource allocation and anti - eavesdropping to the power transmitter , which may have enough power to support and facilitate sophisticated secure schemes to guarantee swipt .recently , swipt in cognitive radio networks ( crns ) has received a lot of attentions . in this case, a secondary transmitter broadcasts both information and power signals over a licensed spectrum band owned by a primary network , as shown in fig .in general , the precondition for the secondary network to perform wipt in licensed spectrum is that the activities in the secondary network will not degrade the qos of the primary network .however , to enable swipt in such networks , we need to tackle the following challenging issues : 1 .open architecture : due to the open and dynamic nature of cognitive radio architecture , various unknown information and power devices are allowed to opportunistically access the licensed spectrum .this is vulnerable to eavesdropping as a power receiver , as a potential eavesdropper might obtain more knowledge of information transmitter due to the signal exchanges during cooperative spectrum sensing .2 . restricted secure scheme : in order to fulfil the precondition for spectrum access , there are limited degrees of freedoms available for information and power transfer , resulting in a performance degradation .for instance , the transmit direction and power for artificial noise is further limited due to interference constraint .interference management : primary and secondary networks coexist over the same spectrum .thus , secondary information receivers may encounter the interferences from the primary network , resulting in a low quality of received signal and a low secrecy rate .swipt in cr networks opens up new opportunities for cooperative communications between the primary and secondary systems at both the information and power harvesting levels .in particular , the secondary transmitter can transmit both secret information and power signals to the secondary receivers , while it charges energy limited primary receivers wirelessly , in exchange of utilizing the licensed spectrum .this approach provides more incentives for both systems to cooperate and therefore improves the overall system performance .as mentioned earlier , swipt faces a variety of challenging issues .thus , it is necessary to adopt some effective phy - security techniques to enhance information security .however , different from traditional phy - security techniques in secure communications , the techniques designed for swipt networks focus not only on information transmission security , but also on power transfer efficiency . 
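before surveying these techniques, the coupling between the time-slot partition and the transmit power noted above for wireless powered communication networks can be illustrated with a small sketch (a deliberately simplified harvest-then-transmit model with assumed channel gains and a single eavesdropper; real secrecy designs are considerably more involved).

```python
import numpy as np

# toy harvest-then-transmit slot (all gains and numbers are assumptions):
# during a fraction tau of the slot the power transmitter charges the device,
# during the remaining (1 - tau) the device spends the harvested energy on its
# own transmission, which an eavesdropper also overhears.
eta, sigma2 = 0.6, 1e-3
h_dl   = 0.05      # downlink power-transfer channel gain
g_info = 0.02      # uplink gain to the intended information receiver
g_eve  = 0.008     # uplink gain to the eavesdropper

def secrecy_throughput(tau, P):
    """secrecy bits per slot for a given time split tau and transmit power P."""
    p_tx = eta * tau * P * h_dl / (1.0 - tau)     # device transmit power
    r_s  = np.log2(1.0 + p_tx * g_info / sigma2) - np.log2(1.0 + p_tx * g_eve / sigma2)
    return (1.0 - tau) * max(0.0, r_s)

taus = np.linspace(0.01, 0.99, 99)
for P in (1.0, 10.0):
    vals = [secrecy_throughput(t, P) for t in taus]
    best = int(np.argmax(vals))
    print(f"P = {P:5.1f}:  best tau = {taus[best]:.2f},  secrecy throughput = {vals[best]:.2f} bit/slot")
```

the best split depends on the transmit power of the power transmitter, which is why the two quantities should be optimized jointly; adding practical constraints quickly renders the joint problem non-convex, as noted above.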
in the sequel, we introduce several powerful phy - security techniques for guaranteeing swipt .multiple antenna technique is commonly used in various swipt scenarios to improve communication security . by exploiting the spatial degrees of freedom offered by multiple antennas , it is possible to enhance the received signals at both information receivers and power receivers to weaken the received signal at eavesdroppers simultaneously , so as to improve the secrecy performance and to increase the amount of harvested energy .for instance , if information signal is transmitted in the null space of the eavesdroppers channels , the eavesdroppers can not overhear any information .also , the power signal can be used to confuse the eavesdroppers , while still meeting the requirement of energy harvesting at the power receivers via adaptively adjusting the transmit beamforming . from an information - theoretic viewpoint ,the performance of phy - security is determined by the rate difference between the main channel from the transmitter to the legitimate receiver , and the wiretap channel from the transmitter to the eavesdropper .thus , if we can impair the intercepted signal , while causing minimal interference to the signal received at the legitimate receiver , it is likely to improve the secrecy performance .inspired by this idea , artificial noise is introduced into swipt networks . in particular, the power signal can be used as artificial noise , and thus there is no need to consume extra power for a dedicated artificial noise .moreover , it is likely to send the artificial noise from a friend jammer . especially in swipt, the jammer can first harvest energy from the power transmitter , and then transmits the jamming signal with the harvested energy .thus , the jammer does not need self power supply .the key in designing an artificial noise is to adjust the transmit direction in order to avoid the interference to an information receiver .for example , if full csi of the information receiver is available , it is possible to transmit artificial noise in the null space of the main channel .however , if the csi regarding the information receiver is imperfect , the artificial noise will leak into the main channel .thus , it is imperative to adjust the transmit direction , so as to achieve a good balance between the interference to the eavesdroppers and that to the information receiver . on the other hand , the dual use of artificial noise can facilitate efficient wireless power transfer and ensure communication securityspecifically , artificial noise is able to degrade the channels between the transmitter and the potential eavesdroppers and acts as an energy source for power harvesting .thus , the design of artificial noise should leverage the tradeoff between confusing the power receivers when they perform information decoding and increasing the amount of harvested power at the power receivers . in swipt communication systems, there are a variety of available resources , i.e. , antenna , time , frequency , and power . 
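the null-space construction mentioned above is one concrete way of spending the spatial (antenna) resource; the short sketch below (with randomly drawn, purely illustrative channels) verifies that artificial noise projected onto the null space of the main channel leaves the information receiver untouched while still reaching the power receiver, where it jams a would-be eavesdropper and can be harvested.

```python
import numpy as np

rng = np.random.default_rng(3)

# null-space artificial noise for a small multi-antenna transmitter
# (the channel realizations below are purely illustrative assumptions).
M = 4
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # main channel (info receiver)
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # power receiver / potential eavesdropper

# orthogonal projector onto the null space of the main channel h
P_null = np.eye(M) - np.outer(h, h.conj()) / np.linalg.norm(h) ** 2

# isotropic noise pushed into null(h) and transmitted as artificial noise
samples = (rng.standard_normal((M, 10_000)) + 1j * rng.standard_normal((M, 10_000))) / np.sqrt(2)
an = P_null @ samples

leak_to_info = np.mean(np.abs(h.conj() @ an) ** 2)   # interference at the info receiver
seen_by_pr   = np.mean(np.abs(g.conj() @ an) ** 2)   # jamming / harvestable power at the power receiver

print(f"artificial-noise power at the information receiver: {leak_to_info:.2e}  (numerically zero)")
print(f"artificial-noise power at the power receiver:       {seen_by_pr:.2e}")
```

with imperfect csi the projector has to be built from a channel estimate rather than the true channel, so some artificial-noise power leaks into the main channel, which is the robustness issue mentioned above.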
in practice ,resource allocation in swipt plays an important role for performance enhancement .however , it is not a straight forward task in performing efficient resource allocation .this is because resource allocation could affect the performance of information receivers , power receivers , and eavesdroppers simultaneously in swipt systems .in fact , the coupling between the aforementioned performance metrics complicates the algorithm design .additionally , the performance of resource allocation depends heavily on the csi at the transmitter .if there is full csi available , it is possible to perform optimal resource allocation in global sense . however , if csi is unavailable , only a fixed resource allocation scheme can be adopted . in most cases, the transmitter can only get partial and imperfect csi , and thus it is difficult to design an optimal resource allocation scheme to realize both secure information transmission and efficient power transfer simultaneously .path loss is a detrimental effect on both information transmission and power transfer . by applying cooperative relaying techniques, it is possible to shorten the distance of signal propagation , and thus enhance the performance of swipt systems .especially , multiple - relay cooperation can achieve a better performance .this is because the information receiver , the power receiver , and the eavesdropper are geographically separated .then , these relays can play different roles according to their locations .for instance , the relays close to the information receiver forward the messages from the transmitter , the ones close to the power receiver send the power signals , and the ones close to the eavesdropper transmit the artificial noise . through relay selection , it is possible to effectively improve the performance of swipt .however , cooperative relaying requires information exchanges between multiple relays , leading to a high overhead .as mentioned above , the design of swipt consists of two conflicting objectives , including secure information transmission and efficient power transfer . by exploiting conventional physical layer techniques , e.g. , multi - antenna technique , artificial noise , resource allocation , and relay selection ,the performance gain is limited , especially under some adverse conditions , such as short - distance and cooperative eavesdropping .thus , more effective techniques are needed to enhance the performance .recently , massive mimo techniques were introduced to improve the performance of swipt significantly .on one hand , by exploiting its large array gain , information and power signal beams can be steered towards the information receivers and power receivers more accurately , respectively .on the other hand , due to high - resolution of spatial beamformers , the information leakage to unintended receivers can be reduced substantially .theoretically , if the number of antenna is sufficiently large , the leakage information is expected to be negligible .in addition , massive mimo techniques simplify the associated signal processing for information and power transmission .even with a low - complexity transmission scheme , e.g. maximum ratio transmission ( mrt ) , it is able to achieve an asymptotically optimal performance .more importantly , due to channel hardening in massive mimo systems , the performance analysis and optimization become simpler . 
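a compact numerical sketch of the mrt array gain invoked above is given below (i.i.d. rayleigh channels are an illustrative assumption; this is not the simulation setup of fig. [fig6]).

```python
import numpy as np

rng = np.random.default_rng(7)

# mrt beamforming gain versus leakage as the array grows
# (i.i.d. rayleigh fading is an illustrative assumption).
def mrt_gain_and_leakage(N, trials=2000):
    h = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2)  # intended user
    g = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2)  # unintended receiver
    w = h / np.linalg.norm(h, axis=1, keepdims=True)     # mrt beamformer, unit transmit power
    gain = np.mean(np.abs(np.sum(h.conj() * w, axis=1)) ** 2)     # grows roughly like N
    leak = np.mean(np.abs(np.sum(g.conj() * w, axis=1)) ** 2)     # stays of order one
    return gain, leak

for N in (4, 16, 64, 256):
    gain, leak = mrt_gain_and_leakage(N)
    print(f"antennas {N:4d}:  gain to intended receiver {gain:7.1f},  leakage to unintended receiver {leak:5.2f}")
```

the gain towards the intended receiver grows roughly linearly with the number of antennas while the leakage towards an unintended receiver does not, which is the mechanism by which large arrays sharpen both the information and the power beams.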
inwhat follows , we show the performance gain of using massive mimo in swipt through a simple example .[ fig6 ] depicts an example of swipt systems in a single cell network with perfect csi .we show the average total harvested power ( dbm ) versus the minimum required secrecy rate ( bit / s / hz ) per information receiver .in particular , a transmitter equipped with antennas is serving different number of single - antenna information receivers and two multiple - antenna power receivers , each of which is equipped with receive antennas . as can be observed , with an optimal design of information signals and power signals , the trade - off region of the average total harvested power andthe minimum required secrecy rate increases significantly with , in particular with massive mimo , i.e. , . besides , the average total harvested power decreases with the number of information receivers , which illustrates the conflicting system design objectives of communication security and total harvested power in swipt networks .-factor of 6 db . ]swipt will continue to be a critical research topic and there are a variety of challenging issues as mentioned earlier . in what follows , we list some future research directions on swipt networks .as expected , the csi at a multiple antennas transmitter has a great impact on the performance of swipt .if there is global and perfect csi , it is likely to design the optimal transmit beamformers to achieve the goals for both secure information transmission and efficient power transfer .however , it is not a trivial task to collect the csi in swipt networks .first , the eavesdropper csi may be unavailable , since the external eavesdropper is usually passive and well hidden .second , the csi of the power receivers is difficult to obtain , because the power receivers may not have a baseband circuit .third , the csi corresponding to the information receiver may be imperfect , as it is required to be conveyed from the receiver to the transmitter . in this context , it is necessary to design robust secure beamforming for swipt with partial and imperfect csi , , .another concern for secure beamforming is the multiple system design objectives .recently , driven by environmental concerns , energy efficiency ( ee ) has become an important metric for evaluating the performance of wireless communication systems .however , with swipt , the ee of wireless power transfer has the same importance as the ee of secrecy information transmission . as a result ,multiple conflicting system design objectives arise naturally in system design process , and the application of the solutions for single - objective optimization to multi - objective optimization problems in energy - efficient swipt networks may not lead to a satisfactory performance .therefore , the concept of multi - objective optimization or vector optimization should be adopted for handling conflicting objective functions . yetagain , these factors complicate the design of beamforming for swipt . in swipt systems , there are multiple distributed nodes with different types and tasks , such as information - orient nodes , power - orient nodes , and eavesdropping nodes . in order to design an optimal transmission scheme ,it is imperative to collect the knowledge of all the nodes , which results in a significant increment in signalling overhead .thus , transmission schemes with low signalling overhead and distributed structure are the key to unlock the potential of multiple nodes in swipt systems . 
the secrecy performance of an information receiver is affected by its counterparts , including eavesdroppers , power receivers , and other information receivers .for example , eavesdroppers may interfere with the information receiver by sending an artificial noise , so as to intercept more information .thus , it is imperative to enhance the capability of environment sensing , i.e. , sensing the information and behaviours of the eavesdroppers , which is helpful to improve the performance of swipt .this article provided a review on swipt from both theoretical and technical perspectives .first , we surveyed various swipt scenarios , with an emphasis on revealing the challenging issues .then , we discussed a variety of effective phy - security techniques , which can effectively improve the performance of swipt .in addition , we proposed to use massive mimo techniques to further enhance swipt and showed the performance gain through numerical simulations .finally , some potential research directions were identified .r. feng , q. li , q. zhang , and j. qin , robust secure transmission in miso simultaneous wireless information and power transfer system , " _ ieee trans .400 - 405 , jan . 2015 .d. w. k. ng , e. s. lo , and r. schober , robust beamforming for secure communication in systems with wireless information and power transfer , " _ ieee trans . wireless commun ._ , vol . 13 , no . 8 , pp4599 - 4615 , aug .2014 .d. w. k. ng , e. s. lo , and r. schober , multi - objective resource allocation for secure communication in cognitive radio networks with wireless information and power transfer , " _ ieee trans . veh .technol . _ , 2015 .[ doi ] : 10.1109/tvt.2015.2436334 .q. shi , w. xu , j. wu , e. song , and y. wang , secure beamforming for mimo broadcasting with wireless information and power transfer , " _ ieee trans .wireless commun .5 , pp . 2841 - 2853 , may 2015 .m. tian , x. huang , q. zhang , j. qin , robust an - aided secure transmission scheme in miso channels with simultaneous wireless information and power transfer , " _ ieee signal process .22 , no . 6 , pp .723 - 727 , jun .2015 .b. m. hochwald , t. l. marzetta , and v. tarokh , multiple - antenna channel hardening and its implications for rate - feedback and scheduling , " _ ieee trans .inf . theory _ ,1893 - 1909 , sept . 2004
|
wireless information and power transfer ( wipt ) enables more sustainable and resilient communications owing to the fact that it avoids frequent battery charging and replacement . however , it also suffers from possible information interception due to the open nature of wireless channels . compared to traditional secure communications , secrecy wireless information and power transfer ( swipt ) carries several distinct characteristics . on one hand , wireless power transfer may increase the vulnerability of eavesdropping , since a power receiver , as a potential eavesdropper , usually has a shorter access distance than an information receiver . on the other hand , wireless power transfer can be exploited to enhance wireless security . this article reviews the security issues in various swipt scenarios , with an emphasis on revealing the corresponding challenges and opportunities for implementing swipt . furthermore , we provide a survey on a variety of physical layer security techniques to improve secrecy performance . in particular , we propose to use massive multiple - input multiple - output ( mimo ) techniques to enhance power transfer efficiency and secure information transmission simultaneously . finally , we discuss several potential research directions to further enhance the security in swipt systems . keywords : wireless information and power transfer ; physical layer security ; massive mimo ; secrecy performance .
|
two - player zero - sum games play a fundamental role in game theory because their analysis is straightforward . the min - max theorem of von neumann guarantees a unique value of the game . figure [ figure simple game ] shows the payoff matrix for a basic game . since neither of the pure strategies for player a dominates the other , a mixed strategy is optimal . player a will choose action 0 with probability 1/4 and action 1 with probability 3/4 . by doing this , the expected payoff for player a is 3/4 no matter which strategy player b chooses . ( figure [ figure simple game ] : a two - by - two payoff matrix with actions 0 and 1 for player a and player b . ) in a bayesian game , the payoff matrix , which we represent by over the domain of pure strategies , is a random quantity . we can model this as a game with a random state that determines the payoffs , referred to in the literature as the `` type . '' the value of such a game depends on the information known to the players . if neither player knows the state ( aside from its distribution ) , then the value of the game can be derived from the expected value of the payoff matrix . additionally , if both players know the state then the value can be derived from the payoff matrix associated with every instance of the state and averaged . more interesting cases occur when only one player knows the state or when both players have incomplete information about it . consider a situation where different functions of the state are known to both of the players of the game . these functions can be represented by a partition over the support of the state known as the information structure , as illustrated in figure [ figure information structure ] . it has been established that games of this form can be solved by expanding the space of pure strategies to include strategies that depend on the available information . the min - max theorem still holds . samples of inquiries into the value of information can be found in the publications listed in the references . while information can only increase a player s optimal score in a game , not all information structures are equal , even if they contain the same quantity of information . for a given resolution , what is the optimal information structure ? ( figure [ figure information structure ] : the distribution of the state and its quantization into bins . ) in this work we consider a repeated game setting where the state of the game is i.i.d . and known non - causally to a helper who is assisting one of the players of the game . the helper can communicate with a rate limit of bits per iteration of the game . the opposing player may or may not know the state but is certainly aware of the conspiracy against him . even the protocol for communication is known ( or learned ) by the opposing player . we establish information theoretic lower - bounds to the optimal performance . we show through examples that the intuition provided by rate - distortion theory can be misleading in this setting . we illustrate in figure [ figure erasure game ] a two - player game with a binary , equally distributed state . player a has three pure strategies , and player b has only two . the matrix values represent the payoff player a receives for any given state and pair of actions ( pure strategies ) . as this is a zero - sum game , player b receives the negative of the payoff of player a .
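the optimal mixed strategy quoted above for the game of figure [ figure simple game ] can be checked with a small linear program . the sketch below ( python ) is illustrative only : the payoff entries are an assumption , chosen so that the stated solution ( probabilities 1/4 and 3/4 , value 3/4 ) comes out , since the figure itself is not reproduced here .

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[3.0, 0.0],
                  [0.0, 1.0]])          # assumed payoffs to player a (row player)

    m, n = A.shape
    # variables [p_1, ..., p_m, v]; maximize the guaranteed value v
    c = np.zeros(m + 1); c[-1] = -1.0   # linprog minimizes, so use -v
    # for every column j of player b: v - sum_i p_i * A[i, j] <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0   # probabilities sum to one
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m + [(None, None)])
    print("mixed strategy:", res.x[:m], "value:", res.x[-1])   # ~ [0.25 0.75], 0.75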
we concern ourselves with average payoff , averaged with respect to probabilities if mixed strategies are involved . ( figure [ figure erasure game ] : the payoff matrix of the erasure game ; the legible entries are 3 , 0 , 0 , 1 , the binary state values , and an action labelled e for player a , with axes labelled player a and player b . the caption notes that if player a does not know the state , e is the only choice , played with probability 1 . ) notice that player a will at all costs avoid choosing the pure strategy matched to one state when the actual state is the other , hence the label `` erasure game . '' the same consequences would result from a finite but greatly negative payoff in place of an unbounded one . so if player a is ignorant of the state , he has no option but to choose e with probability one . we can therefore deduce the value of the game for two of the four information structures in table [ table erasure game value ] . if player b does not know the state of the game , then on average a will get a payoff of 1/2 . if player b does know the state then he will choose the appropriate pure strategy , resulting in a payoff of zero . table [ table erasure game value ] ( value of the erasure game under varying information structures ) : neither player knows the state , 1/2 ; only player b knows the state , 0 ; both players know the state , 3/4 ; the case where only player a knows the state is discussed below . on the other hand , if player a knows the state while player b does not , then equilibrium will occur with player a choosing the state - matched action and scoring on average . finally , we can consider the case where they both know the state . as we observed in the game of figure [ figure simple game ] , player a will choose the mixed strategy consisting of one action with probability 1/4 and the other with probability 3/4 . player b will likewise mix with probabilities 1/4 and 3/4 . the resulting average payoff is 3/4 . in the `` erasure game '' the state is binary , so these four information structures mentioned exhaust all deterministic information structures . no intriguing optimization problem presents itself in only one iteration of the game . for example , we cannot ask , `` what is the best information structure for player a that has cardinality two ? '' there is only one choice of information structure . fortunately , a more graceful spectrum of information structures is available with vector quantization . consider a repeated - game setting of the erasure game where a helper observes the state of the game and communicates to player a over a rate - limited channel . both players know all past actions and states . additionally , the helper , and possibly player b , observe the state completely and non - causally . the helper and player a select a block length and arrange a protocol by which the helper will send bits describing the state to player a. player a will then play the game for the chosen number of iterations . player b has full knowledge of the protocol but does not actually see the message . we ask for the maximum average payoff that can be achieved for a given rate . in other words , what is the max - min value of this game as a function of the rate ? we begin with a simple case . as table [ table erasure game value ] shows , if player a and player b both know the state , the value of the game is 3/4 . but what if player a only learns of the state through communication from a helper , while player b observes the state directly ?
what rate of communication is needed for player a to still achieve an average payoff of 3/4 ? recall that the uniquely optimal strategy for player a is to choose one action with probability 1/4 and the other with probability 3/4 . we can use rate - distortion theory as inspiration for answering this question . rate - distortion theory prescribes a formula for finding the minimum description rate needed to reconstruct a sequence of observations with limited average distortion . the procedure is to find the correlation of the source and reconstruction that satisfies the distortion constraint and results in the lowest mutual information . by describing a random source at a rate greater than the mutual information , it is possible to assure with high probability that the reproduction will have the desired correlation with the source as measured by first order statistics . in the case of the erasure game where player b knows the state , suppose we decide to encode the state for player a using a rate - distortion - like code and an erasure test - channel such that the action sequence matches the state roughly 1/4 of the time and is the safe action e roughly 3/4 of the time . the rate required is the corresponding mutual information in bits per iteration . unfortunately , this will not result in a good strategy for player a. the encoding schemes that arise in rate - distortion theory are deterministic . the action sequence is a deterministic function of the state sequence . since player b observes the state sequence , he will deduce the actions of player a and anticipate them every time . player a is effectively not playing a mixed strategy . the resulting payoff will be 0 . the insufficiency of rate - distortion - like codes is a concern even when player b does not know the state . after watching the actions of player a for roughly a fixed fraction of the block , it will be possible to deduce the entire action sequence . to a clever opponent who does not know the state , the actions will appear random and appropriately distributed for the beginning portion of the block , for a large enough block length ( this is related to results on the approximation of output statistics ) , and later in the block the opponent will be able to decode the sequence with high probability and anticipate every action . the bottom line is that rate - distortion - like codes place no emphasis on producing random actions . earlier work prescribes an encoding scheme for generating correlated random variables . the minimum description rate for the state sequence needed to produce a sequence of actions with a distribution that is arbitrarily close in total variation to the desired mixed strategy is wyner s common information . the resulting encoding scheme for producing this `` strong coordination '' between the state and the action uses randomized encoding and randomized decoding . figure [ figure encoding diagram ] illustrates that the encoder uses the message to specify the index of a sequence from a predefined codebook , just as in rate - distortion - like codes , but here the sequences do not represent reconstruction sequences . after the decoder identifies the sequence he produces the actions randomly as the output of a memoryless channel from that sequence to the actions .
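a toy version of the codebook construction in figure [ figure encoding diagram ] can be written down directly . the sketch below ( python ) is only illustrative : it replaces joint typicality with a crude empirical - frequency test , models the auxiliary channel and the action channel as binary symmetric channels , and uses arbitrary block length , rate and crossover parameters .

    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 64, 16                 # block length and message length (rate k/n)
    a, b = 0.35, 0.2              # assumed crossover of p(u|x) and of p(y|u)

    x = rng.integers(0, 2, n)                    # i.i.d. equiprobable state sequence
    codebook = rng.integers(0, 2, (2 ** k, n))   # shared codebook of u-sequences

    # encoder: among codewords whose empirical disagreement with x is near a,
    # pick one at random and send its index (a stand-in for joint typicality)
    frac = (codebook != x).mean(axis=1)
    near = np.where(np.abs(frac - a) <= 0.05)[0]
    idx = rng.choice(near) if len(near) else int(frac.argmin())

    # decoder: rebuild u from the index, then synthesize the memoryless channel
    u = codebook[idx]
    y = np.where(rng.random(n) < b, 1 - u, u)    # randomized action sequence

    print("index sent:", idx, " empirical p(y != x):", (y != x).mean())

because the final step is random even given the index , an opponent who decodes the message learns the codeword but still cannot predict the actions exactly , which is the point of the construction .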
in this way the codebook is separated from the action sequence , allowing more randomness to be injected into the actions . ( figure [ figure encoding diagram ] : the encoding used to allow an action sequence to be random even to an observer of the state . an auxiliary variable is chosen which separates the state and the action into a markov chain , and a codebook of sequences is agreed upon . the state sequence is a random realization . the encoder observes it and randomly chooses among jointly typical sequences from the codebook . after sending the index of the sequence to the decoder , an action sequence is generated randomly conditioned on the chosen sequence . if the codebook is populated densely enough , the actions will be memoryless and appropriately correlated with the state sequence . the required density of the codebook is a topic visited in the literature . ) the common information for a binary erasure channel can be found in the literature . thus , the minimum description rate needed for a helper to describe the state to player a in order to achieve the optimal average payoff in the erasure game when player b knows the state can be expressed in terms of the binary entropy function . just as the information structure in a bayesian game defines the set of pure strategies for each player , the description rate of the state that a helper is allowed to use also defines the set of block strategies allowed , which may have structure and memory from one game iteration to the next . we know from implications in prior work that the set of achievable stationary strategies is the set of conditional distributions whose required description rate does not exceed the available rate . however , stationary strategies are not the only strategies worth considering . a degenerate bayesian game is represented by the payoff matrices in figure [ figure degenerate game ] . in this game player b has only one pure strategy . player a is not really playing against an opponent but is simply maximizing the average value of a function of the state and action . in fact , the payoffs in this game are simply the negative hamming distortion between the state and the action ; therefore , the rate - value tradeoff for this game is a canonical example from rate - distortion theory . a payoff of minus the distortion is achievable at any rate of at least one minus the binary entropy of the distortion ( for distortion at most 1/2 ) . ( figure [ figure degenerate game ] : payoff matrices with entries 0 and -1 , indexed by the actions of player a and the binary state ; the rate - value tradeoff of this degenerate game can be cast as a rate - distortion problem , and randomized actions for player a are unnecessary . ) in this degenerate bayesian game the optimal strategy has structure and predictability . there is no adversary to compete with , so producing a random sequence of actions is unnecessary . these degenerate games simplify to rate - distortion problems and poignantly highlight situations where fully generating correlated random variables is not necessary . the state of a bayesian game should be encoded by a helper in such a way that allows a random mixed strategy to be correlated with the state , even if that strategy is not entirely unpredictable for the duration of the communication block . one way to achieve this is to follow the encoding procedure used to generate correlated random variables , as in figure [ figure encoding diagram ] , but use a smaller codebook . specifically , a codebook is constructed by first choosing a conditional distribution and generating sequences independently for each message index , i.i.d .
according to the chosen distribution . the encoder chooses randomly ( according to an appropriately prescribed distribution ) from all sequences that are jointly typical with the state and sends the index to the decoder . the decoder then constructs the sequence from the index and synthesizes a memoryless channel according to the chosen conditional distribution to produce an action sequence . if the rate is large enough then this would produce a memoryless strategy , meaning that the opponent could not use observations of the past actions and states to infer anything about future actions . however , by allowing smaller rates we also allow for situations where memoryless strategies are not of the essence . in order for the encoder to find at least one jointly typical sequence in the codebook with high probability , a minimum rate requirement is necessary . beyond that , any excess rate will serve to randomize the actions for a portion of the encoding block . at first the actions will appear random and correlated with the state . after observing a designated fraction of the block , the opponent will be able to deduce the index of the message using a channel decoder , revealing the future of the sequence , from which the future actions will be randomly generated . the transition from player b knowing nothing about the next action to the point where player b can decode with high probability becomes sharper as the block length grows . let the transition threshold be defined such that for the first fraction of iterations the actions are random according to the mixed strategy and for the final fraction of iterations the sequence is known with high probability , in the limit of a large block length . the threshold takes one value for the case where player b does not know the state sequence and another for the case where player b knows the state sequence . let us designate some notation to represent the score player a achieves in a game under different settings . let one quantity represent the minimum average payoff player a receives by playing a strategy when player b does not know the state . when player b does know the state , we use a superscript to indicate this and represent the minimum average payoff for player a accordingly . additionally , when an auxiliary random variable is involved in constructing the action and player b knows it , we represent the minimum average payoff for player a by two further quantities , depending on whether or not player b also knows the state . each of these values can be computed from the payoff matrix . [ theorem main ] if player b does not know the state sequence , an average payoff for player a is achievable with a state - information description rate satisfying the corresponding rate condition . if player b knows the state sequence , an average payoff for player a is likewise achievable with a state - information description rate satisfying the corresponding rate condition . the lower bound of theorem [ theorem main ] accommodates settings where full randomization is needed , such as optimal play in the erasure game when player b knows the state , and it also accommodates efficient communication for degenerate games where the payoff does not change when the opponent learns about the action . it is not clear , however , whether or not this gives the whole tradeoff for sub - optimal play in games where the communication limits do not allow for ideal randomization . for example , what is the value of the erasure game when player b knows the state sequence and the rate is too small for ideal randomization ( the relevant thresholds involve the binary entropy function ) ? if the bound in theorem [ theorem main ] is not tight , a couple of ideas come to mind for improvement . one is to use an encoding method that is not stationary but moves from one strategy to another as the opponent learns .
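the binary entropy function that appears in these rate expressions , and the classical binary rate - distortion function relevant to the degenerate game above , are easy to evaluate numerically . the snippet below ( python ) is a plain numerical helper , with clipping added only to avoid log ( 0 ) .

    import numpy as np

    def h2(p):
        """binary entropy in bits; h2(0) = h2(1) = 0 by convention."""
        p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def rate_distortion_binary(d):
        """minimum rate for expected hamming distortion d on an equiprobable binary source."""
        return np.maximum(1.0 - h2(d), 0.0)

    for d in (0.0, 0.05, 0.11, 0.25, 0.5):
        print(d, float(rate_distortion_binary(d)))   # the degenerate game's payoff at this rate is -d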
in game settings ,time - sharing must be analyzed carefully .the performance depends on whether the time - sharing is interleaved or not .a related idea is to use a layered encoding scheme , so that the opponent learns the message a little bit at a time .a possible layered approach follows . the helper who observes the state can send a message in layers to player a by selecting two auxiliary random variables and and an action such that form a markov chain .a codebook of sequences of size is generated from the i.i.d .distribution , and for each of these sequences a codebook of sequences of size is generated conditioned on .we let be small .the encoder first finds a sequence that is appropriately correlated with the state sequence and then chooses randomly from the sequences in the conditional codebook that are appropriately correlated with both and .the decoder constructs and from the message and synthesizes a memoryless channel to generate the action sequence from and . using this encoding scheme ,the block will be divided into three sections partitioned by and , where the knowledge of player b transitions sharply at these thresholds in the limit as becomes large .for the first iterations where the actions are random according to the mixed strategy .for the middle iterations where the opponent knows , which is correlated with , and for the final iterations where the opponent knows both and . in the settingwhere player b does not know the state sequence , and as long as . in the setting where player b knows the state sequence , and as long as .then this layered encoding scheme does not provide any benefit over theorem [ theorem main ] . and with either assumption about player b , if , the encoding scheme can be modified to increase . ]we omit the explicit lower bound that is obtained from this encoding scheme .it is a straightforward derivation and provides little additional insight .however , if a layered scheme proves to provide improvement over theorem [ theorem main ] , then the idea of adding additional layers with additional auxiliary variables comes to mind .each additional variable would introduce a new phase of performance in the game .this quickly becomes cumbersome .perhaps instead there is a smooth way of adjusting the strategy as the game proceeds and the opponent infers more about the communication .we have considered partial state information in a bayesian game from an optimization perspective .if a limited amount of state information is passed by a helper to one of the players in a two player zero - sum repeated bayesian game , how much can it increase the value of the game ?the description of the state can be used to correlate the actions in the game with the state . in some settings ,mutual information is the description rate needed to adequately correlate behavior .but in the adversarial setting of games , a rate - distortion - like compression of the state information at a rate equal to the mutual information results in behavior that is predictable by the opponent .on the other hand , wyner s common information has been shown to be the description rate needed to fully correlate actions in a completely unpredictable way . however , this can be more than necessary .we introduce a communication scheme that performs efficiently ( theorem [ theorem main ] ) in the two extreme cases , where memoryless randomization is essential and where it is irrelevant . 
although the communication is based on i.i.d . codebooks , the performance in the game changes dramatically mid - way through the communication block as the opponent infers the compressed message . the non - stationary performance of a stationary encoding scheme introduces new challenges in the quest for efficient compression .
j. von neumann , `` zur theorie der gesellschaftsspiele , '' _ mathematische annalen _ , 100 ( 1928 ) : 295 - 320 .
o. gossner , `` ability and knowledge , '' to appear in _ games and economic behavior _ .
i. gilboa and e. lehrer , `` the value of information - an axiomatic approach , '' _ journal of mathematical economics _ , 20 ( 1991 ) : 443 - 459 .
e. shmaya , `` the value of information structures in zero - sum games with lack of information on one side , '' _ international journal of game theory _ , 34 ( 2006 ) : 155 - 165 .
e. lehrer and d. rosenberg , `` what restrictions do bayesian games impose on the value of information ? '' _ journal of mathematical economics _ , 42 ( 2006 ) : 343 - 357 .
p. cuff , `` communication requirements for generating correlated random variables , '' isit 2008 , toronto .
a. d. wyner , `` the common information of two dependent random variables , '' _ ieee trans . on info . theory _ , vol . it-21 , no . 2 , march 1975 .
p. cuff , h. permuter , and t. cover , `` coordination capacity , '' submitted to _ ieee trans . on info . theory _ , 2009 .
t. han and s. verdu , `` approximation theory of output statistics , '' _ ieee trans . on info . theory _ , vol . 39 , no . 3 , may 1993 .
|
two - player zero - sum repeated games are well understood . computing the value of such a game is straightforward . additionally , if the payoffs are dependent on a random state of the game known to one , both , or neither of the players , the resulting value of the game has been analyzed under the framework of bayesian games . this investigation considers the optimal performance in a game when a helper is transmitting state information to one of the players . encoding information for an adversarial setting ( game ) requires a different result than rate - distortion theory provides . game theory has accentuated the importance of randomization ( mixed strategy ) , which does not find a significant role in most communication modems and source coding codecs . higher rates of communication , used in the right way , allow the message to include the necessary random component useful in games .
|
there is significant value in the ability to associate natural language descriptions with images . describing the contents of images is useful for automated image captioning and conversely , the ability to retrieve images based on natural language queries has immediate image search applications . in particular , in this work we are interested in training a model on a set of images and their associated natural language descriptions such that we can later rank a fixed set of withheld sentences given an image query , and vice versa . this task is challenging because it requires detailed understanding of the content of images , sentences and their inter - modal correspondence . consider an example sentence query , such as _ a dog with a tennis ball is swimming in murky water _ ( figure [ fig : pull ] ) . in order to successfully retrieve a corresponding image , we must accurately identify all entities , attributes and relationships present in the sentence and ground them appropriately to a complex visual scene . our primary contribution is in formulating a structured , max - margin objective for a deep neural network that learns to embed both visual and language data into a common , multimodal space . unlike previous work that embeds images and sentences , our model breaks down and embeds fragments of images ( objects ) and fragments of sentences ( dependency tree relations ) in a common embedding space and explicitly reasons about their latent , inter - modal correspondences . reasoning on the level of fragments allows us to impose a new fragment - level loss function that complements a traditional sentence - image ranking loss . extensive empirical evaluation validates our approach . in particular , we report dramatic improvements over state of the art methods on image - sentence retrieval tasks on pascal1k , flickr8k and flickr30k datasets . we plan to make our code publicly available . * image annotation and image search . * there is a growing body of work that associates images and sentences . some approaches focus on describing the contents of images , formulated either as a task of mapping images to a fixed set of sentences written by people , or as a task of automatically generating novel captions . more closely related to our approach are methods that naturally allow bi - directional mapping between the two modalities . socher and fei - fei and hodosh et al . use kernel canonical correlation analysis to align images and sentences , but their method is not easily scalable since it relies on computing kernels quadratic in the number of images and sentences . farhadi et al . learn a common meaning space , but their method is limited to representing both images and sentences with a single triplet of ( object , action , scene ) . zitnick et al . use a conditional random field to reason about complex relationships of cartoon scenes and their natural language descriptions . * multimodal representation learning . * our approach falls into a general category of learning from multi - modal data . several probabilistic models for representing joint multimodal probability distributions over images and sentences have been developed , using deep boltzmann machines , log - bilinear models , and topic models . ngiam et al . described an autoencoder that learns audio - video representations through a shared bottleneck layer . more closely related to our task and approach is the work of frome et al .
, who introduced a visual semantic embedding model that learns to map images and words to a common semantic embedding with a ranking cost . adopting a similar approach , socher et al . described a dependency tree recursive neural network that puts entire sentences into correspondence with visual data . however , these methods reason about the image only on global level , using a single , fixed - sized representation from a top layer of a convolutional neural network as a description for the entire image , whereas our model reasons explicitly about objects that make up a complex scene . * neural representations for images and natural language . *our model is a neural network that is connected to image pixels on one side and raw 1-of - k word representations on the other .there have been multiple approaches for learning neural representations in these data domains . in computer vision ,convolutional neural networks ( cnns ) have recently been shown to learn powerful image representations that support state of the art image classification and object detection . in language domain ,several neural network models have been proposed to learn word / n - gram representations , sentence representations and paragraph / document representations .* overview of learning and inference . *our task is to retrieve relevant images given a sentence query , and conversely , relevant sentences given an image query .we will train our model on a training set of images and corresponding sentences that describe their content ( figure [ fig : teaser ] ) .given this set of correspondences , we train the weights of a neural network to output a high score when a compatible image - sentence pair is fed through the network , and low score otherwise . once the training is complete , all training data is discarded and the network is evaluated on a withheld set of testing images and sentences .the evaluation will score all image - sentence pairs , sort images / sentences in order of decreasing score and record the location of a ground truth result in the list .* fragment embeddings . *our core insight is that images are complex structures that are made up of multiple interacting entities that the sentences make explicit references to .we capture this intuition directly in our model by breaking down both images and sentences into fragments and reason about their ( latent ) alignment .in particular , we propose to detect objects as image fragments and use sentence dependency tree relations as sentence fragments ( figure [ fig : teaser ] ). * fragment - level objective . * in previous related work ,neural networks embed images and sentences into a common space and the parameters are trained such that true image - sentence pairs have an inner product ( interpreted as a score ) higher than false image - sentence pairs by a margin . in our approach , we instead embed the image and sentence fragments , and compute the image - sentence score as a fixed function of the scores of their fragments .thus , in addition to the ranking loss seen in previous work ( we refer to this as the global ranking objective ) , we will add a second , stronger fragment alignment objective .we will show that these objectives provide complementary information to the network .we first describe the neural networks that compute the image and sentence fragment embeddings .then we discuss the objective function , which is composed of the two aforementioned objectives . ) . 
( figure [ fig : teaser ] , right : dependency tree relations in the sentence are embedded ( section [ sec : sent ] ) . our model interprets inner products ( shown as boxes ) between fragments as a similarity score . the alignment ( shaded boxes ) is latent and inferred by our model ( section [ sec : fao ] ) . the image - sentence similarity is computed as a fixed function of the pairwise fragment scores . ) [ sec : sent ] we would like to extract and represent the set of visually identifiable entities described in a sentence . for instance , using the example in figure [ fig : teaser ] , we would like to identify the entities ( dog , child ) and characterise their attributes ( black , young ) and their pairwise interactions ( chasing ) . inspired by previous work we observe that a dependency tree of a sentence provides a rich set of typed relationships that can serve this purpose more effectively than individual words or bigrams . we discard the tree structure in favor of a simpler model and interpret each relation ( edge ) as an individual sentence fragment ( figure [ fig : teaser ] , right shows 5 example dependency relations ) . thus , we represent every word using a 1-of - k encoding vector over a dictionary of 400,000 words and map every dependency triplet into the embedding space with a relation - specific affine map followed by a nonlinearity , of the form f ( w_r [ w_e w_1 ; w_e w_2 ] + b_r ) . here , w_e is a matrix that encodes a 1-of - k vector into a d - dimensional word vector representation ( we use d = 200 ) . we fix w_e to weights obtained through an unsupervised objective described in huang et al . note that every relation has its own set of weights w_r and biases b_r . we fix the element - wise nonlinearity f to be the rectified linear unit which computes f ( x ) = max ( 0 , x ) . the dimensionality of the fragment embedding is cross - validated . [ sec : img ] similar to sentences , we wish to extract and describe the set of entities that images are composed of . inspired by prior work , as a modeling assumption we observe that the subject of most sentence descriptions is the attributes of objects and their context in a scene . this naturally motivates the use of objects and the global context as the fragments of an image . in particular , we follow girshick et al . and detect objects in every image with a region convolutional neural network ( rcnn ) . the cnn is pre - trained on imagenet and finetuned on the 200 classes of the imagenet detection challenge . we use the top 19 detected locations and the entire image as the image fragments and compute the embedding vectors from the pixels inside each bounding box as a linear map of the cnn activations , v = w_m cnn ( i_b ) , where cnn ( i_b ) takes the image inside a given bounding box and returns the 4096-dimensional activations of the fully connected layer immediately before the classifier . it might be possible to initialize some of the weights in w_m with the parameters in the cnn classifier layer , but we choose to discard these weights after the initial object detection step for simplicity . the cnn architecture is identical to the one described in girshick et al . it contains approximately 60 million parameters and closely resembles the architecture of krizhevsky et al . [ sec : embed ] we are now ready to formulate the objective function . recall that we are given a training set of images and corresponding sentences . in the previous sections we described parameterized functions that map every sentence and image to sets of fragment vectors , respectively .
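a minimal numpy sketch of these two fragment embeddings is given below . the dimensions match the numbers quoted in the text ( 200 - d word vectors , 4096 - d cnn activations ) , but the random features , the 1000 - d embedding size , the single relation type and the exact form of the maps are assumptions made only so the snippet runs .

    import numpy as np

    rng = np.random.default_rng(0)
    d_word, d_cnn, d_embed = 200, 4096, 1000

    def relu(x):
        return np.maximum(x, 0.0)

    # every dependency relation type r gets its own weights and bias
    W_r = {"amod": rng.standard_normal((d_embed, 2 * d_word)) * 0.01}
    b_r = {"amod": np.zeros(d_embed)}

    def sentence_fragment(relation, w1, w2):
        """embed one dependency triplet (relation, word1 vector, word2 vector)."""
        return relu(W_r[relation] @ np.concatenate([w1, w2]) + b_r[relation])

    W_m = rng.standard_normal((d_embed, d_cnn)) * 0.01

    def image_fragment(cnn_activations):
        """embed the 4096-d cnn activations of one detected bounding box."""
        return W_m @ cnn_activations

    s = sentence_fragment("amod", rng.standard_normal(d_word), rng.standard_normal(d_word))
    v = image_fragment(rng.standard_normal(d_cnn))
    print("fragment similarity (inner product):", float(v @ s))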
as shown in figure [ fig : teaser ] , our model interprets the inner product between an image fragment and a sentence fragment as a similarity score . the similarity score for any image - sentence pair will in turn be computed as a fixed function of their pairwise fragment scores . intuitively , multiple matching fragments will give rise to a high image - sentence score . we are motivated by two criteria in designing the objective function . first , the image - sentence similarities should be consistent with the ground truth correspondences . that is , corresponding image - sentence pairs should have a higher score than all other image - sentence pairs . this will be enforced by the * global ranking objective * . second , we introduce a * fragment alignment objective * that learns the appearance of all sentence fragments ( such as `` black dog '' ) in the visual domain . our full objective is the weighted sum of these two objectives and a regularization term , where the regularization strength and the relative weighting are hyperparameters that we cross - validate . we now describe both objectives in more detail . [ sec : fao ] the fragment alignment objective encodes the intuition that if a sentence contains a fragment ( e.g. `` blue ball '' , figure [ fig : img ] ) , at least one of the boxes in the corresponding image should have a high score with this fragment , while all the other boxes in all the other images that have no mention of `` blue ball '' should have a low score . this assumption can be violated in multiple ways : a triplet may not refer to anything visually identifiable in the image . the box that the triplet refers to may not be detected by the rcnn . lastly , other images may contain the described visual concept but its mention may be omitted in the associated sentence descriptions . nonetheless , the assumption is still satisfied in many cases and can be used to formulate a cost function . consider an ( incomplete ) fragment alignment objective that assumes a dense alignment between every corresponding image and sentence fragments . here , the sum is over all pairs of image and sentence fragments in the training set . the quantity can be interpreted as the alignment score of a visual fragment and a sentence fragment . in this incomplete objective , we define the label as + 1 if the fragments occur together in a corresponding image - sentence pair , and -1 otherwise . the constants normalize the objective with respect to the number of positive and negative pairs . intuitively , the objective encourages scores in red regions of figure [ fig : img ] to be less than -1 and scores along the block diagonal ( green and yellow ) to be more than + 1 . * multiple instance learning extension . * the problem with the objective is that it assumes dense alignment between all pairs of fragments in every corresponding image - sentence pair . however , this is hardly ever the case . for example , in figure [ fig : img ] , the `` boy playing '' triplet refers to only one of the three detections . we now describe a multiple instance learning ( mil ) extension of the objective that attempts to infer the latent alignment between fragments in corresponding image - sentence pairs . concretely , for every triplet we put image fragments in the associated image into a positive bag , while image fragments in every other image become negative examples . our precise formulation is inspired by the _ mi - svm _ formulation , which is a simple and natural extension of a support vector machine to a multiple instance learning setting .
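the dense ( incomplete ) version of the fragment alignment objective described above can be sketched compactly . the snippet below ( python ) uses a hinge penalty pushing positive pairs above + 1 and negative pairs below -1 , with the normalization constants taken simply as the counts of positive and negative pairs ; it is an illustration of the stated intuition rather than the article 's exact implementation .

    import numpy as np

    def fragment_alignment_loss(V, S, y):
        """V: (n_image_fragments, d) embeddings, S: (n_sentence_fragments, d) embeddings,
        y: matrix in {-1, +1}, +1 when the two fragments co-occur in a
        corresponding image-sentence pair."""
        scores = V @ S.T
        hinge = np.maximum(0.0, 1.0 - y * scores)
        n_pos, n_neg = max((y > 0).sum(), 1), max((y < 0).sum(), 1)
        return hinge[y > 0].sum() / n_pos + hinge[y < 0].sum() / n_neg

the multiple instance learning extension keeps the same hinge but treats the labels inside each positive bag as latent , relabelling at least one box per sentence fragment as positive , as described next .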
instead of treating the latent labels as constants , we minimize over them by wrapping equation [ eq : fobj ] accordingly . here , we define the positive bag to be the set of image fragments in the corresponding image for each sentence fragment , and index functions return the index of the image and sentence that the fragments belong to . note that the inequality simply states that at least one of the labels should be positive for every sentence fragment ( i.e. at least one green box in every column in figure [ fig : img ] ) . this objective can not be solved efficiently but a commonly used heuristic is to set the labels according to the sign of the current scores . if the constraint is not satisfied for any positive bag ( i.e. all scores were below zero ) , the highest - scoring item in the positive bag is set to have a positive label . recall that the global ranking objective ensures that the computed image - sentence similarities are consistent with the ground truth annotation . first , we define the image - sentence alignment score to be the average thresholded score of their pairwise fragment scores . here , one set contains the image fragments in the image and the other the sentence fragments in the sentence . we truncate scores at zero because in the _ mi - svm _ objective , scores greater than 0 are considered correct alignments and scores less than 0 are considered to be incorrect alignments ( i.e. false members of a positive bag ) . in practice , we found that it was helpful to add a smoothing term , since short sentences can otherwise have an advantage ( we found a small constant works well on validation experiments ) . the global ranking objective then takes the form of a margin - based ranking cost . here , the margin is a hyperparameter that we cross - validate . the objective stipulates that the score for true image - sentence pairs should be higher than the score of any mismatched pair , in either retrieval direction , by at least the margin . this concludes our discussion of the objective function . we note that our entire model ( including the cnn ) and objective functions are made up exclusively of dot products and thresholding at zero . we use stochastic gradient descent ( sgd ) with mini - batches of 100 , momentum of 0.9 and make 15 epochs through the training data . the learning rate is cross - validated and annealed by a fixed fraction for the last two epochs . since both multiple instance learning and cnn finetuning benefit from a good initialization , we run the first 10 epochs with the fragment alignment objective and cnn weights fixed . after 10 epochs , we switch to the full mil objective and begin finetuning the cnn . the word embedding matrix is kept fixed due to overfitting concerns . our implementation runs at approximately 1 second per batch on a standard cpu workstation . * datasets . * we evaluate our image - sentence retrieval performance on the pascal1k , flickr8k and flickr30k datasets . the datasets contain 1,000 , 8,000 and 30,000 images respectively and each image is annotated using amazon mechanical turk with 5 independent sentences . * sentence data preprocessing . * we did not explicitly filter , spellcheck or normalize any of the sentences for simplicity . we use the stanford corenlp parser to compute the dependency trees for every sentence . since there are many possible relations ( as many as hundreds ) , due to overfitting concerns and practical considerations we remove all relation types that occur less than 1% of the time in each dataset . in practice , this reduces the number of relations from 136 to 16 in pascal1k , from 170 to 17 in flickr8k , and from 212 to 21 in flickr30k . additionally , all words that are not found in our dictionary of 400,000 words are discarded .
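the image - sentence score and the ranking cost just described can be sketched as follows ( python ) . the exact placement of the smoothing term and the values of the smoothing constant and margin are elided in the text above , so the choices below are placeholders rather than the cross - validated ones .

    import numpy as np

    def image_sentence_score(img_frags, sent_frags, smoothing=5.0):
        """average thresholded fragment score between one image and one sentence."""
        pair = img_frags @ sent_frags.T              # (n_boxes, n_triplets)
        return np.maximum(pair, 0.0).sum() / (pair.size + smoothing)

    def global_ranking_loss(S, margin=1.0):
        """S[k, l]: score of image k with sentence l; (k, k) are the true pairs."""
        n = S.shape[0]
        loss = 0.0
        for k in range(n):
            for l in range(n):
                if l != k:
                    loss += max(0.0, S[k, l] - S[k, k] + margin)   # rank sentences for image k
                    loss += max(0.0, S[l, k] - S[k, k] + margin)   # rank images for sentence k
        return loss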
* image data preprocessing . * we use the caffe implementation of the imagenet detection rcnn model to detect objects in all images . on our machine with a tesla k40 gpu , the rcnn processes one image in approximately 25 seconds . we discard the predictions for the 200 imagenet detection classes and only keep the 4096-d activations of the fully connected layer immediately before the classifier at all of the top 19 detected locations and from the entire image . * evaluation protocol for bidirectional retrieval . * for pascal1k we follow socher et al . and use 800 images for training , 100 for validation and 100 for testing . for the flickr datasets we use 1,000 images for validation , 1,000 for testing and the rest for training ( consistent with prior work ) . we compute the dense image - sentence similarity between every image - sentence pair in the test set and rank images and sentences in order of decreasing score . for both image annotation and image search , we report the median rank of the closest ground truth result in the list , as well as recall@k , which computes the fraction of times the correct result was found among the top k items . when comparing to hodosh et al . we closely follow their evaluation protocol and only keep a subset of sentences out of the total ( we use the first sentence out of every group of 5 ) . * sdt - rnn . * socher et al . embed a fullframe cnn representation with the sentence representation from a semantic dependency tree recursive neural network ( sdt - rnn ) . their loss matches our global ranking objective . we requested the source code of socher et al . and substituted the flickr8k and flickr30k datasets . to better understand the benefits of using our detection cnn and for a more fair comparison we also train their method with our cnn features . since we have multiple objects per image , we average representations from all objects with detection confidence above a ( cross - validated ) threshold . we refer to the exact method of socher et al . with a single fullframe cnn as `` socher et al . '' , and to their method with our combined cnn features as `` sdt - rnn '' . we were not able to evaluate sdt - rnn on flickr30k because training times on the order of multiple days prevent us from satisfactorily cross - validating their hyperparameters . * devise . * the devise source code is not publicly available but their approach is a special case of our method with the following modifications : we use the average ( l2-normalized ) word vectors as a sentence fragment , the average cnn activation of all objects above a detection threshold ( as discussed in the case of sdt - rnn ) as an image fragment and only use the global ranking objective .
table [ fig : p1 ] , pascal1k results . the left block of columns is image annotation and the right block is image search ; each block reports recall at three thresholds ( r ) and the mean rank ( mean r ) . values marked with * were set in bold ( best ) in the original .
model | r | r | r | mean r | r | r | r | mean r
random ranking | 4.0 | 9.0 | 12.0 | 71.0 | 1.6 | 5.2 | 10.6 | 50.0
socher et al . | 23.0 | 45.0 | 63.0 | 16.9 | 16.4 | 46.6 | 65.6 | 12.5
kcca | 21.0 | 47.0 | 61.0 | 18.0 | 16.4 | 41.4 | 58.0 | 15.9
devise | 17.0 | 57.0 | 68.0 | 11.9 | 21.6 | 54.6 | 72.4 | 9.5
sdt - rnn | 25.0 | 56.0 | 70.0 | 13.4 | *25.4* | *65.2* | *84.4* | *7.0*
our model | *39.0* | *68.0* | *79.0* | *10.5* | 23.6 | *65.2* | 79.8 | 7.6
table [ fig : f8 ] , flickr8k results . the left block of columns is image annotation and the right block is image search ; each block reports recall at three thresholds ( r ) and the median rank ( med r ) . the last two rows follow the evaluation protocol of hodosh et al . values marked with * were set in bold ( best ) in the original .
model | r | r | r | med r | r | r | r | med r
random ranking | 0.1 | 0.6 | 1.1 | 631 | 0.1 | 0.5 | 1.0 | 500
socher et al . | 4.5 | 18.0 | 28.6 | 32 | 6.1 | 18.5 | 29.0 | 29
devise | 4.8 | 16.5 | 27.3 | 28 | 5.9 | 20.1 | 29.6 | 29
sdt - rnn | 6.0 | 22.7 | 34.0 | 23 | 6.6 | 21.6 | 31.7 | 25
fragment alignment objective | 7.2 | 21.9 | 31.8 | 25 | 5.9 | 20.0 | 30.3 | 26
global ranking objective | 5.8 | 21.8 | 34.8 | 20 | 7.5 | 23.4 | 35.0 | 21
fragment + global | 12.5 | 29.4 | 43.8 | 14 | 8.6 | 26.7 | 38.7 | 17
images : fullframe only | 5.9 | 19.2 | 27.3 | 34 | 5.2 | 17.6 | 26.5 | 32
sentences : bow | 9.1 | 25.9 | 40.7 | 17 | 6.9 | 22.4 | 34.0 | 23
sentences : bigrams | 8.7 | 28.5 | 41.0 | 16 | 8.5 | 25.2 | 37.0 | 20
our model ( + mil ) | *12.6* | *32.9* | *44.0* | *14* | *9.7* | *29.6* | *42.5* | *15*
hodosh et al . | 8.3 | 21.6 | 30.3 | 34 | 7.6 | 20.7 | 30.1 | 38
our model ( + mil ) | *9.3* | *24.9* | *37.4* | *21* | *8.8* | *27.9* | *41.3* | *17*
the quantitative results for pascal1k , flickr8k , and flickr30k are in tables [ fig : p1 ] , [ fig : f8 ] , and [ fig : f30 ] respectively . * our model outperforms previous methods . * our full method consistently and significantly outperforms previous methods on the flickr8k ( table [ fig : f8 ] ) and flickr30k ( table [ fig : f30 ] ) datasets . on pascal1k ( table [ fig : p1 ] ) the sdt - rnn appears to be competitive on image search . * fragment and global objectives are complementary . * as seen in tables [ fig : f8 ] and [ fig : f30 ] , both objectives perform well but there is a noticeable improvement when the two are combined , suggesting that the objectives bring complementary information to the cost function . note that the global objective consistently performs slightly better , possibly because it directly minimizes the evaluation criterion ( ranking cost ) , while the fragment alignment objective only does so indirectly . * extracting object representations is important . * using only the global scene - level cnn representation as a single fragment for every image leads to a consistent drop in performance ( table [ fig : f8 ] ) , suggesting that a single fullframe cnn alone is inadequate in effectively representing the images . * dependency tree relations outperform bow / bigram representations . * we compare to a simpler bag of words ( bow ) baseline to understand the contribution of dependency relations . in the bow baseline we iterate over words instead of dependency triplets when creating bags of sentence fragments ( the set in equation [ eq : s ] ) . as can be seen in table [ fig : f8 ] , this leads to a consistent drop in performance . this drop could be attributed to a difference between using one word or two words at a time , so we also compare to a bigram baseline where the words in equation [ eq : s ] refer to consecutive words in a sentence , not nodes that share an edge in the dependency tree . again , we observe a consistent performance drop , which suggests that the dependency relations provide useful structure that the neural network takes advantage of . * finetuning the cnn helps on flickr30k . *
our end - to - end neural network approach allows us to backpropagate gradients all the way down to raw data ( pixels or 1-of - k word encodings ) . in particular , we observed additional improvements on the flickr30k dataset ( table [ fig : f30 ] ) when we finetune the cnn . we were not able to improve the validation performance on the pascal1k and flickr8k datasets and suspect that there is an insufficient amount of training data .
table [ fig : f30 ] , flickr30k results . the left block of columns is image annotation and the right block is image search ; each block reports recall at three thresholds ( r ) and the median rank ( med r ) . values marked with * were set in bold ( best ) in the original .
model | r | r | r | med r | r | r | r | med r
random ranking | 0.1 | 0.6 | 1.1 | 631 | 0.1 | 0.5 | 1.0 | 500
devise | 4.5 | 18.1 | 29.2 | 26 | 6.7 | 21.9 | 32.7 | 25
fragment alignment objective | 11 | 28.7 | 39.3 | 18 | 7.6 | 23.8 | 34.5 | 22
global ranking objective | 11.5 | 33.2 | 44.9 | 14 | 8.8 | 27.6 | 38.4 | 17
fragment + global | 12.0 | 37.1 | 50.0 | 10 | 9.9 | 30.5 | 43.2 | 14
our model ( + mil ) | 14.2 | 37.7 | 51.3 | 10 | 10.2 | 30.8 | 44.2 | 14
our model + finetune cnn | *16.4* | *40.2* | *54.7* | *8* | *10.3* | *31.4* | *44.5* | *13*
* interpretable predictions . * we show some example sentence retrieval results in figure [ fig : f8r ] . the alignment in our model is explicitly inferred on the fragment level , which allows us to interpret the scores between images and sentences . for instance , in the last image it is apparent that the model retrieved the top sentence because it erroneously associated a mention of a blue person to the blue flag on the bottom right of the image . * fragment alignment objective trains attribute detectors . * the detection cnn is trained to predict one of 200 imagenet detection classes , so it is not clear if the representation is powerful enough to support learning of more complex attributes of the objects or generalize to novel classes . to see whether our model successfully learns to predict sentence triplets , we fix a triplet vector and search for the highest scoring boxes in the test set . qualitative results shown in figure [ fig : tsearch ] suggest that the model is indeed capable of generalizing to more fine - grained subcategories ( such as `` black dog '' , `` soccer ball '' ) and to out - of - sample classes such as `` rocky terrain '' and `` jacket '' . * limitations . * our method has multiple limitations and failure cases . first , from a modeling perspective , sentences are only modeled as bags of relations . therefore , relations that belong to the same noun phrase can sometimes align to different objects . additionally , people frequently use phrases such as `` three children playing '' but the model is incapable of counting . moreover , the non - maximum suppression in the rcnn can sometimes detect , for example , multiple people inside one person . since the model does not take into account any spatial information associated with the detections , it is hard for it to tell the difference between two distinct people or spurious detections of one person . on the language side , there are many dependency relations that do not have a natural grounding in an image ( for example , `` each other '' , `` four people '' , etc . ) . compound relations and attributes can also become separated . for instance `` black and white dog '' is parsed as two relations ( conj , black , white ) and ( amod , white , dog ) .
while we have shown that the relations give rise to more powerful representations than words or bigrams , a more careful treatment of sentence fragments might be necessary for further progress .we addressed the problem of bidirectional retrieval of images and sentences .our neural network model learns a multi - modal embedding space for fragments of images and sentences and reasons about their latent , inter - modal alignment .reasoning on a finer level of image and sentence fragments allowed us to formulate a new fragment alignment objective that complements a more traditional global ranking objective term .we have shown that our model significantly improves the retrieval performance on image sentence retrieval tasks compared to previous work .our model also produces interpretable predictions . in future workwe hope to extend the model to support counting , reasoning about spatial positions of objects , and move beyond bags of fragments .
|
we introduce a model for bidirectional retrieval of images and sentences through a multi - modal embedding of visual and natural language data . unlike previous models that directly map images or sentences into a common embedding space , our model works on a finer level and embeds fragments of images ( objects ) and fragments of sentences ( typed dependency tree relations ) into a common space . in addition to a ranking objective seen in previous work , this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities . extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image - sentence retrieval tasks . additionally , our model provides interpretable predictions since the inferred inter - modal fragment alignment is explicit .
|
thermodynamics is the study of the flow of heat and the transformation of work into heat .our understanding of thermodynamics is largely confined to equilibrium states .linear irreversible thermodynamics is an extension of the 19th century concepts of equilibrium thermodynamics to systems that are sufficiently close to equilibrium that intensive thermodynamic variables can be approximated by the same functions of local state variables , as would be the case if the entire system was in complete thermodynamic equilibrium .moreover these traditional concepts are limited in application to large systems or averages over an ensemble of states , referred to as the thermodynamic limit " .inventors and engineers endeavour to scale down machines and devices to nanometer sizes for a wide range of technological purposes .however , there is a fundamental limit to miniaturisation since small engines are _ not _ simply re - scaled versions of their larger counterparts .if the work performed during the duty cycle of any machine is comparable to thermal energy per degree of freedom , then one can expect that the machine will operate in reverse " over short time scales .that is , heat energy from the surroundings will be converted into useful work allowing the engine to run backwards . for larger engines , we would describe this as a violation of the second law of thermodynamics , as entropy is consumed rather than generated . until recently ,this received little attention in the nanotechnology literature , as there was no theory capable of describing the probability of entropy consumption in such small engines .in the last fifteen years , several fluctuation theorems have been proposed that revolutionise our understanding and use of thermodynamics .firstly these new theorems lift the requirement of the thermodynamic limit .this allows thermodynamic concepts to be applied to finite , even small systems .secondly , these new theorems can be applied to systems that are arbitrarily far from equilibrium .thirdly for the first time , these theorems explain how macroscopic irreversibility appears naturally in systems that obey time reversible microscopic dynamics .resolution of the loschmidt ( irreversibility ) paradox had defied our best efforts for more than 100 years .one of these fluctuation theorems , the evans - searles fluctuation theorem ( evans - searles ft ) , results in a generalisation of the second law of thermodynamics so that it applies to small systems , including those that evolve far from equilibrium .another , the crooks fluctuation theorem ( crooks ft ) provides a method of predicting equilibrium free energy difference from experimental information taken from nonequilibrium paths that connect two equilibrium states .this ft can be used to derive the well known jarzynski equality , which expresses the free energy difference between two equilibrium states in terms of an average over _ irreversible _ paths .both fts are at odds with a traditional understanding of 19th century thermodynamics .nevertheless , these theorems will be essential for the application of thermodynamic concepts to nanotechnology systems which are currently of such interest to biologists , physical scientists and engineers . in many areas of physical chemistry ,researchers strive to understand new systems through deterministic equations of motion .they seek to quantify microscopic forces and understand how a system responds to external perturbations , using techniques such as molecular dynamics simulation . 
at the heart of this endeavouris the notion that if the equations of motion or trajectories of the system are known , then any question about that system may be answered . however , such deterministic equations ( such as newton s equations ) are time - reversible , so that for every trajectory there exists a conjugate , time - reversed trajectory or anti - trajectory " which is also a solution to the equations . the relative probabilities of observing bundles of conjugate trajectories can be used to quantify the macroscopic reversibility " of the system : if the probability of observing all trajectories and their respective anti - trajectories are equal , the system is said to be reversible ; on the other hand , if the probability of observing anti - trajectories is vanishingly small , we say that the system is irreversible .the second law of thermodynamics stipulates that a system evolves irreversibly in one time - forward " direction , _i.e. _ , the probability of all anti - trajectories is zero . however , the second law strictly applies to large systems or over long time scales and does not describe the reversibility of small systems that are of current scientific interest , such as protein motors and nano - machines .this long - standing question of how irreversible macroscopic equations , as summarised by the second law of thermodynamics , can be derived from reversible microscopic equations of motion was first noted by loschmidt in 1876 and has been a paradox since then .boltzmann and his successors have simply side - stepped this issue with boltzmann stating as soon as one looks at bodies of such small dimension that they contain only very few molecules , the validity of this theorem [ the second law of thermodynamics ] must cease " the fluctuation theorem ( ft ) of evans & searles describes how a finite sized system s irreversibility develops in time from a completely time - reversible system at short observation times , to an irreversible one at long times .it also shows how irreversibility emerges as the system size increases .that is , it bridges the microscopic and macroscopic descriptions , relating a system s time - reversible equations of motion to the second law , and provides a quantitative resolution to the long - standing irreversibility paradox .specifically , the ft relates the relative probabilities , , of observing trajectories of duration characterised by the dissipation function , , taking on arbitrary values and , respectively : it is an expression that describes the asymmetry in the distribution of over a particular ensemble of trajectories .the dissipation function , , is , in general , a dimensionless dissipated energy , accumulated along the system s trajectory ; expressions for differ from system to system .however , any trajectory of the system that is characterised by a particular value has , under time - reversible mechanics , a conjugate or time - reversed anti - trajectory with . in this way , the lhs of the ft has also been interpreted as a ratio of the probabilities of observing trajectories to their respective anti - trajectories .the dissipation function , , is an extensive property , _i.e. 
_ , its magnitude scales with system size , and it also scales with the observation time , .thus , eqn [ eqn : ft ] also shows that as the system size gets larger or the observation time gets longer , anti - trajectories become rare and it becomes overwhelmingly likely that the system appears time - irreversible , in accord with the second law .that is , the evolution of a large macroscopic system proceeds preferentially in one direction .in addition , eqn [ eqn : ft ] also shows that the ensemble average of the dissipation function is positive for all for all nonequilibrium systems and for any system size ; _ i.e. _ , , .we will refer to this as the second law inequality . from classical thermodynamics ,the work done by an external field to drive a system from one equilibrium state to another equilibrium state is equivalent to the change of free energy , , between the states , only in the special case where the path is traversed quasi - statically .that is the path between the two states must be traversed so slowly that intermediate , as well as the initial and final states of the system , are all in thermodynamic equilibrium .crooks fluctuation theorem ( crooks ft ) states something quite remarkable . in the case of paths that are traversed at arbitrary rate , ranging from quasi - static to far - from - equilibrium " , the distribution of trajectories , characterised by the work done by the external field over the duration of the trajectory , follows }. \label{eqn : crooks}\ ] ] where , is boltzmann s constant and is the initial temperature of the system on which the external field does work , or equivalently the temperature of the surroundings with which the system is initially at equilibrium .this expression is similar to evans - searles ft in that it relates distributions of trajectories , characterised by an energy , specifically the work , .while eqn [ eqn : ft ] describes the asymmetry in the distribution of trajectories starting from the same initial distribution , crooks ft , eqn [ eqn : crooks ] , relates trajectories initiated from two different equilibrium states , and .that is , it considers ( i ) a distribution , , of _ forward _ trajectories , , where the free energy change between equilibrium states a and b is , and ( ii ) the distribution , of _ reverse _ trajectories , where the respective equilibrium free energy change is . like the ft ,crooks ft also quantifies how irreversibility evolves out of reversible equations of motion .a perfectly reversible ( quasi - static ) system is one where the work required to traverse is equal but opposite in sign to the work required in the time - reversed trajectory , .thus the rhs of eqn .[ eqn : crooks ] is unity for these reversible paths and , in agreement with classical thermodynamics .taking the ensemble average of and using the crooks ft gives , here the notation implies an ensemble average using the distribution function of state a , and the work is measure over forward trajectories . this expressionwas first posed by jarzynski in 1997 before eqn [ eqn : crooks ] was discovered , and is known as the jarzynski equality . 
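a minimal numerical check of the asymmetry relation and of the second law inequality can be made without any dynamics at all . the sketch below ( python ) assumes , purely for illustration , that the dissipation function is gaussian distributed with a variance equal to exactly twice its mean ( the only gaussian compatible with the fluctuation theorem , and the behaviour expected near the mean in the weak - field , long - time limit discussed later in connection with the einstein - sutherland relation ) . it verifies that ln [ p(omega_t = a)/p(omega_t = -a) ] = a , that < exp(-omega_t) > = 1 , and hence that < omega_t > is non - negative ; the chosen mean value is itself an arbitrary illustrative number .

```python
import numpy as np

# Illustrative model only: Omega_t drawn from a Gaussian whose variance is
# exactly twice its mean (the one Gaussian compatible with the FT).
rng = np.random.default_rng(1)
mean_omega = 2.0                                   # assumed dimensionless <Omega_t>
omega = rng.normal(mean_omega, np.sqrt(2.0 * mean_omega), size=2_000_000)

print("<Omega_t>       =", omega.mean())           # positive: second law inequality
print("<exp(-Omega_t)> =", np.exp(-omega).mean())  # ~1 for a distribution obeying the FT

# asymmetry relation:  ln[ p(Omega_t = A) / p(Omega_t = -A) ] = A
width = 0.25                                       # histogram bin width
for a in np.arange(0.5, 3.1, 0.5):
    p_plus = np.mean(np.abs(omega - a) < 0.5 * width)
    p_minus = np.mean(np.abs(omega + a) < 0.5 * width)
    print(f"A = {a:3.1f}   ln(p+/p-) = {np.log(p_plus / p_minus):5.2f}")
```

the same kind of exponential averaging , applied to the work rather than to the dissipation function , is what makes the jarzynski equality introduced above usable in practice .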
it states that the free energy can be determined by measuring the work , , done by an external field along dynamical paths that connect the two states .these forward paths may be traversed _ at arbitrary rates _ , so that the intervening states may not be in thermodynamic equilibrium .this provides a completely new way of treating thermodynamics .if instead of averaging the work , you average the exponential of the work , then you can calculate the equilibrium free energy difference from information obtained along _ nonequilibrium _ paths .on the practical side , eqn [ eqn : jarzynksi ] , suggests that measuring work on small microscopic processes could yield thermodynamic quantities that are traditionally inferred by calorimetric measurements .the importance here is that in order to understand molecular - scale processes , it is necessary to probe them using molecular time / length scales . in the following section we will introduce some background concepts required for understanding the derivation and consequences of the fts .we introduce general equations of motion for nonequilibrium systems and give expressions for the work and heat transferred from the systems .we discuss how thermostatting is achieved in simulations , and in particular discuss the thermostatting mechanisms that produce a canonical equilibrium distribution , since it is the canonical state that was originally treated in the crooks ft . in section 3we review the concepts of the original ft derivation made by evans and searles for systems governed by deterministic mechanics . today , many proofs exist for the fts , extending to systems described by quantum , _ e.g. _ and stochastic dynamics _ e.g. _ . herewe limit our review to the original deterministic treatment and to concepts central to the understanding the ft in the context of modern statistical mechanics and its applications .we extend this description to provide a derivation of the crooks ft for deterministic systems so as to demonstrate the similar basis it has with the evans - searles ft , and we refer collectively to these as the fts .these derivations consider the response of equilibrium systems to external perturbations .however , we also show how steady state fluctuation relations can be derived for systems that approach a unique steady state , and in those cases , how the fts can be used to obtain einstein and green - kubo relations . in section 4we review applications of the fts to experimental and model systems .the degrees of freedom for a system of particles can be the represented by the vectors of time - dependent generalized coordinates , , and momenta , .in addition , a single point in the system s phase - space is denoted as .consider a closed system that is in thermal equilibrium with a reservoir . from equilibrium statistical mechanics , we know that the equilibrium probability distribution of the system is given by the canonical distribution function , where is the equilibrium partition function , and is the internal energy is the phase - variable corresponding to the internal energy . for simplicity , we will use the term internal energy " to refer to , and will refer to the thermodynamic internal energy as the mean internal energy " .] 
which is the sum of the kinetic and potential energies , and of the system .adiabatic _ system can exchange energy with its environment in the form of work .we can think of work as the form of energy exchange that is directly controllable by the environment .for example , it might be desirable to change the mean internal energy , of the system , and this can be achieved by externally controlling some parameter in the potential energy function of the system , .examples of such -parameters include the switch on an externally applied electric field in a crystalline salt , the trapping constant in an optical trap holding a colloidal particle , or a mathematical agent that changes the size of lennard - jones spheres in a computer simulation .we emphasise this mode of external control by formally making time - dependent and by writing the internal energy as when an external agent does work on a system without changing its underlying equilibrium state , which has mean internal energy , we refer to that field as a purely dissipative field , denoting it generally by . this dissipative field does not figure in the underlying equilibrium distribution or partition function , but drives the system away from equilibrium . while it may be possible to represent the external agent using either or , we choose the convention that if and , the system will always relax to an ( nondissipative ) equilibrium state , and that if , the system will never relax to a nondissipative state .this distinction may depend on the state of system ( _ e.g. , _ fluid or solid ) . examples of such dissipative fields include a fluid under a shear flow , dragging a colloidal particle in a fluid , and an electric field acting on a molten salt . under adiabatic conditions ( _ i.e. _ the rate of heat exchange with the reservoir is ) , the combined action of both kinds of external agents , _i.e. , _ a time dependent potential represented by a -parameter and a dissipative field , results in the equations of motion , with given by eqn .[ eqn : fullham ] . is a force " , andneither does it have to always be a vector .for example , it could be a second order tensor , such as the velocity gradient tensor in a fluid .the coupling tensors and are functions of and have no explicit time - dependence , and are formally one tensorial order higher than . ] for an externally driven adiabatic system , the rate of increase of must be identically equal to the rate of work done on the system by the environment .thus , where the superscript _ ad _ emphasises adiabatic conditions .using eqn .[ eqn : adiabaticeom ] , we obtain , where is the volume of the system , and , the dissipative flux due to the field is formally defined through the equation : in the case of small systems such as protein motors or artificial nanomachines , it is quite difficult to thermally isolate the system to achieve perfectly adiabatic conditions .moreover , in most applications of interest , such systems typically function in an environment of constant temperature .molecular dynamics simulations of small systems have employed thermostats " which involve appending eqn [ eqn : adiabaticeom ] with a mathematical constraint to fix the temperature .for example , with a gaussian isokinetic thermostat , eqns .[ eqn : adiabaticeom ] take the form : here , is a thermostat multiplier becomes an additional independent variable and is a function of time governed by an additional equation rather than a direct function of . 
] , and is a diagonal matrix ( with 1 s and 0 s on the diagonal ) that describes which components of the system are thermostatted , and the potential s parameter is held fixed , , the equilibrium these equations , eqn [ eqn : fulleom ] , will relax to is given by , where the dirac delta function accounts for the kinetic energy of the thermostatted particles being held fixed to the value and is the particle mass .the partition function is given by . ] .several other mathematical constraints can be constructed , all of which may be argued to be artificial " .we will discuss the implications of such artificial thermostats , and the constraints they must satisfy shortly .as the system is closed , an increase in the internal energy must equal the sum of work done on the system by the environment and the heat added to the system by the thermostat .however , it is clear that the functional form of the expression for the rate of work must still be given by the relation in eqn .[ eqn : working ] regardless of whether the system is thermostatted or not . from the first law of thermodynamics ,the expression for the rate of heat exchange is , with given by eqn .[ eqn : working ] .the actual expression for the rate of change of depends on the mathematical form of the thermostat . for the thermostat represented in eqn .[ eqn : fulleom ] , the total work done on a closed system is hence , and the heat added to the system is .we note here that the integrals above are path integrals .however , since the dynamics described by eqn .[ eqn : fulleom ] are completely deterministic , and are functions solely of the initial point in phase - space at , and the duration .that is , and .consider a particular trajectory initiated at time at , that terminates after time at .let represent an infinitesimal volume of phase space at time about the point . as the dynamics is deterministic , the trajectory is completely determined by the phase space coordinates at any time along the trajectory , and the duration or observation time , , of the trajectory .consequently , for every initial state within a volume element there exists a unique destination point within volume element . as the trajectories in an infinitesimal bundle around the initial state , , form the later bundle , the ratio of the volumes of the infinitesimal volume elements vary as here is the jacobian of the transformation of the initial to the final , and is the _ phase - space compression factor_. it is noted that the integral on the right - hand side of eqn .[ eqn : phasevolcomp ] is also a path - integral .equations [ eqn : phasevolcomp ] and [ eqn : lambdadef ] show how the volume of a small region of phase - space changes as it evolves in time . for adiabatic systems ,there is no change of phase - space volume along a trajectory , and we require that . from the equations for an adiabatic system ( eqn .[ eqn : adiabaticeom ] ) , and the definition of above , we see that the field and the coupling tensors and must be such that irrespective of whether the system is thermostatted or not .this condition is known as the _ adiabatic incompressibility of phase - space _ ( ai ) . however , for thermostatted systems in a driven steady state , a contraction of phase space occurs continually , as the initial phase volume shrinks to a fractal attractor of lower dimension than the ostensible phase - space. for appropriately selected thermostats has to be extended in the case of the nos - hoover thermostat as detailed in ref .] 
, the phase - space contraction factor is directly proportional to the rate of heat exchange with the thermostat , since the same exclusive set of trajectories passes through both the phase volumes and , the differential probability _ measures _ of the two infinitesimal volumes must be identical : we can express the probability measure in terms of the probability density as where is the time - dependent phase space probability density .the observation that the probability measure is conserved in phase - space also leads to the liouville equation for the probability density : we can recast eqn .[ eqn : liouville ] into the following lagrangian form : from which it can be shown that this equation is also obtained directly from eqns .[ eqn : phasevolcomp ] , [ eqn : dpequality ] , and [ eqn : fdef ]. one may enquire about the effect of the introduction of fictitious thermostats in eqn [ eqn : fulleom ] and the possible introduction of artifacts .as mentioned above , eqn [ eqn : thermostatcondn ] ensures that the equations of motion correctly sample the appropriate equilibrium distribution function in the absence of and when .we also know that these equations of motion do not introduce artifacts when used to determine the linear response of a system to a small external field .further , the equilibrium correlation functions used in the green kubo integrals are affected by the thermostat at most as where is the number of particles .a general description of a system that is driven past a linear response is difficult , and in the nonlinear regime , synthetic thermostatted dynamics as in eqn .[ eqn : fulleom ] may produce artifacts .moreover , mori - zwanzig theory can no longer be combined with the onsager regression hypothesis to rigorously derive stochastic equations . to address this, we can arrange things such that the thermostat only acts on particles that are in a region , which is spatially far enough removed from the nonequilibrium process such that it remains in local equilibrium .a detailed theoretical and simulation study has shown how this approach can , for an infinite family of thermostats , result in the same behavior for the system of interest . equations of motion for isothermal - isobaric systems with a thermostat and barostat which are external to the system of interest , have also been developed .these developments are theoretically important because they allow the derivation of important theorems which require the condition of ergodic consistency , which will be discussed shortly . for driven systemsa satisfying treatment requires a mechanism for heat exchange .the development of synthetic thermostats , which only act on regions removed from the system of interest , allows the ergodic consistency condition to be satisfied without introducing artifacts when far from equilibrium .this outcome is not easy to arrive at by other means .as mentioned in the introduction , the evans - searles ft shows how irreversibility emerges naturally in systems whose equations of motion are time - reversible . 
in order to fully appreciate the substance of this ft, we need to first define two fundamental concepts : microscopic time - reversibility , and macroscopic irreversibility .the equations of motion in section [ sec : basics ] describe the time evolution of a point , , and may depend explicitly on the time due to the possible time - dependencies of and .if the equations of motion are reversible , then there exists a time reversal mapping that transforms the point to such that if we generate a trajectory starting at and terminating at , then under the same dynamics , we start at and arrive back at after time t. we refer to a trajectory and its anti - trajectory as a conjugate pair of trajectories. the time - average of properties that are even under the mapping will have equal values for the trajectory and its conjugate , whereas the time - average of properties that are odd under the mapping will have values with equal magnitude , but opposite signs for the trajectory and its conjugate .for many dynamics , ( _ e.g. _ newtonian dynamics ) , the appropriate mapping gives .for the equations of motion . this phase space variable must be reversed along with the momentum upon applying the time reversal mapping .] , eqn [ eqn : adiabaticeom ] or [ eqn : fulleom ] , to satisfy this condition we must have and to in effect provide time reversal symmetry .this is why the evans - searles ft can often allow protocols which have an odd time parity . ] let us now consider a system of particles whose overall equations of motion are time - reversible .as discussed above , for every trajectory that is initiated at and terminates at in a system with microscopically time - reversible dynamics , there exists a unique anti - trajectory that starts at the phase - space point at and ends at at .the bundle of anti - trajectories at time passes through the volume element centered about the point .however , the size of the volume element is equal to that of .moreover , if there is a volume contraction from to as shown in figure [ fig : tubes ] , then there is an equivalent volume expansion associated with the bundle of anti - trajectories . given a system whose equations of motion are microscopically time - reversible ,is the macroscopically observed behaviour reversible as well ? as kelvin and loschmidt pointed out in the 1870 s , because newtonian equations of motion are microscopically time - reversible , for every trajectory there is a anti - trajectory which is also a solution to the equations .one might then conclude that microscopically time - reversible systems must also be macroscopically reversible .however the second law of thermodynamics stipulates that a macroscopic system evolves overwhelmingly in one , time - forward direction and is irreversible " .the question of how microscopically time - reversible dynamics gives rise to observable macroscopic irreversibility , is indeed `` loschmidt s paradox '' . to resolve this paradox , we first need an unambiguous measure of `` macroscopic irreversibility '' ,that is consistent with classical thermodynamics in the thermodynamic limit , and applies to microscopic time - reversible equations of motion .a system is said to undergo a _ macroscopically reversible _ process in the time interval , if 1 .the system is _ ergodically consistent_. 
that is for every trajectory that initiates at , the starting coordinates of its respective anti - trajectory , is represented in the phase space of the system at , or equivalently , the probability density of the initial coordinates of anti - trajectories at time is non - zero : , for all .the probability of observing any bundle of trajectories , occupying an infinitesimal volume , is equal to the probability of observing the conjugate bundle of anti - trajectories , or the latter condition for macroscopic reversibility can be written more conveniently in terms of the distribution function of the phase space : or , but , as mentioned earlier , the volume of is the same as .hence , from eqn .[ eqn : phasevolcomp ] , we see that eqn [ eqn : macrocondition ] , a condition for macroscopic reversibility , becomes - \int_0^t \lambda({\bf \gamma}_s , s ) \,ds \,=\ , 0\ , , \label{eqn : condition}\ ] ] for any initial coordinate .indeed , a quantitative measure of _ irreversibility _ associated with a system with microscopically time - reversible dynamics over the interval , may be defined as the inequivalence of eqn [ eqn : condition ] } \,\nonumber\\ & = & \ln { \left [ \frac{f({\bf \gamma}_0,0)}{f({\bf \gamma}^\ast_t,0 ) } \right ] } - \int_0^t \lambda({\bf \gamma}_s , s)\ , ds\,.\label{eqn : omega1}\end{aligned}\ ] ] the dissipation function is completely determined for a deterministic trajectory by the initial coordinate , , and the duration of the trajectory , .we note here that the time - reversibility of the dynamics dictates that conjugate pairs of trajectories are characterised by the same magnitude of , but of opposite sign : furthermore , if for all trajectories initiated anywhere in phase - space , then the system is in equilibrium and the probabilities of observing any trajectory and its corresponding anti - trajectory are equal . if for a trajectory , then the corresponding anti - trajectory is less likely to be seen , and if the ensemble average is greater than zero , , we have macroscopic dynamics moving in the forward direction " .if , then we would have macroscopic dynamics in the reverse direction .thus , , is the condition for macroscopic irreversibility .our knowledge of the second law however seems to suggest that the arrow of time points unambiguously in one firm direction , accordingly we explain how this comes about next .we consider trajectories of duration in phase - space by selecting all those initial coordinates for which takes on some value between and thus obtain the probability density ({\bf \gamma}_0,0 ) .\label{eqn : forwardp}\ ] ] upon recognising that is merely a dummy variable of integration , we may write down the conjugate probability as , ({\bf \gamma}_t^\ast,0 ) . \label{eqn : reversep}\ ] ] note that eqn [ eqn : reversep ] selects those trajectories which are conjugate to those which are selected by eqn [ eqn : forwardp ] .using the definition of in eqn [ eqn : omega1 ] along with eqn [ eqn : posneg ] and eqn [ eqn : phasevolcomp ] , we have }\int d{\bf \gamma}_0 \ , \delta[\omega_t({\bf \gamma}_0)-{\cal a})f({\bf \gamma}_0 , 0),\end{aligned}\ ] ] which leads to the evans - searles fluctuation theorem ( evans - searles ft ) : }\,,\ ] ] andusing this we can average over all values of to give the second law inequality , , . in the above derivation of the evans - searlesft it was assumed that : 1 .the dynamics is ergodically consistent with the initial distribution function 2 . , and 3 .the dynamics are deterministic and microscopically reversible . 
for systems of particles, the third condition implies that the time - dependent and must have an even time - parity .these are sufficient conditions for the evans - searles ft to be valid , but the condition of microscopic reversibility can be relaxed to some degree , and stochastic versions of the evans - searles ft exist . what is the significance of the evans - searles ft ?one of the most important consequences of the evans - searles ft is that it shows how macroscopic irreversibility can eventuate from microscopically reversible equations of motion . as described above, the systems we are considering are microscopically time - reversible .the evans - searles ft defines the variable which is a time - averaged phase variable and is zero for all initial phases if the system is macroscopically reversible , but will be non - zero for some initial conditions if it is macroscopically irreversible .furthermore , the evans - searles ft shows that for any ergodically consistent , microscopically time - reversible system , and only zero if the system is at equilibrium .the significance of the evans - searles ft has been discussed in , and other features are discussed in sections 3.4 below however , where does the irreversibility come from ? in obtaining the evans - searles ft , it is assumed that the initial distribution function is known and then , typically , the response of this system to a field , , or variation of is considered .thus , we make the assumption of causality .if , instead , we had assumed that the system ends in a known state , we would have obtained the result .therefore the assumption of causality underlies the final result .similar to the work , , the dissipation function can be expressed in terms of a trajectory of duration with initial coordinate , _i.e. _ .analogous to , we restrict ourselves to trajectories initiated at equilibrium in the canonical ensemble and consider the action of both , a time - dependent -controlled potential where initially and finally , as well as a time - dependent dissipative field , .for the definition of the dissipation function , eqn [ eqn : omega1 ] , becomes -\beta \int_0^t ds\ , \dot{q}({\bf \gamma}_s , s).\end{aligned}\ ] ] noting that the coordinates of the trajectory and anti - trajectory , and , differ only in the direction of momenta , , and that is even in momenta , the lhs can be cast as a time integral over the trajectory , ,\ ] ] so that \\ & = & w({\bf \gamma}_0 , t ) - \int_0^tds\bigg[\dot \phi({\bf q } , \lambda_s ) + \dot \phi({\bf q},\lambda = a)\bigg ] , \end{aligned}\ ] ] or explicitly in terms of the potential and the external field as , .\end{aligned}\ ] ] we can generate probability distribution functions for , which is the work done on the system , be it in terms of a parametrically , , dependent potential or a dissipative field , , in the same way that we generated probability distributions in section 2 for .in contrast to the evans - searles ft , the crooks ft considers the probability of observing trajectories from two different equilibrium states . 
the probability density for a trajectory of duration , initiated at equilibrium with , is f_{eq}({\bf \gamma}_0 , \lambda = a),\ ] ] where , denotes the work done over a trajectory of duration , initiated at , and is the equilibrium distribution in the canonical ensemble .now the reverse trajectory or anti - trajectory starts at coordinates , and is guaranteed under deterministic dynamics to give a value of work that is equal and opposite that of the forward trajectory , see figures [ fig : crookfw ] and [ fig : crookrv ] . in this case, the reverse trajectory must also initiate under equilibrium conditions , however with , and the time - dependence of the parameter and field must be reversed compared to the forward trajectory .so , the probability density for this reverse trajectory is f_{eq}({\bf \gamma}^\ast_t , \lambda = b).\ ] ] at this point is it useful to note that if the system is driven strongly , _i.e. _ far - from - equilibrium , the destination coordinate in the forward trajectory , may not be significantly weighted in the equilibrium distribution associated with the initial coordinates of the reverse trajectory ; or in other words , } } { z_b}\ ] ] may be very small .that is , the reverse trajectory can be rare " , creating a difficult challenge in sampling the distribution in the crooks ft , causing the convergence of the ensemble average in the jarzynski equality to become very slow . recasting in terms of the work done on the system , using the first law , _ i.e. , _ + k_bt\int_0^t ds \lambda(s),\end{aligned}\ ] ] and noting that is an even function of the momenta , _i.e. _ we see , }\frac{\delta v({\bf \gamma}_0)}{\delta v({\bf \gamma}^\ast_t ) } \end{aligned}\ ] ] where we have used the phase space compression factor given earlier .now and the jacobian is such that , \exp{\bigg[\beta w({\bf \gamma}^\ast_t , t)\bigg ] } f_{eq}({\bf \gamma}_t , \lambda = a).\ ] ] furthermore , as forward and reverse trajectories have equal but opposite values of under time - reversible dynamics , thus } \int_{d_a } d{\bf \gamma}_t \ , \delta\big [ w({\bf \gamma}_t , t)-{\cal a}\big ] f_{eq}({\bf \gamma}_t , \lambda = a).\end{aligned}\ ] ] the integral on the rhs can be identified as , resulting in the crooks ft : }.\ ] ] in the above derivation of the crooks ft for the deterministic system it was assumed that : 1 .the dynamics is such that any phase point for which , 2 . , and 3 .when the time evolution of and are reversed , the dynamics remain deterministic and microscopically time - reversible .these are sufficient conditions for the crooks ft to be valid , but the condition of reversibility can be relaxed to some degree , and stochastic versions this ft exist .thus far , we have focused upon fts that apply to a system driven out of an initial equilibrium state by an external field , characterised by , or a parametric change in the potential characterised by .indeed , the evans - searles ft applied to systems driven from a known initial state over transient trajectories , is often referred to as the transient fluctuation theorem ( tft ) . 
however , according to the derivations of the evans - searles ft , the initial phase - space distribution is not restricted to time - invariant or even equilibrium distributions .the only requirement the evans - searles ft places on the initial distribution function is that it is known and expressible in the ostensible dimension of the equations of motion ( this is not the case for crooks ft ) .here we consider the fts applied to trajectories under a steady - state ; _ i.e. _ , the system is acted upon by a purely dissipative , constant external field , .there are two steady - sate fluctuation theorems ( ssfts ) that appear in the literature .both can be traced back to the original paper on fts that focused upon isoenergetic equations of motion ; but it is only later that two separate theorems were distinguished : ( i ) the steady - state version of the evans - searles ft and ( ii ) the gallavotti - cohen ft .in its simplest formulation , the ssft of evans & searles involves a rearranged from of eqn [ eqn : ft ] applied in the long time limit to trajectories of a system wholly in a nonequilibrium steady - state ; _ i.e. _ the distribution function is time - invariant . a more complete derivation of the ssft valid under conditions including the decay of correlations is available ; however here we provide a simpler presentation that is physically compelling , and suitable for those primarily interested in a scientific justification .the argument of the evans - searles ft , applied to the steady state is }\nonumber\\ & = & \ln{\bigg[\frac{f^{ss}({\bf \gamma}_0,0)}{f^{ss}({\bf \gamma}^\ast_t,0)}\bigg]}-\int_0^t\lambda({\bf \gamma}_s , s)ds \label{eqn : omegass}\end{aligned}\ ] ] where is now the phase - space distribution function associated with a steady - state , rather than with an equilibrium state ( as in the evans - searles ft applied to transient trajectories ) .however , typically this definition of is difficult to to implement .firstly , steady - state distribution functions for the types of deterministic dynamics under consideration are not generally known .what is known is that , in the steady - state , the dynamics approach a strange attractor that has a different fractal dimension to the ostensible phase space .even if we knew the details of this attractor , it would still be difficult to apply the evans - searles ft as it describes bundles of phase - space trajectories in the phase - space dimension and not in the dimension of the strange attractor .note however , that there are special cases where these steady - state distribution functions can be expressed simply and exactly under stochastic equations of motion , and eqn [ eqn : ft ] may be applied .in general these steady - state distribution functions are not known , and consequently , it is not possible to construct exact expressions for for deterministic trajectory segments of duration that are wholly at a nonequilibrium steady - state .however an approximate steady - state dissipation function can be constructed from trajectories initiated at a known equilibrium , in the absence of the dissipative field , .this distribution function is often referred to as the kawasaki distribution function , and can be considered to form the basis of the formal proof . 
at time , the dissipative field is introduced , and we can express associated with this trajectory in terms of its instantaneous rate of change , at time : here , is some arbitrary long time , say several maxwell times , so that the fluid has completely relaxed into a steady - state .thus , is cast as a sum of transient and steady - state contributions with the steady - state contribution , identified as the steady - state dissipation function , , used to approximate with an error or order .it is instructive to express these dissipation functions as time - averages , such that we make the physically compelling argument that , in the long time limit , the distribution function for steady - state trajectories will asymptotically converge to that for the full transient trajectories : however , the fluctuations in also vanish in the long time limit , and , in order that the ssft be of any importance , it is necessary that these fluctuations vanish more slowly than , the error in the approximation . to argue that this is the case , we re - express as a sum over contiguous trajectory segments of duration : if is larger than the longest correlation time in the system , then the sum is composed of independent segments and the variance in the sum is proportional to the number of segments or .the factor in front of the sum decreases the variance of the sum by a factor .thus , the standard deviation of the steady state dissipation function , , along a steady - state portion of a trajectory is inversely proportional to , and decays at a slower rate than that of the error in the approximation of with .consequently , we can approximate in the ft , eqn [ eqn : ft ] , with the steady - state dissipation function , leading to the ssft the fluctuation relation of the gallavotti - cohen ft can be written where the average phase space compression factor ( or the divergence of the flow ) , measured in the steady state , is given as the original proposal of this ft was made for the special case of isoenergetic dynamics , for which .however , subsequently the work of gallavotti and cohen strongly suggests that under appropriate conditions eqn [ eq : gcft ] ( and with a restriction on the values of is restricted to values of bounded by a value : . in the small field limit this valueis given to leading order as . ] ) can be applied to a larger class of dynamical systems ( e.g. constant temperature systems ) .they arrived at eqn [ eq : gcft ] through a formal derivation , which drew upon the sinai - ruelle - bowen ( srb ) measure ( for a discussion see ) , which requires the dynamics to be an anosov diffeomorphism .a necessary but insufficient condition for this is that the dynamical system must be hyperbolic .this means that the number of expanding and contracting directions on the attractive set must be equal , or in other words the number of positive and negative finite time lyapunov exponents must be equal and no zero exponents are allowed . in general , the equations of motion , eqn [ eqn : fulleom ] , do not form an anosov diffeomorphism . to address thisgallavotti and cohen introduced a new hypothesis , termed the chaotic hypothesis , , of a many - particle system can be regarded as a mixing anosov map . '' in the term `` transitive anosov map '' was used to mean `` mixing anosov map '' . 
]which , for the purposes of the gallavotti - cohen ft , allows many - body dynamics to be treated as an anosov diffeomorphism .unfortunately , as yet , there is no way to independently ascertain if a physical system may be treated as anosov diffeomorphic .the requirements for the valid application of the gallavotti - cohen ft to physical systems are therefore extremely difficult to establish .there is a large body of computer simulation results , for various processes , that have tested the steady state fluctuation theorems , e.g. , .we known of no case for which the gallavotti - cohen ft converges faster than the evans - searles ft . for temperature regulated dynamics ,when the dissipative field strength is very small , the gallavotti - cohen ft can take extremely long times to converge .indeed as the dissipative field strength approaches zero the amount of time it takes the gallavotti - cohen ft to converge diverges . to understand thisconsider the arguments of the evans - searles ft and the gallavotti - cohen ft .when the field strength approaches zero so does the instantaneous dissipation function .more precisely the average value of the instantaneous dissipation function , to leading order , is , and the standard deviation is .for the phase space compression factor , the mean is and the standard deviation is , where is the amplitude of the standard deviation at equilibrium .the difference between the behaviour in the amplitude of the fluctuations , , for these two quantities is crucial . as approaches zero so to does the amplitude of the fluctuations in but not those in . nowthe form of the fluctuation formulae is asymmetric . in the limit ,eqn [ eqn : ssft ] ( the ssft ) remains consistent with a given trajectory segment being equally likely to occur as its anti - trajectory segment .this is a necessary condition for equilibrium .in contrast eqn [ eq : gcft ] ( the gallavotti - cohen ft ) is not consistent with this , due to the fluctuations in remaining finite when .one way this could be resolved is for the time averaging , or the time for which the gallavotti - cohen ft is given to converge , to be so long that there are no significant fluctuations remaining .however it is specified by the theory that the largest fluctuations for which the gallavotti - cohen ft may be validly applied vanishes in the small field limit .the evans - searles fluctuation theorem , as well as crooks fluctuation theorem have been reviewed here as recent theorems in nonequilibrium statistical mechanics .here we show that the fts , and in particular the evans - searles ft , is completely consistent with the long - standing and well - known relations in the field , namely the einstein - sutherland relation and the green - kubo relations .the einstein - sutherland relation dates back to the very early days of non - equilibrium statistical mechanics ; it can be written as this important relation describes the average steady - state velocity of a particle , , under an applied field , , to the variance in the particle s displacement over time in the absence of the field , which is commonly referred to as the diffusion constant , , where .starting from the evans - searles ft , we reformulate a generalised form of this einstein - sutherland relation , and from that , the green - kubo relations .while this does not produce new results , it demonstrates the fts consistency with important existing theorems in nonequilibrium statistical mechanics , and it also emphasises / clarifies the conditions necessary for the 
application of these theories , as we show later , in the case of supercooled liquids . to derive the more generalised version of the einstein - sutherland relation , eqn [ eq : gk - suthe ] , from the ft , we first need to identify the product of the particles drift velocity and the applied field , , as a specific example of a dissipative field flux , that we have represented by , which , in the case of the flux and constant field being in the same direction , we write more simply as . under steady - state, the time - averaged dissipative flux is defined as where is a time long enough after the application of the field so that the system is at a steady - state . the steady - state fluctuation theorem , eqn [ eqn : ssft ] ,may then be written as in the limit of long time , we may invoke the central limit theorem , which states that close to the mean the distribution of will be gaussian . additionally in the limit of small field strength , values of close to the mean will dominate : now as the variance in the distribution of is independent of the field direction or sign of , then , to leading order in , the variance behaves as the ssft , eqn [ eq : gk - ssft ] , the central limit theorem eqn [ eq : gk - gauss ] and eqn [ eq : gk - variance ] combine to give a generalised einstein - sutherland relation : it is generally known that the einstein - sutherland relation is only valid to linear order in the field , .however , the ft provides more detailed understanding of how this relation fails under large fields .when the field is increased , the mean dissipative flux , will also increase , figure [ fig : gaussian ] .when the mean is large relative to the standard deviation , then for every typical value of the flux , its conjugate value in the ssft will be represented in the wings of the distribution where the central limit theorem no longer applies . in this instance , the generalised einstein - sutherland relation is invalid , even if the equilibrium variance is replaced with the variance under steady - state , or .molecular dynamics simulations of planar shear have shown that the breakdown of the central limit theorem dominates over the approximation of ignoring terms of or higher in the variance that describes the distribution near its mean . . in the case of a single ,tagged particle , interacting with a constant field , and embedded in a supercooled liquid , the amount of time it takes for the the steady - state ft to converge as well as the the time it takes for the distribution to become gaussian , increases rapidly upon approach to the glass transition .moreover , the variance decreases with time .as this time increases the variance decreases inversely proportionally . 
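as a concrete , self - contained illustration of the einstein - sutherland route and the green - kubo route to the same transport coefficient , the sketch below ( python ) uses an underdamped langevin particle , for which the diffusion constant is known exactly , d = k_b t/(m gamma) ; the model and all parameter values are assumptions made only for illustration . d is estimated once from the mean - squared displacement and once from the time integral of the equilibrium velocity autocorrelation function .

```python
import numpy as np

# Underdamped Langevin particle, used as a stand-in for a real fluid because
# its diffusion constant is known exactly: D = kT/(m*gamma).
# All parameter values are illustrative (reduced units).
rng = np.random.default_rng(2)
kT, m, gamma = 1.0, 1.0, 1.0
dt, nsteps, npart = 2e-3, 25_000, 10_000        # total time t = 50/gamma

v = rng.normal(0.0, np.sqrt(kT / m), npart)     # equilibrium (Maxwellian) start
x = np.zeros(npart)
v0 = v.copy()
vacf = np.empty(nsteps)

for n in range(nsteps):
    vacf[n] = np.mean(v0 * v)                   # <v(0) v(t)> over the ensemble
    x += v * dt
    v += -gamma * v * dt + np.sqrt(2.0 * gamma * kT / m * dt) * rng.normal(size=npart)

t_total = nsteps * dt
D_exact = kT / (m * gamma)
D_einstein = np.mean(x**2) / (2.0 * t_total)    # MSD/(2t); small O(1/(gamma*t)) bias
ncut = int(10.0 / gamma / dt)                   # integrate VACF over ~10 relaxation times
D_greenkubo = np.sum(vacf[:ncut]) * dt          # integral of the autocorrelation function
print("exact       :", D_exact)
print("einstein    :", D_einstein)
print("green-kubo  :", D_greenkubo)
```

both estimates agree with the exact value to within a few per cent for the run lengths chosen here ; as discussed in the following paragraphs , which route converges faster in practice depends strongly on the system .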
as the nominal glass transition is approached the strongest field for which a linear response may be observed in the steady state , vanishes .the variance of the flux may be expressed in terms of the integral, in the long time limit , the second term on the rhs vanishes and combing this with the generalised einstein - sutherland relation , eqn [ eq : gk - gen - einstein ] , gives the celebrated green - kubo theory for steady - state : it can be used to obtain a transport coefficient in terms of equilibrium fluctuations in the form of an autocorrelation function .one might wonder why the green - kubo theory holds such important status given that the presentation here shows it to be equivalent to the einstein - sutherland relation .in contrast to the einstein - sutherland relation the green - kubo theory is also applicable to time dependent phenomena .a time dependent version of the green - kubo theory can not be obtained from the ft which must satisfy definite time parity conditions .an example of where the ft has been used is couette flow or planar shear using the sllod equations of motion , which in the absence of a fictitious thermostat and for the case of constant shear rate , are equivalent to newtons equations of motion .as we are controlling the shear rate externally we identify it as the external field and the flux as , where is the element of the pressure tensor .the dissipative field or entropy production for couette flow is then the shear viscosity is the rate at which work is being done on the fluid divided by the product of the volume and the shear rate squared .the green kubo expression for the viscosity is thus and the einstein - sutherland expression is if the system is very viscous , the green - kubo expression , eqn [ eq : gk - viscosity - gk ] , will require a long time to converge and consequently , the generalised einstein - sutherland expression , eqn [ eq : gk - viscosity - einst ] , will probably be the better method to extract the viscosity in such a situation .in contrast , if we wish to calculate the self - diffusion coefficient for a very viscous system , the einstein - sutherland expression for the diffusion , can be very slow to converge while the green kubo expression , , will usually converge quite rapidly .much of the work done in developing and extending the fluctuation theorems was accomplished by theoreticians and mathematicians interested in non - equilibrium statistical mechanics . until 2002 ,demonstrations of the theorems were limited to computer simulations and there were no practical experimental demonstrations of the theorems , despite the range of interests in nano / micro machines , or molecular devices that impose nanometer scale displacements with piconetwon scale forces .such small machines include single biomolecules that act as molecular motors and whose experimental observation highlight the nonequilibrium phenomena described by the fts .linear motors , such as the action - myosin or the kinesin - microtubule motor are fuelled by proton currents or atp hydrolysis and function as integral parts of cellular metabolism , and consequently , they work under inherently nonequilibrium conditions . 
over time , on average , these molecular engines must not violate the second law ; however occasionally they run backwards " , converting heat from the surroundings to generate useful mechanical / chemical energy .this work , done on the molecular time and length scales , will have a natural variation or spread of values , and the conjecture is that this is governed by the fts . in 2002the fts were demonstrated experimentally by two independent groups , each with a unique focus and both using optical tweezers .et al _ demonstrated the evans - searles fluctuation theorem by monitoring the transient trajectory of a single colloidal bead in a translating optical trap .simultaneously , liphardt _et al_ , used optical tweezers to pull the ends of a dna - rna hybrid chain , measuring the work required to unravel or unfold a specific domain in the chain .these experiments had complementary aims : the colloidal experiment was a classical model system constructed to cleanly demonstrate , as rigorously as possible in experiment , the evans - searles ft .in contrast liphardt s rna - unfolding experiment importantly demonstrated the application of crooks ft to a complex biomolecular system , highlighting the potential practical use of fts to a wider range of scientists .in this section we review both experiments in some detail before more briefly mentioning other more recent experimental applications of the fts , as well as other proposed experimental systems and implications .an optical trap is formed when a transparent , micron - sized particle , whose index of refraction is greater than that of the surrounding medium , is located within a focused laser beam .the refracted rays differ in intensity over the volume of the sphere and exert a subpico - newton force on the particle , drawing it towards the region of highest intensity , _i.e. _ , the focal point or trap center .the optical trap is harmonic ; a particle located a distance from the center of the trap has an optical force , , acting to restore its position to the trap center . is the trapping constant which is determined by the distribution of particle positions at equilibrium and is tuned by adjusting the intensity of the laser . using an objective lens of high numerical aperture , the optical trapping is strongest in the direction perpendicular to the focal plane , such that particle remains localised entirely within the focal plane , fluctuating about the focal point . as the particle position is measured at mhz frequency , _i.e. _ over timescales significantly large that inertia of the colloidal particle is negligible , the measured optical force , balances any applied forces , either forces arising from the surrounding solvent , such as brownian or drag forces , or the tension associated with a tethered chain molecule such as dna or rna .the first experiment that demonstrated the fluctuation theorems was carried out by wang _et al _ .they monitored the trajectory of a single colloidal particle , weakly held in a stationary optical trap that was translated uniformly with constant , vanishingly small velocity starting at time .initially , the particle s position in the trap is distributed according to an equilibrium boltzmann distribution with an average particle velocity of 0 . with trap translation ,the particle is displaced from its equilibrium position until , at some time later , the average velocity of the particle is equal to the velocity of trap . from this pointthe system is in a nonequilibrium steady state . 
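a minimal overdamped langevin caricature of this translating - trap experiment is sketched below ( python ) ; the trap stiffness , drag coefficient and trap velocity are all given illustrative values rather than the experimental ones . it uses the result , derived in the following paragraph , that for this purely dissipative translation the dissipation function reduces to omega_t = (1/k_b t) int_0^t v_trap f_opt(s) ds , with f_opt the instantaneous optical force , and it counts the transient trajectories for which omega_t < 0 , the rare events that would look like second - law violations in a macroscopic system .

```python
import numpy as np

# Overdamped Langevin sketch of a colloidal bead in a harmonic optical trap
# that starts translating at constant velocity at t = 0.
# Reduced units; all values are illustrative, not the experimental ones.
rng = np.random.default_rng(3)
kT, k, gamma, v_trap = 1.0, 1.0, 1.0, 0.5
dt, nsteps, ntraj = 1e-3, 2000, 50_000          # transient trajectories of duration t = 2

x = rng.normal(0.0, np.sqrt(kT / k), ntraj)     # equilibrium start, trap initially at rest
omega = np.zeros(ntraj)

for n in range(nsteps):
    f_opt = -k * (x - v_trap * n * dt)          # optical restoring force
    omega += v_trap * f_opt * dt / kT           # Omega_t = (1/kT) * int v_trap * F_opt dt
    x += f_opt / gamma * dt + np.sqrt(2.0 * kT / gamma * dt) * rng.normal(size=ntraj)

print("<Omega_t>                 =", omega.mean())            # positive on average
print("fraction with Omega_t < 0 =", np.mean(omega < 0.0))    # 'second-law-violating' runs
print("<exp(-Omega_t)>           =", np.exp(-omega).mean())   # ~1, follows directly from the FT

# integrated ES-FT:  P(Omega_t < 0)/P(Omega_t > 0) = < exp(-Omega_t) >_{Omega_t > 0}
lhs = np.mean(omega < 0.0) / np.mean(omega > 0.0)
rhs = np.exp(-omega[omega > 0.0]).mean()
print("integrated FT  lhs, rhs   =", lhs, rhs)
```

with these illustrative parameters a substantial fraction of the short trajectories has omega_t < 0 , and the two sides of the integrated fluctuation theorem quoted in the next paragraph agree to within sampling error .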
to determine the dissipation function ,consider that the the external field is purely dissipative , _i.e. _ and , so that the dissipation function is with the ability to resolve nanometer - scale particle displacements and femtonewton scale optical forces , was determined with sub- resolution .as there is no change in state of the underlying state of the system , and the field has even time - parity , the evans - searles ft and the crooks ft reduce to the same expression and equivalently describe the distributions of we expect , from the second law , that work is done to translate the particle - filled optical trap , or , but according to the fts there should also be a nonvanishing probability of observing short trajectories where , that is thermal fluctuations _ provide _ the work . indeed ,wang _ et al _ showed trajectories with , persisted for 2 - 3 seconds , far longer than had been demonstrated by simulation .however , in this initial 2002 experiment there was an insufficient number of trajectories to properly sample the full distribution , , and the authors instead tested a coarse - grained form of the evans - searles ft , the integrated evans - searles ft : \rangle_{\omega_t>0},\ ] ] where the brackets on the rhs denote the average over that part of the distribution for which .later , wang et al revised this same experiment , and sampled a larger number of trajectories , enabling a direct demonstration of the evans - searles ft with a purely dissipative field .moreover , they also translated the particle - filled optical trap in a circular or race - course " pattern , producing one long single trajectory , which outside of the initial short time interval , was steady - state .using contiguous segments of this single steady - state trajectory , they demonstrated the steady - state version of the evans - searles ft .a particle in an optical trap was also used to demonstrate the distinction between the evans - searles ft and the crooks ft , in the so called capture " experiment . in this single particle experiment ,the strength of the stationary optical trap , or the trapping constant , is changed instantaneously , and the time - dependent relaxation of the particle position from one equilibrium distribution to another distribution is recorded . for this experiment ,a particle is localised in a stationary trap of strength over a sufficiently long time that its position is described by an equilibrium distribution . at time , the optical strength is increased discontinuously from to , , so that we more tightly confine or capture " the particle .alternatively , we can decrease the trap strength from to , to release " the particle .thus , the external field parameter , is the time - dependent trap strength , , which varies discontinuously , and in the absence of a purely dissipative field , or the particle s position is recorded as it relaxes to its new equilibrium distribution and the different functions and are evaluated over an ensemble of nonequilibrium trajectories .work is the change in the internal energy that occurs with the change in the trapping constant : note that will always be positive if the trap strength is and consequently , distributions for can not be gaussian . 
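the capture protocol is simple enough that the work distribution and both free - energy relations can be checked in a few lines . the sketch below ( python ) assumes an instantaneous jump of the trapping constant from k0 to k1 at t = 0 , with illustrative values ; for this one - dimensional harmonic model the exact free energy difference is delta f = ( k_b t/2 ) ln( k1/k0 ) . the work w = ( k1 - k0 ) x^2/2 is indeed one - signed and far from gaussian , yet the jarzynski average over the forward works recovers delta f , and the forward and reverse work histograms satisfy the crooks ft .

```python
import numpy as np

# "Capture" sketch: the trapping constant jumps instantaneously from k0 to k1
# at t = 0.  Illustrative values; for this 1-d harmonic model
# DeltaF = (kT/2) ln(k1/k0) exactly.
rng = np.random.default_rng(4)
kT, k0, k1, n = 1.0, 1.0, 4.0, 500_000
dF_exact = 0.5 * kT * np.log(k1 / k0)

# forward protocol: equilibrate with k0, then switch to k1.  W_f = (k1-k0) x^2 / 2 > 0
x0 = rng.normal(0.0, np.sqrt(kT / k0), n)
w_f = 0.5 * (k1 - k0) * x0**2

# reverse protocol ("release"): equilibrate with k1, then switch back to k0.  W_r < 0
x1 = rng.normal(0.0, np.sqrt(kT / k1), n)
w_r = 0.5 * (k0 - k1) * x1**2

# Jarzynski estimate of DeltaF from the irreversible forward works alone
dF_jarzynski = -kT * np.log(np.mean(np.exp(-w_f / kT)))
print("exact     dF =", dF_exact)                 # ln(4)/2, about 0.693
print("jarzynski dF =", dF_jarzynski)
print("<W_f>        =", w_f.mean(), "  (well above dF: dissipated work)")

# Crooks check: ln[ P_F(W) / P_R(-W) ] should equal (W - dF)/kT
bins = np.linspace(0.1, 3.0, 15)
counts_f, _ = np.histogram(w_f, bins=bins)        # same sample size and bins, so the
counts_r, _ = np.histogram(-w_r, bins=bins)       # count ratio estimates the density ratio
centers = 0.5 * (bins[:-1] + bins[1:])
ok = (counts_f > 0) & (counts_r > 0)
print("crooks lhs:", np.round(np.log(counts_f[ok] / counts_r[ok]), 2))
print("crooks rhs:", np.round((centers[ok] - dF_exact) / kT, 2))
```

because small - work trajectories dominate the exponential average , the estimate converges quickly for this weakly driven protocol ; for strongly driven protocols the rare - trajectory sampling issue discussed below reappears .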
as all trajectories must initiate under equilibrium conditions under , the probability distribution of is a boltzmann distribution and the distribution of is then simply . thus , if we consider the ensemble average , where in jarzynski s equality . however , it does not negate the validity of either relation . one may also note that this same sampling problem can occur with evans - searles ft also . however , in evans - searles ft , the external agents must have even time parity ; it is the fast " application of external agents that renders rare trajectories with that can be difficult to sample in experiment or simulation . ]
[ fig : gaussian : distribution for the time - averaged flux , with the ensemble average denoted by one dashed line ; the other dashed line is at the value which the fluctuation theorem compares to the mean . as time proceeds , in the limit , the variance decreases like . ]
wang gm , sevick em , mittag e , searles dj , evans dj . 2002 . experimental demonstration of violations of the second law of thermodynamics for small systems and short time scales . _ 89:050601
carberry dm , reid jc , wang gm , sevick em , searles dj , et al . 2004 . fluctuations and irreversibility : an experimental demonstration of a second - law - like theorem using a colloidal particle held in an optical trap . _ 92:140601
chelli r , marsili s , barducci a , procacci p. 2007 . recovering the crooks equation for dynamical systems in the isothermal - isobaric ensemble : a strategy based on the equations of motion . _ j. chem .
|
fluctuation theorems , which have been developed over the past 15 years , have resulted in fundamental breakthroughs in our understanding of how irreversibility emerges from reversible dynamics , and have provided new statistical mechanical relationships for free energy changes . they describe the statistical fluctuations in time - averaged properties of many - particle systems such as fluids driven to nonequilibrium states , and provide some of the very few analytical expressions that describe nonequilibrium states . quantitative predictions on fluctuations in small systems that are monitored over short periods can also be made , and therefore the fluctuation theorems allow thermodynamic concepts to be extended to apply to finite systems . for this reason , fluctuation theorems are anticipated to play an important role in the design of nanotechnological devices and in understanding biological processes . these theorems , their physical significance and results for experimental and model systems are discussed .
keywords : fluctuation theorems , non - equilibrium statistical mechanics , far - from - equilibrium processes , 2nd law of thermodynamics , reversibility , free energy change
|
the field of chemical physics is full of phenomena that occur quantum mechanically , with observable consequences , despite being forbidden by classical mechanics .all such processes are labeled as tunneling with the standard elementary example being that of a particle surmounting a potential barrier despite insufficient energy .however the notion of tunneling is far more general in the sense that the barriers can arise in the phase space . in other wordsthe barriers are dynamical and arise due to the existence of one or several conserved quantities . barriers due to exactly conserved quantities _i.e. , _ constants of the motion are usually easy to identify .a special case is that of a particle in a one dimensional double well potential wherein the barrier is purely due to the conserved energy . on the other hand it is possible , and frequently observed , that the dynamics of the system can result in one or more approximate constants of the motion which can manifest as barriers .the term approximate refers to the fact that the relevant quantities , although strictly nonconserved , are constant over timescales that are long compared to certain system timescales of interest .such approximate dynamical barriers are not easy to identify and in combination with the other exact constants of the motion can give rise to fairly complex dynamics . as usual one anticipates that the dynamical barriers will act as bottlenecks for the classical dynamics whereas quantum dynamics will ` break free ' due to tunneling through the dynamical barriers _i.e. , _ _ dynamical tunneling_. however , as emphasized in this review , the situation is not necessarily that straightforward since the mere existence of a finite dynamical barrier does not guarantee that dynamical tunneling will occur .this is especially true in multidimensions since a variety of other dynamical effects can localize the quantum dynamics . in this sensethe mechanism of dynamical tunneling is far more subtle as compared to the mechanism of tunneling through nondynamical barriers .the distinction between energetic and dynamical barriers can be illustrated by considering the two - dimensional hamiltonian which represents a one - dimensional double well ( in the degree of freedom ) coupled to a harmonic oscillator ( in the degree of freedom ) . for energies below the barrier height , the poincar surface of section in fig .[ fig1 ] shows two disconnected regions in the phase space .this reflects the fact that the corresponding isoenergetic surfaces are also disconnected .thus one speaks of an _energetic barrier _ separating the left and the right wells . on the other hand fig .[ fig1 ] shows that for the poincar surface of section again exhibits two regular regions related to the motion in the left and right wells despite the absence of the energetic barrier . in other wordsthe two regions are now part of the same singly connected energy surface but the left and the right regular regions are separated by _dynamical barriers_. the various nonlinear resonances and the chaotic region seen in fig . [ fig1 ]should be contrasted with the phase space of the one - dimensional double well alone which is integrable . later in this review exampleswill be shown wherein there are only dynamical barriers and no static potential barriers .note that in higher dimensions , see discussions towards the end of this secion , one does not even have the luxury of visualizing the global phase space , as in fig .[ fig1 ] , let alone identifying the dynamical barriers ! 
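a poincaré surface of section like the one just described can be generated with a few dozen lines of code . the sketch below ( python ) does not use the hamiltonian or parameter values of the cited model ; it assumes a generic quartic double well in x coupled to a harmonic mode in y , h = px^2/2 + py^2/2 - a x^2 + b x^4 + omega^2 y^2/2 + eps x^2 y , with hypothetical coefficients chosen only for illustration , and records ( x , px ) each time a trajectory crosses the y = 0 plane with py > 0 at fixed total energy .

```python
import numpy as np

# Hypothetical stand-in for the coupled system discussed above; this is NOT
# the Hamiltonian or parameter set of the cited reference.  Assumed model:
#   H = px^2/2 + py^2/2 - A*x^2 + B*x^4 + 0.5*OMEGA^2*y^2 + EPS*x^2*y
A, B, OMEGA, EPS = 0.5, 0.25, 1.1, 0.10   # illustrative values only
E = 1.0                                    # total energy, above the static barrier

def potential(x, y):
    return -A * x**2 + B * x**4 + 0.5 * OMEGA**2 * y**2 + EPS * x**2 * y

def force(x, y):
    fx = 2.0 * A * x - 4.0 * B * x**3 - 2.0 * EPS * x * y
    fy = -OMEGA**2 * y - EPS * x**2
    return fx, fy

def section(x0, px0, dt=0.01, ncross=200, max_steps=500_000):
    """Return (x, px) at upward crossings of the y = 0 plane for one trajectory."""
    py2 = 2.0 * (E - potential(x0, 0.0)) - px0**2
    if py2 < 0.0:
        return []                          # initial condition not on the energy shell
    x, y, px, py = x0, 0.0, px0, np.sqrt(py2)
    fx, fy = force(x, y)
    pts = []
    for _ in range(max_steps):
        if len(pts) >= ncross:
            break
        y_old = y
        px += 0.5 * dt * fx                # velocity-Verlet step
        py += 0.5 * dt * fy
        x += dt * px
        y += dt * py
        fx, fy = force(x, y)
        px += 0.5 * dt * fx
        py += 0.5 * dt * fy
        if y_old < 0.0 <= y and py > 0.0:  # crossing located only to within one step
            pts.append((x, px))
    return pts

# a few initial conditions: one in each well and some that straddle the barrier
points = []
for x0, px0 in [(-1.3, 0.0), (1.3, 0.0), (0.0, 0.4), (0.0, 1.0), (-0.5, 0.8)]:
    points += section(x0, px0)
print(len(points), "section points collected; scatter-plot (x, px) to view the section")
```

plotting the collected ( x , px ) points for a handful of initial conditions gives a section qualitatively similar to fig . [ fig1 ] , although the detailed mix of regular islands , resonances and chaos depends on the actual model and parameters used in the reference .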
[Figure 1: Poincaré surfaces of section for the coupled double well-harmonic oscillator Hamiltonian discussed in the text. The parameter values and the resulting static barrier height are those of the original figure. The top panel shows the phase space for a total energy below the static barrier height, and one observes two disconnected regular regions separated by an energetic barrier. The bottom panel corresponds to an energy above the static barrier height and shows the left and right regular regions separated now only by dynamical barriers. Model system and parameters taken from ref.]
In the case of multidimensional tunneling through potential barriers it is now well established that valuable insights can be gained from the phase space perspective. Note that tunneling need not be associated with the transfer of an atom or a group of atoms from one site to another. One can have, for instance, vibrational excitations transferring from a particular mode of a molecule to a completely different mode. Examples like hydrogen atom transfer and electron transfer belong to the former class, whereas the phenomenon of intramolecular vibrational energy redistribution (IVR) is associated with the latter class. In recent years dynamical tunneling has been realized in a number of physical systems; examples include driven atoms, microwave and optical cavities, Bose-Einstein condensates, and quantum dots. Thinking of dynamical tunneling as a close cousin of "above-barrier reflection" (cf. the example shown in fig. [fig1]), a recent paper by Giese et al. suggests the importance of dynamical tunneling for understanding the eigenstates of the dichlorotropolone molecule. This review is concerned with the molecular manifestations of dynamical tunneling, and specifically with the relevance of dynamical tunneling to IVR and the corresponding signatures in molecular spectra. One of the aims is to reveal the detailed phase space description of dynamical tunneling. This, in turn, leads to the identification of the key structures in phase space and a universal mechanism for dynamical tunneling in all of the systems mentioned above. In IVR studies, which are of paramount interest to chemical dynamics, one is interested in the fate of an initial nonstationary excitation in terms of timescales, pathways and destinations. Will the initial energy "hot spot" redistribute throughout the molecule statistically? Alternatively, to borrow a term from a recent review, is the observed statisticality only "skin-deep"? The former viewpoint is at the heart of one of the most useful theories for reaction rates - the RRKM (Rice-Ramsperger-Kassel-Marcus) theory. On the other hand, recent studies seem to be leaning more towards the latter viewpoint. As an aside it is worth mentioning that IVR in molecules is essentially the FPU (Fermi-Pasta-Ulam) problem, only now one needs to worry about a multidimensional network of coupled nonlinear oscillators.
in a broad sense, the hope is that a mechanistic understanding of ivr will yield important insights into mode - specific chemistry and the coherent control of reactions .consequently , substantial experimental and theoretical efforts have been directed towards understanding ivr in both time and frequency domains .most of the studies , spanning many decades , have focused on the gas phase .more recently , researchers have studied ivr in the condensed phase and it appears that the gas phase studies provide a useful starting point .a detailed introduction to the literature on ivr is beyond the scope of the current article .a brief description is provided in the next section and the reader is refered to the literature below for a comprehensive account of the recent advances .the review by nesbitt and field gives an excellent introduction to the literature .the review by gruebele highlights the recent advances and the possibility of controlling ivr .the topic of molecular energy flow in solutions has been recently reviewed by a"smann , kling and abel .tunneling leads to quantum mixing between states localized in classically disconnected regions of the phase space . in thisgeneral setting barriers arise due to exactly or even approximately conserved quantities .thus , for example , it is possible for two or more localized quantum states to mix with each other despite the absence of any energetic barriers separating them .this has significant consequences for ivr in isolated molecules since energy can flow from an initially excited mode to other , qualitatively different , modes ; classically one would predict very little to no energy flow .hence it is appropriate to associate dynamical tunneling with a purely quantum mechanism for energy flow in molecules . 
in order to have detailed insights into ivr pathways and ratesit is necessary to study both the classical and quantum routes .the division is artificial since both mechanisms coexist and compete with each other .however , from the standpoint of control of ivr deciding the importance of one route over the other can be useful .in molecular systems the dynamical barriers to ivr are related to the existence of the so called _ polyad numbers_ .usually the polyad numbers are quasiconserved quantities and act as bottlenecks for energy flow between different polyads .dynamical tunneling , on the otherhand , could lead to interpolyad energy flow .historically the time scales for ivr via dynamical tunneling have been thought to be of the order of hundreds of picoseconds .however recent advances in our understanding of dynamical tunneling suggest that the timescales could be much smaller in the mixed phase space regimes .this is mainly due to the observations that chaos in the underlying phase space can enhance the tunneling by several orders of magnitude .conversely , very early on it was thought that the extremely long timescales would allow for mode - specific chemistry .presumably the chaotic enhancement would spoil the mode specificty and hence render the initially prepared state unstable .thus it is important to understand the effect of chaos on dynamical tunneling in order to suppress the enhancement .obtaining detailed insights into the phenomenon of dynamical tunneling and the resulting consequences for ivr requires us to address several questions .how does one identify the barriers ?can one explicitly construct the dynamical barriers for a given system ?what is the role of the various phase space structures like resonances , partial barriers due to broken separatrices and cantori , and chaos ?which , if any , of the phase space structures provide for a universal description and mechanism of dynamical tunneling ?finally the most important question : do experiments encode the sensitivity of dynamical tunneling to the nature of the phase space ?this review is concerned with addressing most of the questions posed above in a molecular context . due to the nature of the issues involved considerablework has been done by the nonlinear physics community and in this work some of the recent developements will be highlighted since they can have significant impact on the molecular systems as well .another reason for bringing together developements in different fields has to do with the observation that the literature on dynamical tunneling and its consequences are , curiuosly enough , disjoint with hardly any overlap between the molecular and the ` non ' molecular areas . 
at this stageit is important and appropriate , given the viewpoint adopted in this review , to highlight certain crucial issues pertaining to the nature of the classical phase space for systems with several ( ) degrees of freedom .a significant portion of the present review is concerned with the phase space viewpoint of dynamical tunneling in systems with .the last section of the review discusses recent work on a model with three degrees of freedom .the sparsity of work in is not entirely surprising and parallels the situation that prevails in the classical - quantum correspondence studies of ivr in polyatomic molecules .indeed it will become clear from this review that classical phase space structures are , to a large extent , responsible for both classical and quantum mechanisms of ivr .thus , from a fundamental , and certainly from the molecular , standpoint it is highly desirable to understand the mechanism of classical phase space transport in systems with . in the early eighties the seminal work by mackay , meiss , and percival on transport in hamiltonian systems with motivated researchers in the chemical physics community to investigate the role of various phase space structures in ivr dynamics .in particular these studies helped in providing a deeper understanding of the dynamical origin of nonstatistical behaviour in molecules .however molecular systems typically have atleast and soon the need for a generalization to higher degrees of freedom was felt ; this task was quite difficult since the concept of transport across broken separatrices and cantori do not have a straightforward generalization in higher degrees of freedom .in addition tools like the poincar surface of section cease to be useful for visualizing the global phase space structures .wiggins provided one of the early generalizations based on the concept of normally hyperbolic invariant manifolds ( nhim ) which led to illuminating studies in order to elucidate the dynamical nature of the intramolecular bottlenecks for .although useful insights were gained from several studies the intramolecular bottlenecks could not be characterized at the same levels of detail as in the two degree of freedom cases .it is not feasible to review the various studies and their relation / consequences to the topic of this article .the reader is referred to the paper by gillilan and ezra for an introduction and to the monograph by wiggins for an exposition to the theory of nhims . following the initial studies , which were perhaps ahead of their time ,far fewer efforts were made for almost a decade but there has been a renewal of interest in the problem over the last few years with the nhims playing a crucial role .armed with the understanding that the notion of partial barriers , chaos , and resonances are very different in fresh insights on transition state and rrkm theories , and hence ivr , are beginning to emerge . to put the issues raised above in perspective for the current review note that there are barriers in phase space through which dynamical tunneling occurs and at the same time there are barriers that can also lead to the localization of the quantum dynamics .the competition between dynamical tunneling and dynamical localization is already important for systems with and this will be briefly discussed in the later sections . 
towards the end of the review some work on a three degrees of freedom model are presented .however , as mentioned above , the ideas are necessarily preliminary since to date one does not have a detailed understanding of either the classical barriers to transport nor the dynamical barriers which result in dynamical tunneling .naturally , the competition between them is an open problem .we begin with a brief overview of ivr and the explicit connections to dynamical tunneling .consider a molecule with atoms which has ( for linear molecules ) vibrational modes .dynamical studies , classical and/or quantum , require the -dimensional born - oppenheimer potential energy surface in terms of some convenient generalized coordinates and their choice is a crucial issue .obtaining the global by solving the schrdinger equation is a formidable task even for mid - sized molecules . traditionally , therefore , a perturbative viewpoint is adopted which has enjoyed a considerable degree of success .for example , near the bottom of the well and for low levels of excitations the approximation of vibrations by uncoupled harmonic normal modes is sufficient and the molecular vibrational hamiltonian can be expressed perturbatively as : in the above are the dimensionless vibrational momenta and normal mode coordinates respectively .the deviations from the harmonic limit are captured by the anharmonic terms with strengths .such small anharmonicities account for the relatively weak overtone and combination transitions observed in experimental spectra .however with increasing energy the anharmonic terms become important and doubts arise regarding the appropriateness of a perturbative approach .the low energy normal modes get coupled and the very concept of a mode becomes ambiguous .there exists sufficient evidence for the appearance of new ` modes ' , unrelated to the normal modes , with increasing vibrational excitations .nevertheless detailed theoretical studies over the last couple of decades has shown that it is still possible to understand the vibrational dynamics via a suitably generalized perturbative approach - the canonical van - vleck perturbation theory ( cvpt) . the cvpt leads to an effective or spectroscopic hamiltonian which can be written down as where , and are the harmonic number , destruction and creation operators .the zeroth - order part _ i.e. , _ the dunham expansion is diagonal in the number representation . in the above expression with being the number of quanta in the j mode with degeneracy .the off - diagonal terms _i.e. , _ anharmonic resonances have the form with .these terms represent the couplings between the zeroth - order modes and are responsible for ivr starting from a general initial state .[ htbp ] is coupled to the various dark states sorted into tiers through perturbations . as an example this sketch indicates fast ivr between and states in whereas further flow of energy to states in the second tier is restricted due to the lack of couplings .nevertheless it is possible that vibrational superexchange can couple the first and second tiers and energy can eventually flow into the final tier ( grey ) over long timescales.,width=302 ] a few key points regarding the effective hamiltonians can be noted at this juncture .there are two routes to the effective hamiltonians . 
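The Hamiltonian expressions referred to in this paragraph did not survive the text extraction. For orientation, their generic forms - a schematic reconstruction rather than the exact equations of the original - can be written as follows, with omega_j the harmonic frequencies, x_jk the anharmonicities, g_j the degeneracies, n_j = a_j^dagger a_j, and Phi_m the resonance strengths; which resonant terms are retained is system specific.

\begin{align*}
  H &\simeq \frac{1}{2}\sum_j \omega_j \left(p_j^2 + q_j^2\right)
      + \sum_{j\le k\le l} \phi_{jkl}\, q_j q_k q_l + \cdots
      && \text{(normal-mode expansion)} \\
  H_{\mathrm{eff}} &= H_0 + \sum_{\mathbf m} V_{\mathbf m}, \qquad
  H_0 = \sum_j \omega_j\!\left(n_j + \tfrac{g_j}{2}\right)
      + \sum_{j\le k} x_{jk}\!\left(n_j + \tfrac{g_j}{2}\right)\!\left(n_k + \tfrac{g_k}{2}\right) + \cdots
      && \text{(Dunham expansion)} \\
  V_{\mathbf m} &= \Phi_{\mathbf m}\prod_j \left(a_j^{\dagger}\right)^{m_j}\left(a_j\right)^{\bar m_j}
      + \mathrm{h.c.}
      && \text{(anharmonic resonances)}
\end{align*}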
in the cvpt methodthe parameters of the effective hamiltonian are related to the original molecular parameters of .the other route , given the difficulties associated with determining a sufficiently accurate , is _ via _ high resolution spectroscopy .the parameters of eq.([specham ] ) are determined by fitting the experimental data on line positions and , to a lesser extent , intensities .the resulting effective hamiltonian is only a model and hence the parameters of the effective hamiltonian do not have any obvious relationship to the molecular parameters of .although the two routes to eq.([specham ] ) are very different , they complement one another and each has its own advantages and drawbacks .the reader is refered to the reviews by sibert and joyeux for a detailed discussion of the cvpt approach .the review by ishikawa _et al . _ on the experimental and theoretical studies of the hcp cph isomerization dynamics provides , amongst other things , a detailed comparison between the two routes .imagine preparing a specific initial state , say an eigenstate of , denoted as .theoretically one is not restricted to and very general class of initial states can be considered. however , time domain experiments with short laser pulses excite specific vibrational overtone or combination states , called as the zeroth - order bright states ( zobs ) , which approximately correspond to the eigenstates of .the rest of the optically inaccessible states are called as dark states . herethe zeroth - order eigenstates are partitioned as couple the bright state with the dark states leading to energy flow and impose a hierarchical coupling structure between the zobs and the dark states .thus one imagines , as shown in fig .[ fig2 ] , the dark states to be arranged in tiers , determined by the order of the , with an increasing density of states across the tiers .this implies that the the local density of states around is the important factor and experiments do point to a hierarchical ivr process .for example , callegari _ et al ._ have observed seven different time scales ranging from 100 femtoseconds to 2 nanoseconds for ivr out of the first ch stretch overtone of the benzene molecule .gruebele and coworkers have impressively modeled the ivr in many large molecules based on the hierarchical tiers concept . in a way the coupling imposes a tier structure on the ivr ;similar tiered ivr flow would have been observed if one had investigated the direct dynamics ensuing from the global .thus the cvpt and its classical analog help in unraveling the tier structure .nevertheless dominant classical and quantum energy flow routes still have to be identified for detailed mechanistic insights . in the time domainthe survival probability gives important information on the ivr process .the eigenstates of the full hamiltonian have been denoted by being the spectral intensities .the long time average of is known as the inverse participation ratio ( also called as the dilution factor in the spectroscopic community ) .essentially indicates the number of states that participate in the ivr dynamics out of .[ htbp ] to states on the energy shell .examples of such one - step and multistep superexchange paths in the state space are shown.,width=321 ] although the tier picture has proved to be useful in analysing ivr , recent studies emphasize the far greater utility in visualising the ivr dynamics of as a diffusion in the zeroth - order quantum number space or _ state space _ denoted by . 
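The quantities described in words above (survival probability, spectral intensities, dilution factor) are standard; since the symbols were lost in extraction, they are restated here for reference, with the eigenstates and eigenvalues of the full Hamiltonian denoted by |alpha> and E_alpha and the zeroth-order bright state by |b>.

P_b(t) \equiv \left|\langle b\,|\,b(t)\rangle\right|^{2}
      = \Big|\sum_{\alpha} p_{\alpha b}\, e^{-iE_{\alpha}t/\hbar}\Big|^{2},
\qquad p_{\alpha b} = |\langle \alpha\,|\,b\rangle|^{2},
\qquad
\sigma_b \equiv \lim_{T\to\infty}\frac{1}{T}\int_0^T P_b(t)\,dt = \sum_{\alpha} p_{\alpha b}^{2}

The dilution factor sigma_b is the inverse participation ratio, and 1/sigma_b estimates the effective number of eigenstates participating in the IVR dynamics out of |b>.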
the state space picture , shown schematically in fig .[ fig3 ] , highlights the local nature and the directionality of the energy flow due to the various anharmonic resonances . in the one - dimensional " tier scheme shown in fig .[ fig2 ] the anisotropic nature of the ivr dynamics is hard to discern .tiers in the state space are formed by zeroth - order states within a certain distance from the bright state and hence still organized by the order of the coupling resonances . in the long timelimit most states that participate in the ivr dynamics are confined , due to the quasi - microcanonical nature of the laser excitation , to an energy shell with being the bright state energy .the ivr dynamics occurs on a -dimensional subspace , refered to as the ivr `` manifold '' , of the state space due to the local nature of the couplings between and with .one of the key prediction of the state space model is that the survival probability exhibits a power law at intermediate times ^{-d/2 } \label{powerlaw}\ ] ] with , the number of vibrational modes of the molecule .thus the effective dimensionality of the ivr manifold in the state space can be fairly small and a large body of work in the recent years support the state space viewpoint .the effective dimensionality itself is crucially dependent on the extent to which classical and quantum mechanisms of ivr manifest themselves at specific energies .one possible interpretation of is as follows . if then the dynamics in the state space can be thought of as a normal diffusive process and thus ergodic over the state space .in such a limit for large ( molecules ) the survival can be well approximated by an exponential behaviour . for ivr dynamics is anisotropic and the dynamics is nonergodic with typically being nonintegral .note that the terms `` manifold '' and `` effective dimensionality '' are being used a bit loosely since one does not have a very clear idea of the topology of the ivr manifold as of yet .leitner and wolynes , building upon the earlier work of logan and wolynes , have provided criteria for vibrational state mixing and energy flow from the state space perspective . using a local random matrix approach to the hamiltonian in eq .( [ specham ] ) the rate of energy flow out of is given by with being a distance in the state space .the term represents the average effective coupling of to the states , of them with density , a distance away in state space .the extent of state mixing is characterized by the transition parameter and the transition between localized and extended states is located at .the key term is the effective coupling which involves both low and high order resonances .applications to several systems shows that the predictions based on eq .( [ lwrate ] ) and eq .( [ lwqet ] ) are both qualitatively and quantitatively accurate .the perspective of ivr being a random walk on an effective -dimensional manifold in the quantum number space is reasonable as long as direct anharmonic resonances exist which connect the bright state with the dark states .it is useful to emphasize again that the existence of direct anharmonic resonances in itself does not imply ergodicity over state space _ i.e. , _ the random walk need not be normal . what happens if , for certain bright states , there are no direct resonances available ?in other words , the coupling matrix elements for all . what would be the mechanism of ivr , if any , in such cases? 
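The intermediate-time power law quoted above lost its full form in extraction; a commonly used version in the state-space literature, which is presumably what eq. (powerlaw) expresses, is

P_b(t) \;\approx\; \sigma_b + \left(1-\sigma_b\right)\left[1 + \frac{2t}{\tau\, d_b}\right]^{-d_b/2}

with sigma_b the dilution factor, tau a characteristic IVR decay time, and d_b the effective dimensionality of the IVR manifold, bounded above by the number of vibrational modes 3N-6.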
examples of such states in fact correspond to overtone excitations which are typically prepared in experiments .the overtone states are also called as edge states from the state space viewpoint since most of the excitation is localized in one single mode of the molecule .thus , for example , in a system with four degrees of freedom a state would be called as an edge state whereas the state , corresponding to a combination state , is called as an interior state .the edge states , owing to their location in the state space , have fewer anharmonic resonances available for ivr as compared to the interior states .to some extent such a line of reasoning leads one to anticipate overtone excitations to have slower ivr when compared to the combination states .a concrete example comes from the theoretical work of holme and hutchinson wherein the dynamics of overtone excitations in a model system representing coupled ch - stretch and cch - bend interacting with an intense laser field was studied .they found that classical dynamics predicted no significant energy flow from the high overtone excitations .however the corresponding quantum calculations did indicate significant ivr . based on their studiesholme and hutchinson concluded that overtone absorption proceeds via dynamical tunneling on timescales of the order of a few nanoseconds .experimental evidence for the dynamical tunneling route to ivr was provided in a series of elegant works by the princeton group .the frequency domain experiments involved the measurement of ivr lifetimes of the acetylinic ch - stretching states in ( cx)*c * molecules with x , d and y , si . the homogeneous line widths , which are related to the rate of ivr out of the initial nonstationary state , arise due to the vibrational couplings between the ch - stretch and the various other vibrational modes of the molecule .surprisingly kerstel _ et al . _found extremely narrow linewidths of the order of 10 - 10 cm which translate to timescales of the order of thousands of vibrational periods of the ch - stretching vibration .thus the initially prepared ch - stretch excitation remains localized for extremely long times .such long ivr timescales were ascribed to the lack of strong / direct anharmonic resonances coupling the ch - stretch with the rest of the molecular vibrations .the lack of resonances , despite a substantial density of states , combined with ivr timescales of several nanoseconds implies that the mechanism of energy flow is inherently quantum .another study by gambogi _et al . _ examines the possibility of long range resonant vibrational energy exchange between equivalent ch - stretches in ch(c) .there have been several experiments that indicate multiquantum zeroth - order state mixings due to extremely weak couplings of the order of 10 - 10 cm in highly vibrationally excited molecules .since the early work by kerstel _other experimental studies have revealed the existence of the dynamical tunneling mechanism for ivr in large organic molecules .for instance in a recent work callegari _performed experimental and theoretical studies on the ivr dynamics in pyrrole ( c ) and 1,2,3-triazine ( c ) . specifically , they chose the initial bright states to be the edge state ch - stretch for pyrrole and the interior state corresponding to the ch stretching - ring breathing combination for triazine . 
in both cases very narrow ivr features , similar to the observations by kerstel __ , were seen in the spectrum .this pointed towards an important role of the off - resonant states to the observed and calculated narrow ivr features .the analysis by callegari _et al . _ reveals that near the ivr threshold it is reasonable to expect such highly off - resonant coupling mechanisms to be operative .another example for the possible existence of the off - resonant mechanism comes from the experiment by portonov , rosenwaks , and bar wherein the ivr dynamics ensuing from , , and ch - acetylinic stretch of 1-butyne are studied .it was found that the homogeneous linewidth of the state , 0.5 cm , is about a factor of two smaller than the widths of and .it is important to note that the involvement of such off - resonant states can occur at any stage of the ivr process from a tier perspective .in other words it is possible that the bright state undergoes fast initial ivr due to strong anharmonic resonances with states in the first tier but the subsequent energy flow might be extremely slow .several experiments point towards the existence of such unusually long secondary ivr time scales which might have profound consequences for the interpretation of the high resolution spectra .boyarkin and rizzo demonstrated the slow secondary time scales in their experiments on the ivr from the alkyl ch - stretch overtones of cf . upon excitation of the ch - stretchfast ivr occurs to the ch stretch - bend combination states on femtosecond timescales .however the vibrational energy remains localized in the mixed stretch - bend states and flows out to the rest of the molecule on timescales of the order of hundred picoseconds .similar observations were made by lubich __ on the ivr dynamics of oh - stretch overtone states in ch .the interpretation of the above experimental results as due to dynamical tunneling is motivated by the theoretical analysis by stuchebrukhov and marcus in a landmark paper . in the initial work they explained the narrow features , observed in the experiments by kerstel _, by invoking the coupling of the bright states with highly off - resonant gateway states which , in turn , couple back to states that are nearly isoenergetic with the bright state . in fig .[ fig3 ] an example of the indirect coupling via a state which is off the energy shell in the state space is illustrated .more importantly stuchebrukhov and marcus argued that the mechanism , described a little later in this review and anticipated earlier by hutchinson , sibert , and hynes , involving high order coupling chains is essentially a form of generalized tunneling .hence one imagines dynamical barriers which act to prevent classical flow of energy out of the initial ch - stretch whereas quantum dynamical tunneling does lead to energy flow , albeit very slowly .similar arguments had been put forward by hutchinson demonstrating the importance of the off - resonant coupling leading to the mixing of nearly degenerate high energy zeroth - order states in cyanoacetylene hcc . 
it was argued that such purely quantum relaxation pathway would explain the observed broadening of the overtone bands in various substituted acetylenes .the above examples highlight the possible connections between ivr and dynamical tunneling .however it was realized very early on that observations of local mode doublets in molecular spectra and the clustering of rotational energy sublevels with high angular momenta can also be associated with dynamical tunneling .indeed much of our understanding of the phenomenon of dynamical tunneling in the context of molecular spectra comes from these early works .one of the key feature of the analysis was the focus on a phase space description of dynamical tunneling _i.e. , _ identifying the disconnected regions in the phase space and hence classical structures which could be identified as dynamical barriers . thus the mechanism of dynamical tunneling , confirmed by computing the splittings based on specific phase space structures , could be understood in exquisite detail .however such detailed phase space analysis are exceedingly difficult , if not impossible , for large molecules .for example in the case of the ( cx) system there are vibrational degrees of freedom .any attempt to answer the questions on the origin and location of the dynamical barriers in the phase space is seemingly a futile excercise .it is therefore not surprising that stuchebrukhov and marcus , although aware of the phase space perspective , provided a purely quantum explanation for the ivr in terms of the chain of off - resonant states .thus one can not help but ask if the phase space viewpoint is really needed .the answer is affirmative and many reasons can be provided that justify the need and utility of a phase space viewpoint .first , as noted by heller , the concept of tunneling is meaningless without the classical mechanics of the system as a baseline . in other words , in order to label a process as purely quantum it is imperative that one establishes the absence of the process within classical dynamics .note that for dynamical tunneling it might not be easy to _ a priori _ make such a distinction .there are mechanisms of classical transport , especially in systems with three or more degrees of freedom , involving long timescales which might give an impression of a tunneling process .it is certainly not possible to differentiate between classically allowed and forbidden mechanisms by studying the spectral intensity and splitting patterns alone due to the nontrivial role played by the stochasticity in the classical phase space .secondly , the quantum explanation is in essence a method to calculate the contribution of dynamical tunneling to the ivr rates in polyatomic molecules . in order to have a complete qualitative picture of dynamical tunnelingit is necessary , as emphasised by stuchebrukhov and marcus , to find an explicit form of the effective dynamical potential in the state space of the molecule .third reason , mainly due to the important insights gained from recent studies , has to do with the sensitivity of dynamical tunneling to the stochasticity in the phase space .chaos can enhance as well as supress dynamical tunneling and for large molecules the phase space can be mixed regular - chaotic even at energies corresponding to the first or second overtone levels . 
undoubtedly the signatures should be present in the splitting and intensity patterns in a high resolution spectrum .however the precise mechanism which governs the substantial enhancement / suppression of dynamical tunneling , and perhaps ivr rates is not yet clear .nevertheless recent developements indicate that a phase space approach is capable of providing both qualitative and quantitative insights into the problem .a final argument in favor of a phase space perspective needs to be mentioned .it might appear that the dimensionality issue is not all that restrictive for the quantum studies _ a la _ stuchebrukhov and marcus .however it will become clear from the discussions in the later sections that in the case of large systems even calculating , for example local mode splittings via the minimal perturbative expression in eq .( [ minimal ] ) can be prohibitively difficult .one might argue that a clever basis would reduce the number of perturbative terms that need to be considered or alternatively one can look for and formulate criteria that would allow for a reliable estimate of the splitting . unfortunately , _ a priori _ knowledge of the clever basis or the optimal set of terms to be retained in eq .( [ minimal ] ) implies a certain level of insight into the dynamics of the system .the main theme of this article is to convey the message that such insights truly originate from the classical phase space viewpoint . with the above reasons in mind the next section provides brief reviews of the earlier approaches to dynamical tunneling from the phase space perspectivethis will then set the stage to discuss the more recent advances and the issues involved in the molecular context .in a series of pioneering papers , nearly a quarter of a century ago , lawton and child showed that the experimentally observed local mode doublets in h could be associated with a generalized tunneling in the momentum space . in molecular spectroscopyit was appreciated from very early on that highly excited spectra associated with x - h stretching vibrations are better described in terms of local modes rather than the conventional normal modes . in a zeroth - order descriptionthis corresponds to very weakly coupled , if not uncoupled , anharmonic oscillators .every anharmonic oscillator , modelled by an appropriate morse function , represents a specific local stretching mode of the molecule .the central question that lawton and child asked was that to what extent are the molecular vibrations actually localized in the individual bonds ( local modes ) ? 
analyzing the classical dynamics on the sorbie - murrell potential energy surface for h , focusing on the two local o - h stretches , it was found that a clear distinction between normal mode and local mode behaviour could be observed in the phase space .the classical phase space at appropriate energies revealed two equivalent but classically disconnected regions as a signature of local mode dynamics .lawton and child immediately noted the topological similarity between their two dimensional poincar surface of section and the phase space of a one dimensional symmetric double well potential .however they also noted that the barrier was in momentum space and hence the lifting of the local mode degeneracy was due to a generalized tunneling in the momentum space .detailed classical , quantum , and semiclassical investigations of the system allowed lawton and child to provide an approximate formula for the splitting between two symmetry related local mode states as : the variables correspond to the mass - weighted momenta and coordinates .it is noteworthy that lawton and child realized , correctly , that the frequency factor should be evaluated at a fixed total quantum number .however the choice of the tunneling path proved to be more difficult . eventually was taken to be a one dimensional , nondynamical path in the two dimensional coordinate space of the symmetric and antisymmetric normal modes .although eq .( [ lcsplit ] ) correctly predicts the trend of decreasing for increasing and fixed , the origin and properties of the dynamical barrier remained unclear .[ htbp ] space representation of the even state ( ) is shown . in ( b ) and ( c ) the linear combination are shown .right below the configuration space representations the corresponding phase space husimi representations in the poincar surface of section are shown .the rightmost figure on the top panel shows the dynamical tunneling occuring when starting with the initial nonstationary state .the period of the oscillations is consistent with the splitting .the rightmost figure in the bottom panel shows the surface of section.,width=340 ] the analysis by lawton and child , although specific to the vibrational dynamics of h , suggested that the phenomenon of dynamical tunneling could be more general .this was confirmed in an influential paper by davis and heller wherein it was argued that dynamical tunneling could have significant effects on bound states of polyatomic molecules .incidentally , the word _ dynamical tunneling _ was first introduced by davis and heller .this work is remarkable in many respects and provided a fairly detailed correspondence between dynamical tunneling and the structure of the underlying classical phase space for the hamiltonian .specifically , the analysis was done on the following two - dimensional potential : the system has a discrete symmetry . for the specific choice of parameter values , , and dissociation energy is equal to and the potential supports about bound states . at low energiesno doublets are found . however above a certain energy , despite the lack of an energetic barrier , near - degenerate eigenstates were observed with small splittings . in analogy with the symmetricdouble well system , linear combinations yielded states localized in the configuration space .one such example is shown in fig .[ fig4 ] . 
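The caption's remark that the oscillation period is consistent with the splitting is the standard two-state relation; it is restated here since the corresponding formulas were stripped. With |+> and |-> the near-degenerate doublet and Delta E its splitting,

|L\rangle,\,|R\rangle = \tfrac{1}{\sqrt 2}\left(|+\rangle \pm |-\rangle\right),
\qquad
P_{L\to R}(t) = \sin^{2}\!\left(\frac{\Delta E\, t}{2\hbar}\right)

so a state prepared in one of the classically disconnected regions first appears completely in the other at t = pi*hbar/Delta E; the smaller the dynamical tunneling splitting, the longer the two-state oscillation period.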
in the same figure the probabilities and also shown confirming the two state scenario .the main issue once again had to do with the origin and nature of the barrier .davis and heller showed that the doublets could be associated with the formation of two classically disconnected regions in the phase space - very similar to the observations by lawton and child .the symmetric stretch periodic orbit , stable at low energies , becomes unstable leading to the topological change in the phase space ; a separatrix is created which separates the normal and local mode dynamics . in fig .[ fig5 ] the phase space are shown for increasing total energy and the creation of the separatrix can be clearly seen in the surface of section .the near - degenerate eigenstates correspond to the local mode regions of the phase space and several such pairs were identified upto the dissociation energy of the system .the corresponding phase space representation shown in fig .[ fig4 ] ( bottom panel ) confirms the localized nature of .thus a key structure in the phase space , a separatrix , arising due to a : resonance between the two modes is responsible for the dynamical tunneling .although davis and heller gave compelling arguments for the connections between phase space structures and dynamical tunneling , explicit determination of the dynamical barriers and the resulting splittings was not attempted .[ htbp ] section and the bottom panel shows the surface of section for increasing total energy 5.0 ( a ) , 9.0 ( b ) , and 14.0 ( c ) . note that the symmetric stretch periodic orbit ( ) becomes unstable around whereas the asymmetric stretch periodic orbit ( ) becomes unstable around .,width=321 ] the observation by davis and heller , implicily present in the work by lawton and child , regarding the importance of the : resonance to the dynamical tunneling was put to test in an elegant series of papers by sibert , reinhardt , and hynes .these authors investigated in great detail the classical and quantum dynamics of energy transfer between bonds in aba type triatomics and gave explicit expressions for the splittings .in particular they studied the model h hamiltonian : with and denoting the two oh bond stretches ( local modes ) and their corresponding momenta respectively .the bending motion was ignored and the stretches were modelled by morse oscillators ^{2}\ ] ] with being the oh bond dissociation energy . due to the equivalence of the oh stretches and the coupling strength is much smaller than .the authors studied the dynamics of initial states where are eigenstates of the unperturbed j morse oscillator . 
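The Morse form quoted above is incomplete in this extraction. The generic expressions being described are given below; the coupling term V_12 is left unspecified because the particular (kinetic and/or potential) coupling used in the original model is not reproduced here.

H = \sum_{j=1,2}\left[\frac{p_j^{2}}{2\mu} + D_e\left(1 - e^{-\alpha\,(r_j - r_e)}\right)^{2}\right]
    + \lambda\, V_{12}(r_1, r_2, p_1, p_2)

Here D_e is the OH bond dissociation energy, r_e the equilibrium bond length, and lambda a coupling strength small compared to the zeroth-order energy scales, consistent with the near-degenerate local-mode picture described in the text.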
due tothe coupling the initial states have nontrivial dynamics and thus mix with other zeroth - order states .an initial state implies a certain energy distribution in the molecule and the nontrivial time dependence signals flow of energy in the molecule .later studies by hutchinson , sibert , and hynes showed that based on the dynamics the various initial states could be classified as normal modes or local modes .morever at the border between the two classes were initial states that led to a ` nonclassical ' energy flow .a representative example , using a different but equivalent system , for the three classes is shown in fig .the normal mode regime illustrated with shows complete energy flow between the modes on the time scale of a few vibrational periods - something that occurs classically as well .the local mode behaviour for the initial state exhibits complete energy transfer resulting in the state but the timescale is now hundreds of vibrational periods . despite the large timescales involved it is important to note that such a process does not happen classically .thus this is an example of dynamical tunneling .the so called nonclassical case illustrated with an initial state is also an example of dynamical tunneling .however the associated timescale is nowhere as large when compared to the local mode regime .the vibrational energy transfer process illustrated through the initial state and are examples of pure quantum routes to energy flow .hutchinson , sibert , and hynes proposed that the mechanism for this quantum energy flow can be understood as an indirect state - to - state flow of probability involving normal mode intermediate states .for instance , in the case involving the initial state the following mechanism was proposed : the reason for this indirect route has to do with the fact that estimating the splitting directly ( at first order ) yields a value which is more than an order of magnitude smaller than the the actual value .indeed note that the indirect route corresponds to a third order perturbation in the coupling and hence it is possible to estimate the contribution to the splitting as : with and are the zeroth - order energies associated with the states . once againa clear interpretation of the mechanism comes from taking a closer look at the underlying classical phase space . using the classical action - angle variables for morse oscillators it is possible to write the original hamiltonian in eq .( [ srhham ] ) as : with the harmonic frequency .the coupling term has infinitely many terms of the form which represent nonlinear resonances involving the nonlinear mode frequencies .arguments were provided for the dominance of the resonance and hence to an excellent approximation the form originates via a canonical transformation from with ,, , and .thus the key term in the above hamiltonian is the hindered rotor part denoted by and the analysis now focuses on a one dimensional hamiltonian due to the fact that is a conserved quantity .note that this is consistent with the observation by lawton and child regarding the evaluation of the frequency factor in eq .[ lcsplit ] .the classical variables are quantized as and ;thereby the rotor barrier is different for different states i.e. , _ . 
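Both the reduced rotor Hamiltonian referred to above and the semiclassical splitting discussed in the following paragraph lost their explicit forms. Schematically, and suppressing the system-specific coefficients, the single-resonance reduction has the structure

H \;\approx\; H_0(I_1, I_2) + 2V_{11}(I_1,I_2)\cos(\theta_1-\theta_2)
\;\;\longrightarrow\;\;
H_{\mathrm{rot}}(I_z,\psi;\,I) \;\approx\; \bar H_0(I) + A(I)\,I_z^{2} + 2V(I,I_z)\cos 2\psi

with I = I_1 + I_2 conserved, I_z = I_1 - I_2, and psi = (theta_1 - theta_2)/2 conjugate to I_z (other, equivalent conventions appear in the literature). The local-mode splitting that follows is then of the generic one-dimensional WKB form

\Delta E \;\approx\; \frac{\hbar\,\Omega}{\pi}\; e^{-\theta/\hbar},
\qquad
\theta = \Big|\,\mathrm{Im}\!\int_{\psi_-}^{\psi_+} I_z(\psi; E)\, d\psi\,\Big|

with Omega the frequency factor and psi_pm the (complex) turning points; the exact prefactor and sign conventions of the original expressions are not reproduced here.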
In this rotor representation, motion below and above the rotor barrier corresponds to normal and local mode behaviour respectively. Exploiting the fact that the rotor Hamiltonian is one dimensional, a semiclassical (WKB) expression for the local mode splitting can be written down in terms of a frequency factor and a tunneling action integral taken between the two turning points. Since the local modes are above the rotor barrier the turning points are purely imaginary, a crucial assumption being that the frequency factor can be treated as constant. Although such an assumption is not strictly valid, the estimates for the splittings agreed fairly well with the exact quantum values.
[Figure 6: Time dependence of the mode probabilities for the three representative initial states (original labels elided). The cases L and NC are examples of dynamical tunneling whereas case N occurs classically. Note the difference in the timescales for complete energy transfer between modes for L and NC. In the case of NC significant probability builds up in the normal mode state (symbols); very little probability buildup in the intermediate states (open and filled circles) is seen in case L.]
The analysis thus implicates a nonlinear resonance in the phase space for the dynamical tunneling between local modes, and several interesting observations were made. First, the rotor Hamiltonian explains the coupling scheme outlined in eq. ([hshpert]). Second, although there is a fairly strong coupling between the initial state and the intermediate state, no substantial probability build-up was noticed in the latter (cf. fig. [fig6]L). On the other hand substantial probability does accumulate, as shown in fig. [fig6](NC), in the intermediate state involved in the energy transfer process. It was also commented that the double well analogy provided by Davis and Heller is very different from the one that emerges from the hindered rotor analysis - local modes in the former case would be trapped below the barrier whereas they are above the barrier in the latter case. Finally, a very important observation was that small amounts of asymmetry between the two modes quenched one of the processes while having little effect on the other; the different effects of the asymmetry were one of the primary reasons for distinguishing between local and nonclassical states. Several questions arise at this stage. Is the rotor analysis applicable to the Lawton-Child and Davis-Heller systems? Why is there a difference in the interpretation of the local modes, and hence the mechanism of dynamical tunneling, between the phase space and hindered rotor pictures? Is it reasonable to neglect the higher order resonances solely on the basis of their relative strengths? If multiple resonances do exist, then one has the possibility of their overlap leading to chaos. Is it still possible to use the rotor formalism in the mixed phase space regimes? Some of these questions were answered in a work by Stefanski and Pollak, wherein a periodic orbit quantization technique was proposed to calculate the splittings. Stefanski and Pollak pointed out an important difference between the Davis-Heller and the Sibert-Reinhardt-Hynes Hamiltonians in terms of the periodic orbits at low energies. Nevertheless, they were able to show that a harmonic approximation to the tunneling action integral in eq. ([tunact]) yields an expression for the splitting which is identical to the one derived by assuming that the symmetric stretch (unstable) periodic orbit gives rise to a barrier separating the two local mode states - an
interpretation that davis and heller provided in their work .thus stefanski and pollak resolved an apparent paradox and emphasized the true phase space nature of dynamical tunneling . in any case it is worth noting that irrespective of the representation the key feature is the existence of a dynamical barrier separating two qualitatively different motions ; the corresponding structure in the phase space had to do with the appearance of a resonance .further support for the importance of the resonance to tunneling between tori in the phase space came from a beautiful analysis by ozorio de almeida which , as seen later , forms an important basis for the recent advances . a different viewpoint , using group theoretic arguments , based on the concept of dynamical symmetry breaking was advanced by kellman .adapting kellman s arguments , originally applied to the local mode spectrum of benzene , imagine placing one quantum in one of the local o - h stretching mode of h .classially the energy remains localized in this bond and thus there is a lowering or breaking of the point group symmetry of the molecule .however quantum mechanically , dynamical tunneling restores the broken symmetry .invoking the more general permutation - inversion group and its feasible subgroups provides insights into the pattern of local mode splittings and hence insights into the energy transfer pathways via dynamical tunneling .the dominance of a pathway of course can not be established within a group theoretic approach alone and requires additional analysis .note that recently babyuk , wyatt , and frederick have reexamined the dynamical tunneling in the davis - heller and the coupled morse systems via the bohmian approach to quantum mechanics .analysing the relevant quantum trajectories babyuk _ et al ._ discovered that there were several regions , at different times , wherein the potential energy exceeds the total energy ( which includes the quantum potential ) . in this sense , locally , one has a picture that is similar to tunneling through a one dimensional potential barrier .interestingly such regions were associated with the so - called quasinodes which arise during the dynamical evolution of the density .therefore the dynamical nature of the barriers is clearly evident , but more work is needed to understand the origin and distributions of the quasinodes in a given system .needless to say , correlating the nature of the quasinodes to the underlying phase space structures would be a useful endeavour .the earlier works established the importance of the phase space perspective for dynamical tunneling using systems that possesed discrete symmetries and therefore implicitly invoked an analogy to symmetric double well models .that is not to say that the earlier studies presumed that dynamical tunneling would only occur in symmetric systems . indeeda careful study of the various papers reveal several insightful comments on asymmetric systems as well .however mechanistic details and quantitative estimates for dynamical tunneling rates were lacking .important contributions in this regard were made by stuchebrukhov , mehta , and marcus nearly a decade ago .the experiments that motivated these studies have been discussed earlier in the introduction . 
in this sectionthe key aspects of the mechanism are highlighted .the inspiration comes from the well known superexchange mechanism of long distance electron transfer in molecules .stuchebrukhov and marcus argued that since the initial , localized ( bright ) state is not directly coupled by anharmonic resonances to the other zeroth - order states it is necessary to invoke off - resonant virtual states to explain the sluggish flow of energy .specifically , it was noted that at any given time very little probability accumulates in the virtual states . in this sensethe situation is very similar to the mechanism proposed by hutchinson , sibert , and hynes as illustrated by eq .( [ hshpert ] ) .however stuchebrukhov and marcus extended the mechanism , called as vibrational superexchange , to explain the flow of energy between inequivalent bonds in the molecule and noted the surprising accuracy despite the large number of virtual transitions involved in the process .as noted earlier , hutchinson s work on state mixing in cyanoacetylene also illustrated the flow of energy between inequivalent modes . in order to illustrate the essential ideaconsider the hindered rotor hamiltonian ( cf .( [ hindrot ] ) ) : where we have denoted .the free rotor energies and the associated eigenstates are known to be the perturbation to the free rotor connects states that differ by two rotational quanta _i.e. , _ with . clearly the local mode states , and with are not directly connected by the perturbation .nevertheless the local mode states can be coupled through a sequence of intermediate virtual states with quantum numbers .the effective coupling matrix element can be obtained via a standard application of high - order perturbation theory involving the sequence note that the polyad quantum number in the above sequence is fixed and is consistent with the single resonance approximation .in addition must be satisfied for the perturbation theory to be valid .thus the splitting between the symmetry related local mode states is given by .interestingly , stuchebrukhov and marcus showed that could be derived by analysing the semiclassical action integral with and being the minimum classical value of the momentum of the rotor excited above the barrier .this suggests that the high - order perturbation theory , if valid , is the correct approach to calculating dynamical tunneling splittings in multidimensions .an additional consequence is that the nonclassical mechanism of ivr is equivalent to dynamical tunneling as opposed to the earlier suggestion of an activated barrier crossing . 
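The chain-of-virtual-states estimate described above can be checked numerically on a minimal hindered-rotor model. The sketch below assumes a free-rotor spectrum E_m = B m^2 and a cos(2 phi) perturbation of strength V (so that <m +/- 2|H'|m> = V/2); the values of B, V, and the chosen local-mode quantum number are illustrative and do not correspond to any particular molecule.

import numpy as np

# Hindered rotor H = B*J^2 + V*cos(2*phi) in the free-rotor basis |m>,
# mimicking the single-resonance (polyad-conserving) reduction in the text.
B, V = 1.0, 0.4        # illustrative; perturbative regime needs V/2 << level gaps
m_loc = 4              # the "local mode" pair |+m_loc> and |-m_loc>
m_max = 30             # basis truncation

ms = np.arange(-m_max, m_max + 1)
H = np.diag(B * ms.astype(float) ** 2)
for i in range(len(ms) - 2):
    H[i, i + 2] = H[i + 2, i] = V / 2.0   # <m+2| V cos(2 phi) |m> = V/2

# Exact splitting: the eigenstates come in even/odd pairs; take the two
# eigenvalues closest to the unperturbed energy of the degenerate pair.
evals = np.linalg.eigvalsh(H)
E0 = B * m_loc ** 2
idx = np.argsort(np.abs(evals - E0))[:2]
exact_split = abs(evals[idx[0]] - evals[idx[1]])

# Vibrational-superexchange estimate: a single chain of off-resonant virtual
# states |m_loc-2>, |m_loc-4>, ..., |-m_loc+2> connects the degenerate pair.
numerator = (V / 2.0) ** m_loc                      # m_loc steps of Delta m = -2
intermediates = [m_loc - 2 * k for k in range(1, m_loc)]
denominator = np.prod([E0 - B * mi ** 2 for mi in intermediates])
split_pt = 2.0 * abs(numerator / denominator)

print(f"exact splitting        : {exact_split:.3e}")
print(f"superexchange estimate : {split_pt:.3e}")

With these illustrative parameters the two numbers should agree closely, illustrating why the high-order perturbative estimate works well in the near-integrable regime; making V larger, or bringing other levels into resonance, degrades the agreement, which is precisely the regime where the phase space viewpoint discussed below becomes essential.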
as a cautionary noteit must be mentioned that the above statements are based on insights afforded by near - integrable classical dynamics with two degrees of freedom .although the example above corresponds to a symmetric case the arguments are fairly general .in fact the vibrational superexchange mechanism is appropriate for describing quantum pathways for ivr in molecules .for example consider the acetylinic ch - stretch ( ) excitations in propyne , h .experiments by lehmann , scoles , and coworkers indicate that the overtone state has a faster ivr rate as compared to the nearly degenerate combination state .similarly go , cronin , and perry in their study found evidence for a larger number of perturbers for the state than for the state .the spectrum corresponding to both the first and the second overtone states implied a lack of direct low - order fermi resonances .it was shown that the slow ivr out of the and states of propyne can be explained and understood via the vibrational superexchange mechanism .there is little doubt that the vibrational superexchange mechanism , as long as one is within the perturbative regimes , is applicable to fairly large molecules .however several issues still remain unclear .the main issue has to do with the effective coupling between the initial bright state and a nearly degenerate zeroth - order dark state in the multidimensional state space . in the case of a lone anharmonic resonancethere is only a single off - resonant sequence of states connecting and as in eq .( [ coup1dchain ] ) .this sequence or chain of states translates to a path in the state space .note that the ` paths ' being refered to here are nondynamical , although it might be possible to provide a dynamical interpretation by analysing appropriate discrete action functionals . in generalthere are several anharmonic resonances of different orders and in such instances the number of state space paths can become very large and the issue of the relative importance of one path over the other arises .for instance it could very well be the case that and can be connected by a path with many steps involving low order resonances and also by a path involving few steps using higher order resonances .it is not _ a priori _ clear as to which path should be considered .an obvious choice is to use some perturbative criteria to decide between the many paths .such a procedure was used by stuchebrukhov and marcus to create the tiers in their analysis .more recently pearman and gruebele have used the perturbative criteria to estimate the importance of direct high / low order couplings and low order coupling chains to the ivr dynamics in the state space . thus consider the following possible coupling chains , shown in fig .[ fig3 ] , assuming that the states and are separated by quanta in the state space _ i.e. , _ note that is the distance between and in the state space and thus identical to in eq .( [ lwqet ] ) . in the above the first equation indicates a direct coupling between the states of interest by an anharmonic resonance coupling of order . in the second case the couplingis mediated by one intermediate state involving two anharmonic resonances and with .each step of a given state space path , coupling two states and at order , is weighted by the perturbative term with and . 
a specific path in the state spaceis then associated with the product of the weightings for each step along the path .for example the state space path above involving two intermediate states is associated with the term .it is not hard to see that such products of correspond to the terms contributing to the effective coupling as shown in eq .( [ effcouphrh ] ) and eq .( [ effcoupsm ] ) .the various terms also contribute to the effective coupling ( cf .( [ lwqet ] ) ) in the leitner - wolynes criteria for the transition from localized to extended states .the crucial issue is wether or not one can identify dominant paths , equivalently the key intermediate states , based on the perturbative criteria alone . in a rigorous sensethe answer is negative since such a criteria ignores the dynamics .complications can also arise due to the fact that each segment of the path contributes with a phase .furthermore one would like to construct the explicit dynamical barriers in terms of the molecular parameters and conserved or quasi - conserved quantities .the observation that the superexchange can be derived from a semiclassical action integral provides a clue to some of the issues .however an explicit demonstration of such a correspondence has been provided only in the single resonance ( hindered rotor ) case . in the multidimensional casethe multiplicity of paths obscures this correspondence .the next few sections highlight the recent advances which show that clear answers to the various questions raised above come from viewing the phenomenon of dynamical tunneling in the most natural representation - the classical phase space .nearly a decade ago heller wrote a brief review on dynamical tunneling and its spectral consequences which was motivated in large part by the work of stuchebrukhov and marcus .focusing on systems with two degrees of freedom in the near - integrable regimes , characteristic of polyatomic molecules at low energies , heller argued that the - cm _ broadening of lines is due to dynamical tunneling between remote regions of phase space facilitated by distant resonances_. 
This is an intriguing statement which could perhaps be interpreted in many different ways. Moreover, the meaning of the words remote and distant is not immediately clear and is fraught with conceptual difficulties in a multidimensional phase space setting. In essence Heller's statement, more appropriately called a conjecture, is an effort to provide a phase space picture of the superexchange mixing between two or more widely separated zeroth-order states in the state space. Some of the examples in the later part of the present review, hopefully, demonstrate that the conjecture is reasonable. Interestingly, though, in the same review it is mentioned that in the presence of widespread chaos the issue of tunneling as a mechanism for IVR is of doubtful utility. Similar sentiments were echoed by Stuchebrukhov and coworkers who, in the case of high dimensional phase spaces, envisaged partial rupturing of the invariant tori and hence domination of IVR by chaotic diffusion rather than dynamical tunneling. This too has to be considered a conjecture at the present time, since the timescales and the competition between chaotic diffusion (classically allowed) and dynamical tunneling (classically forbidden) in systems with three or more degrees of freedom have not been studied in any detail. The issues involved are subtle, and extrapolating the insights from studies on two degrees of freedom to higher degrees of freedom is incorrect. For instance, in three and higher degrees of freedom the invariant tori, ruptured or not, do not have the right dimensionality to act as barriers to inhibit diffusion. Is it then possible that classical chaos could assist or inhibit dynamical tunneling? Several studies carried out from the early nineties to the present time point to an important role played by chaos in the process of dynamical tunneling. We start with a brief review of these studies, followed by very detailed investigations into the role played by the nonlinear resonances. Finally, some of the very recent work is highlighted which suggests that the combination of resonances and chaos in the phase space can yet play an important role in IVR.

One of the first studies on the possible influence of chaos on tunneling was made by Lin and Ballentine. These authors investigated the tunneling dynamics of a driven double well system described by the Hamiltonian of Eq. ([lbham]), which has a discrete symmetry. In the absence of the monochromatic field the system is integrable and the phase space is identical to that of a one-dimensional symmetric double well potential. With the field on, however, the system is nonintegrable and the phase space is mixed regular-chaotic. Despite the mixed phase space, the discrete symmetry implies that any regular islands in the phase space will occur in symmetry-related pairs and the quantum Floquet states will occur as doublets with even/odd symmetries. A coherent state (wavepacket) localized on one of the regular islands will tunnel to the other, classically disconnected, symmetry-related island. Thus this is an example of dynamical tunneling. An important observation by Lin and Ballentine was that the tunneling was enhanced by orders of magnitude in regimes wherein significant chaos existed in the phase space; for example, on increasing the driving strength the ground-state tunneling time was found to drop by several orders of magnitude. Furthermore, strong fluctuations in the tunneling times were observed. In Fig. [fig7] a typical example of the fluctuations over several orders of magnitude is shown.
The crucial thing to note, however, is that the gross phase space structures are similar over the entire range shown in Fig. [fig7]. Thus, although the chaos seems to influence the tunneling, the mechanistic insights were lacking, i.e., the precise role of the stochasticity was not understood.

[Fig. [fig7] caption: Tunneling time fluctuations in the driven double well for fixed values of the parameters. The phase space in the upper left panel shows the two regular islands embedded in the chaotic sea; the initial state corresponds to a coherent state localized on the left regular island (indicated by an arrow). The three phase space strobe plots, at increasing values of the driving parameter indicated in (a), show the local structure near the left island. Despite the phase space being very similar, the tunneling times fluctuate over many orders of magnitude. Figure courtesy of Astha Sethi (unpublished work).]

Important insights came from the work by Grossmann et al., who showed that it is possible to suppress the tunneling in the driven double well by an appropriate choice of the field parameters. The explanation for such a coherent destruction of tunneling, and for the fluctuations, comes from analysing the Floquet level motions with the field parameters. Gomez Llorente and Plata provided perturbative estimates, within a two-level approximation, for the enhancement/suppression of tunneling. A little later Utermann, Dittrich, and Hänggi showed that there is a strong correlation between the splittings of the Floquet states and their overlaps with the chaotic part of the phase space. Breuer and Holthaus highlighted the role of classical phase space structures in the driven double well system. Subsequently, other works on a wide variety of driven systems established the sensitivity of tunneling to the classical stochasticity. An early discussion of dynamical tunneling and the influence of dissipation can be found in the work by Grobe and Haake on kicked tops. A comprehensive account of the various studies on driven systems can be found in the review by Grifoni and Hänggi. Such studies have provided, in recent times, important insights into the process of coherent control of quantum processes. The recent review by Gong and Brumer, for instance, discusses the issue of quantum control of classically chaotic systems in detail. Perhaps it is apt to highlight a historical fact mentioned by Gong and Brumer - coherent control emerged from two research groups engaged in studies of chaotic dynamics. In the Lin-Ballentine example above, the perturbation (applied field) not only increases the size of the chaotic region but also affects the dynamics of the unperturbed tunneling doublet. Hence the enhanced tunneling, relative to the unperturbed or the weakly perturbed case, cannot be immediately ascribed to the increased amount of chaos in the phase space.
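The Floquet analysis invoked in the studies above can be sketched numerically. The fragment below builds the one-period propagator of a driven double well on a position grid by symmetric split-operator stepping (with the field evaluated at the midpoint of each step, an approximation) and reads off the eigenphase splitting of the two Floquet states having the largest overlap with a packet placed in one well. All parameter values are hypothetical placeholders and are not those used by Lin and Ballentine.

```python
import numpy as np

# grid
N, L = 256, 16.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0*np.pi*np.fft.fftfreq(N, d=dx)
hbar = m = 1.0

# static double well V0 = a*x^4 - b*x^2 plus monochromatic driving (hypothetical values)
a, b = 0.1, 2.0
lam, omega = 0.5, 3.0
T = 2.0*np.pi/omega

def V(t):
    return a*x**4 - b*x**2 + lam*x*np.cos(omega*t)

# one-period Floquet operator via split-operator stepping of the identity matrix
nsteps = 512
dt = T/nsteps
kin_half = np.exp(-0.5j*hbar*k**2*dt/(2.0*m))   # half-step kinetic phase
U = np.eye(N, dtype=complex)
t = 0.0
for _ in range(nsteps):
    pot = np.exp(-1j*V(t + 0.5*dt)*dt/hbar)
    U = np.fft.ifft(kin_half[:, None]*np.fft.fft(U, axis=0), axis=0)
    U = pot[:, None]*U
    U = np.fft.ifft(kin_half[:, None]*np.fft.fft(U, axis=0), axis=0)
    t += dt

# Gaussian packet localized in the right well of the static potential
x0 = np.sqrt(b/(2.0*a))
g = np.exp(-(x - x0)**2)
g = g/np.linalg.norm(g)

evals, evecs = np.linalg.eig(U)
overlaps = np.abs(evecs.conj().T @ g)**2
i, j = np.argsort(overlaps)[-2:]                 # the tunneling doublet
dphi = abs(np.angle(evals[i]*np.conj(evals[j])))
print("eigenphase splitting:", dphi, " tunneling time (periods):", np.pi/dphi)
```

Sweeping the driving strength in such a script and recording the splitting, or the corresponding tunneling time, reproduces qualitatively the kind of strong fluctuations described above.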
To observe a more direct influence of chaos on tunneling it is necessary that the classical phase space dynamics scales with energy. Investigations of the coupled quartic oscillator system by Bohigas, Tomsovic, and Ullmo provided the first detailed view of the CAT process. The particular choice of the Hamiltonian, Eq. ([btuham]), was made in order to study the classical-quantum correspondence in detail. This is due to the fact that the potential is homogeneous and hence the dynamics at a specific energy is related to the dynamics at a different energy by a simple rescaling. Consequently one can fix the energy and investigate the effect of the semiclassical limit on the tunneling process. The classical dynamics is integrable in the uncoupled limit and strongly chaotic for sufficiently large coupling. Again, note that the Hamiltonian has a discrete symmetry and thus the quantum eigenstates are expected to show up as symmetry doublets. In contrast to the driven system discussed above, and similar to the Davis-Heller system discussed earlier, the system in Eq. ([btuham]) does not have any barriers arising from the two-dimensional quartic potential. The doublets therefore must correspond to quantum states localized in the symmetry-related islands in phase space shown in Fig. [fig8]. Thus, denoting the states localized in the top and bottom regular islands by their respective labels, the doublets correspond to the usual symmetric and antisymmetric linear combinations.

[Fig. [fig8] caption: (a) Surface of section for the Hamiltonian in Eq. ([btuham]) at the parameter values studied. The large islands are part of a resonance island chain and are related to each other by symmetry. (b) Details of the upper island and (c) the configuration space plot of the symmetrically related tori.]

Bohigas, Tomsovic, and Ullmo focused on quantum states associated with the large regular islands seen in Fig. [fig8]. Specifically, they studied the variations in the splitting associated with the doublets upon changing the effective Planck constant and the coupling. Since the states reside in a regular region of the phase space it is expected, based on one-dimensional tunneling theories, that the splittings will scale exponentially with the inverse Planck constant. However, the splittings exhibit fluctuations of several orders of magnitude and no obvious indication of a predictable dependence seemed to exist. Note that a similar feature is seen in Fig. [fig7], showing the tunneling time fluctuations of the driven double well system. More precisely, it was observed that for the range of coupling corresponding to an integrable or near-integrable phase space the splitting roughly follows the exponential scaling. On the other hand, once chaos becomes pervasive the splitting exhibits severe fluctuations. Tomsovic and Ullmo argued that the fluctuations could be traced to the crossing of the quasidegenerate doublets with a third, irregular state.
By irregular it is meant that, when viewed from the phase space, the state density, represented by either the Wigner or the Husimi distribution function, is appreciable in the chaotic sea. The irregular states, in contrast to the regular states, do not come as doublets, since there is no dynamical partition of the chaotic region into mutually exclusive symmetric parts. Nevertheless, the irregular states do have fixed parities with respect to the reflection operations. Thus chaos-assisted tunneling is necessarily at least a three-state process. In other words, assuming that the even-parity irregular state couples to the even-parity member of the regular doublet with some strength, a relevant, but minimal, model Hamiltonian in the symmetrized basis is a three-level one, possibly supplemented by a direct coupling between the two regular states. If the direct coupling is dominant, one has the usual two-level scenario. However, if the regular-chaotic coupling dominates, the splitting is approximately second order in that coupling divided by the regular-chaotic energy difference. Hence, upon varying a parameter, one expects a peak in the splitting as the chaotic level crosses the doublet.

Detailed theoretical studies of the dynamical tunneling process in annular billiards by Frischat and Doron confirmed the crucial role of the classical phase space structure. Specifically, it was found that the tunneling between two symmetry-related whispering gallery modes (corresponding to clockwise and counterclockwise rotating motion in the billiard) is strongly influenced by quantum states that inhabit the regular-chaotic border in the phase space. Such states were termed "beach states" by Frischat and Doron. Soon thereafter Dembowski et al. provided experimental evidence for CAT in a microwave annular billiard simulated by means of a two-dimensional electromagnetic microwave resonator. Details of the experiment can be found in a recent paper by Hofferbert et al., wherein support for the crucial role of the beach region is provided. It is, however, significant to note that the regular-chaotic border in the case of the annular billiards is quite sharp.

An important point to note is that despite the intuitively appealing three-state model, determining the coupling between the regular and chaotic states is nontrivial. This, in turn, is related to the fact that accurate determination of the positions of the chaotic levels can be a difficult task, and determining the nature of a chaotic state for systems with mixed phase spaces is still an open problem. Such difficulties prompted Tomsovic and Ullmo to adopt a statistical approach, based on random matrix theory, to determine the tunneling splitting distribution in terms of the variance of the regular-chaotic coupling. In a later work Leyvraz and Ullmo showed that the splitting distribution is a truncated Cauchy distribution characterized by the mean splitting. On the other hand, Creagh and Whelan showed that the splitting between states localized in the chaotic regions of the phase space, but separated by an energetic barrier, is characterized by a specific tunneling orbit. Specifically, the resulting splitting distribution depends on the stability of the tunneling orbit and is not universal; in certain limits the splittings obey the Porter-Thomas distribution. Thus the fluctuations in the CAT process are different from those found in the usual double-well system. Note that in the case of CAT the distribution pertains to the splitting between states localized in the regular regions of phase space embedded in the chaotic sea. The reader is referred to the excellent discussion by Creagh for details.
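A toy Monte Carlo version of the three-state picture is easy to set up: a regular doublet at a fixed energy is coupled, with Gaussian-distributed strength, to a chaotic level swept uniformly across a window, and the resulting splittings are collected. The parameters below are arbitrary, and the model is only a caricature of the statistical treatments cited above; its point is that the splitting distribution it generates is strongly peaked with a heavy tail, the qualitative content of the truncated Cauchy form.

```python
import numpy as np

rng = np.random.default_rng(1)

def doublet_splitting(e_reg, e_chaos, v):
    """Splitting of the quasi-degenerate doublet in the minimal 3-state model.

    The odd-parity combination stays at e_reg; the even-parity combination is
    repelled by the chaotic level via the 2x2 block [[e_reg, v], [v, e_chaos]].
    """
    evals = np.linalg.eigvalsh(np.array([[e_reg, v], [v, e_chaos]]))
    e_even = evals[np.argmin(np.abs(evals - e_reg))]   # regular-dominated eigenvalue
    return abs(e_even - e_reg)

e_reg, window, sigma_v = 0.0, 1.0, 1.0e-3              # hypothetical units
samples = np.array([doublet_splitting(e_reg,
                                      rng.uniform(-window/2, window/2),
                                      rng.normal(0.0, sigma_v))
                    for _ in range(20000)])

print("mean splitting  :", samples.mean())
print("median splitting:", np.median(samples))
print("99th percentile :", np.percentile(samples, 99))  # heavy tail
```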
Nevertheless, in a recent work Mouchet and Delande observed that the splitting distribution exhibits a truncated Cauchy behaviour despite a near-integrable phase space. Similar observations have been made by Carvalho and Mijolaro in their study of the splitting distribution in the annular billiard system as well. This indicates that the Leyvraz-Ullmo distribution, Eq. ([trunccauchy]), is not sufficient to characterize CAT.

A first step towards obtaining a semiclassical estimate of the regular-chaotic coupling was taken by Podolskiy and Narimanov. Assuming a regular island separated from a chaotic sea that is structureless on the scale of the Planck constant, they were able to derive an explicit expression, Eq. ([podnari]), for the splitting in terms of the phase space area covered by the regular island, up to a system-specific proportionality factor which is independent of the Planck constant. Application of the theory by Podolskiy and Narimanov to the splittings between near-degenerate optical modes, localized on a pair of symmetric regular islands in phase space, in a nonintegrable microcavity yielded very good agreement with the exact data. In fact Podolskiy and Narimanov have recently shown that the lifetimes and emission patterns of optical modes in asymmetric microresonators are strongly affected by CAT. Encouraging results were obtained in the case of tunneling in periodically modulated optical lattices as well. However, the agreement in the splittings displayed significant deviations in the deep semiclassical regime, i.e., at large values of the inverse Planck constant. Interestingly, the critical value beyond which the disagreement occurs correlates with the existence of plateau regions, discussed in the next section (cf. Fig. [fig11]), in the plot of the splitting versus the inverse Planck constant. Such plateau regions have been noted earlier by Roncaglia et al. and in several other recent studies as well. The qualitative picture that emerged from the numerous studies is that the CAT process is the result of a competition between classically allowed (mixing or transport in the chaotic sea) and classically forbidden (tunneling or coupling between regular regions and the chaotic sea) dynamics.
Despite the significant advances, the mechanism by which a state localized in the regular island couples to the chaotic sea continued to puzzle researchers. It might therefore come as a surprise that, based on recent studies, there is growing evidence that the explanation for the regular-chaotic coupling lies in the nonlinear resonances and partial barriers in the phase space. The work by Podolskiy and Narimanov provides one possible answer, but an equally strong clue is hidden in the plateaus that occur in a typical plot of the splitting versus the inverse Planck constant. It is perhaps reasonable to expect that the theory of Podolskiy and Narimanov is correct when phase space structures like the nonlinear resonances and partial barriers, ignored in the derivation of Eq. ([podnari]), in the vicinity of the regular-chaotic border are much smaller in area than the Planck constant. However, for sufficiently small effective Planck constant the rich hierarchy of structures like nonlinear resonances, partially broken tori, cantori, and even vague tori in the regular-chaotic border region has to be taken into consideration. It is significant to note that Tomsovic and Ullmo had already recognized the importance of accounting for mechanisms which tend to limit classical transport and proposed generalized random matrix ensembles as a possible approach. There also exist a number of studies, involving fewer than three degrees of freedom, which highlight the nature of the regular-chaotic border and its importance to tunneling between KAM tori.

A different perspective on the influence of chaos on dynamical tunneling came from the detailed investigations by Ikeda and coworkers involving the study of semiclassical propagators in the complex phase space. In particular, deep insights into CAT were gained by examining the so-called "Laputa chains" which contribute dominantly to the propagator in the presence of chaos. For a detailed review the paper by Shudo and Ikeda is highly recommended. Hashimoto and Takatsuka discuss a situation wherein dynamical tunneling leads to mixing between states localized about unstable fixed points in the phase space. However, in this case the transport is energetically as well as dynamically allowed and it is not very clear whether the term 'tunneling' is an appropriate choice.
In the following subsections, recent progress towards a quantitative prediction of the average tunneling rates, and hence the coupling strengths between the regular and chaotic states, is described. The key ingredient in the success of the theory is the set of nonlinear resonances in the phase space. In fact, depending on the relative size of the effective Planck constant, one needs to take multiple nonlinear resonances into account for a correct description of dynamical tunneling. If one associates this variation with the related density of states in a molecular system, then it is tempting to claim that such a mechanism must be the correct semiclassical limit of the Stuchebrukhov-Marcus vibrational superexchange theory. The discussions in the next section indicate that such a claim is indeed reasonable.

The early investigations of dynamical tunneling made it clear that nonlinear resonances play an important role in the phenomenon. However, in the majority of the studies one had a single resonance of low order dominating the tunneling process and hence an effectively integrable classical dynamics. Very few attempts were made for systems which had multiple resonances and were thus capable of exhibiting near-integrable to mixed phase space dynamics. Moreover, studying the tunneling process with varying effective Planck constant is necessary to implicate the nonlinear resonances and other phase space structures without ambiguity. Definitive answers in this regard were provided by Brodier, Schlagheck, and Ullmo in their study of time-dependent Hamiltonians. In this section we provide a brief outline of their work and refer the reader to some of the recent reviews which provide details of the theory and applications to several other systems.

In order to highlight the salient features of RAT, consider a one degree of freedom system that evolves under a periodic time-dependent Hamiltonian. The phase space structure in this case can be easily visualized by constructing the stroboscopic Poincaré section, i.e., plotting the phase space variables at integer multiples of the period. We are interested in the generic case wherein two symmetry-related regular islands are embedded in the phase space. Typically the phase space has various nonlinear resonances that arise due to frequency commensurability between the external driving field frequency and the system frequencies. Fig. [fig9] shows an example of a near-integrable phase space wherein a few resonances are visible. For the moment, assume that a prominent $r$:$s$ nonlinear resonance, say the one visible in Fig. [fig9], manifests in the phase space. The motion in the vicinity of this resonance can be analysed using secular perturbation theory. In order to do this one introduces a time-independent Hamiltonian $H_{0}$ that approximately describes the regular motion in the islands. In particular, assume that appropriate action-angle variables $(I,\theta)$ can be introduced such that $H_{0}=H_{0}(I)$, and thus the total Hamiltonian can be expressed as $H_{0}(I)$ plus a weak, time-periodic perturbation in the center of the island. A $r$:$s$ nonlinear resonance occurs when
$$ r\,\frac{\partial H_{0}(I)}{\partial I}\bigg|_{I=I_{r:s}} = s\,\omega , $$
with $\omega$ the driving frequency; the resonant action is denoted by $I_{r:s}$.
Using techniques from secular perturbation theory one can show that the dynamics in the vicinity of the nonlinear resonance is described by a new, pendulum-like Hamiltonian in which the deviation of the action from $I_{r:s}$ plays the role of a momentum; the effective "mass" parameter appearing in it is related to the anharmonicity of the system. The new angle varies slowly in time near the resonance and hence, according to adiabatic perturbation theory, one can replace the perturbation by its time average over $r$ periods of the driving field. Furthermore, writing the perturbation in terms of a Fourier series and neglecting the action dependence of the Fourier coefficients, i.e., evaluating them at $I_{r:s}$, the effective integrable Hamiltonian is obtained. This effective Hamiltonian couples zeroth-order states whose quantum numbers differ by multiples of $r$, the quantum numbers referring to the zeroth-order KAM tori inside the large regular island. Thus the eigenstates of the effective Hamiltonian are admixtures of unperturbed states obeying the "selection rule" $\Delta n = r$ and are perturbatively given in terms of the coupling matrix elements and the zeroth-order energy differences. Associating the quantized actions with states localized in the regular islands, the relevant energy difference becomes small if the two states straddle the $r$:$s$ resonance symmetrically, and it is precisely then that they are coupled strongly. Consequently, the nonlinear resonance provides an efficient route to couple the ground state of the island to a highly excited state.

Once again, a crucial issue has to do with the dominant pathways that couple the two states. Clearly they can couple in a one-step process via a single high Fourier component of the perturbation or in a multi-step process via successive lowest-order couplings. The analysis by Brodier et al., recognizing the rapid decrease of the Fourier terms with increasing order, reveals that the multi-step process is dominant in the semiclassical limit. Note that within the effective Hamiltonian one assumes the dominance of a single nonlinear resonance and hence the focus is on the $r$:$s$ resonance and its higher harmonics; neglecting the higher Fourier components one obtains the standard pendulum Hamiltonian.

For periodically driven systems the Floquet states can be obtained by diagonalizing the one-period time evolution operator. The near-degenerate Floquet states are characterized by the splitting between the quasienergies, or eigenphases, of the symmetric and antisymmetric states. Within the perturbation scheme, the ground-state splitting is then obtained as a sum over the excited island states, each unperturbed splitting being weighted by the square of the perturbative admixture coefficient that measures the contribution of that excited state to the perturbed ground state. The unperturbed splittings result from the integrable limit Hamiltonian, for which extensive semiclassical insights exist. Note that this expression for the splitting is essentially the same as the one obtained by Stuchebrukhov and Marcus within their vibrational superexchange approach; here one has the generalization to driven systems whose phase space exhibits a general $r$:$s$ nonlinear resonance.

[Figure caption: Eigenphase splittings calculated for an excited state of the regular island as a function of the inverse Planck constant; this is equivalent to fixing the state localized on a given torus and varying the Planck constant. The exact quantum splittings (circles) are well reproduced by the semiclassical prediction (thick line) based on three nonlinear resonances. The dashed line is the integrable estimate obtained using the Hamiltonian in Eq. ([kharpinteg]) and is clearly inaccurate. Note that multiple precision arithmetic was used to compute eigenphase splittings below the ordinary machine precision limit. Figure courtesy of P. Schlagheck.]
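The structure of the perturbative result just described - unperturbed splittings of the excited island states weighted by the squares of resonance-induced admixture coefficients, each coefficient being a product of coupling-over-detuning factors - can be illustrated in a few lines. Every number below is invented (the island spectrum, the growth of the unperturbed splittings, and the resonance coupling are schematic choices, not values for the kicked Harper or any molecule); the sketch merely shows how a single $r$:$s$ resonance can enhance the ground-state splitting by many orders of magnitude.

```python
import numpy as np

r = 4                          # selection rule: only states separated by r quanta couple
V = 0.05                       # r:s coupling strength (hypothetical units)
n_max = 20

n = np.arange(n_max + 1)
E = n - 0.0498*n**2            # schematic island spectrum; E[20] nearly degenerate with E[0]
split0 = 1.0e-30*np.exp(3.0*n) # schematic unperturbed splittings, growing with excitation

def admixture(k):
    """A_k = prod_{l=1..k} V/(E_0 - E_{l r}): admixture of |k r> into the ground state."""
    A = 1.0
    for l in range(1, k + 1):
        A *= V/(E[0] - E[l*r])
    return A

split_rat = split0[0] + sum(admixture(k)**2*split0[k*r] for k in range(1, n_max//r + 1))

print("unperturbed ground-state splitting :", split0[0])
print("resonance-assisted estimate        :", split_rat)
```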
Brodier et al. illustrated the RAT mechanism beautifully by studying the kicked Harper model, whose dynamics is equivalent to a symplectic map that propagates the phase space variables from one kick to the next (a minimal script for generating the corresponding stroboscopic section is sketched at the end of this passage). For small kick strength the phase space, shown in Fig. [fig9], is near-integrable and exhibits a large regular island. The periodicity of the system in both position and momentum implies the existence of a symmetry-related island, and the Floquet states thus come in nearly degenerate pairs with eigenphase splittings associated with the excited states in the regular island. Since the system is near-integrable in this regime, it is possible to use classical perturbation theory to construct a time-independent Hamiltonian, Eq. ([kharpinteg]), which is a very good integrable approximation to the kicked Harper map. Indeed, the phase space of the exact time-dependent system and that of the integrable approximation are nearly identical. The crucial difference, however, is that the integrable approximation does not have the various nonlinear resonances which are visible in Fig. [fig9]. The consequences of neglecting the nonlinear resonances are catastrophic as far as the splittings are concerned, and from Fig. [fig9] it is clear that any estimate based on the integrable approximation is doomed to fail.

Such pendulum-like Hamiltonians have been used in the very early studies of dynamical tunneling and were also central to the ideas and conjectures formulated by Heller in the context of IVR and spectral signatures. However, the importance of the contribution by Brodier, Schlagheck, and Ullmo lies in the fact that a rigorous semiclassical basis for the RAT mechanism was established. Although the detailed study was in the context of a one-dimensional driven system, the insights into the mechanism are expected to be valid in a larger class of systems. These include systems with two degrees of freedom and perhaps, to some extent, systems with three or more degrees of freedom as well (see the discussions in the introduction and the last section). For instance, it can be shown that the RAT mechanism will always dominate the regular tunneling process in the semiclassical limit. Furthermore, criteria for assessing the importance of the higher harmonics of a given nonlinear resonance and for identifying and including multiple resonances in the path between the two states have been established. In this regard an important point made by Brodier et al. is that the condition for including a $r$:$s$ resonance in the coupling path is not directly related to the size of the resonance islands - a crucial point that emerges only upon investigating the analytic continuation of the classical dynamics to complex phase space. Moreover, provided the observations are general enough, the arguments could possibly shed some light on the issue of the competition between low and high order anharmonic resonances for IVR in large molecules.

Up until now the two processes of CAT and RAT have been discussed separately. Numerous model studies showed that the nonlinear resonances are important for describing the dynamical tunneling process in the near-integrable regimes.
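Readers who wish to look at a phase space of this type can iterate a kicked Harper-type map directly; the short script below does so and plots the stroboscopic section. The map convention and the kick strength used here are one common choice made for illustration and are not necessarily the convention or parameter values of Brodier, Schlagheck, and Ullmo; for small kick strength the section is near-integrable and shows island chains of the kind referred to above.

```python
import numpy as np
import matplotlib.pyplot as plt

tau = 0.5              # kick strength (hypothetical; small -> near-integrable)
n_kicks = 400
twopi = 2.0*np.pi

# grid of initial conditions on the torus [0, 2*pi) x [0, 2*pi)
q0, p0 = np.meshgrid(np.linspace(0, twopi, 20, endpoint=False),
                     np.linspace(0, twopi, 20, endpoint=False))
q, p = q0.ravel().copy(), p0.ravel().copy()

qs, ps = [], []
for _ in range(n_kicks):
    p = (p + tau*np.sin(q)) % twopi     # kick from the cos(q) term
    q = (q - tau*np.sin(p)) % twopi     # free evolution under the cos(p) term
    qs.append(q.copy()); ps.append(p.copy())

plt.figure(figsize=(5, 5))
plt.plot(np.concatenate(qs), np.concatenate(ps), ',k')
plt.xlabel('q'); plt.ylabel('p'); plt.title('kicked Harper map, tau = %.2f' % tau)
plt.show()
```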
At the same time, specific statistical signatures could be associated with the influence of chaos on dynamical tunneling. One sticky issue still remained - the mechanism of the coupling between a regular state and the chaotic sea. As noted earlier, the assumption of a structureless chaos-regular border, and hence its unimportance for the CAT process, typically gets violated for sufficiently small effective Planck constant. Many authors suspected that the nonlinear resonances could play an important role in the mixed phase space scenario as well. However, the precise role of the nonlinear resonances in determining the regular-chaotic coupling was not clear. An important first step in this direction was recently taken by Eltschka and Schlagheck. By extending the resonance-assisted mechanism, Eltschka and Schlagheck argue that the average splittings, i.e., ignoring the fluctuations associated with multistate avoided crossings, can be determined without any adjustable parameters. In order to illustrate the idea, note that in the mixed regular-chaotic regime the invariant tori corresponding to the regular region are embedded in the chaotic sea; a typical case is shown in Fig. [fig11] for the kicked Harper map. Therefore only a finite number of invariant tori are supported by the regular island, with an outermost torus of definite action. Assuming the existence of a dominant $r$:$s$ nonlinear resonance implies that the ground state in the regular region is coupled efficiently to an excited unperturbed state. If that state is located beyond the outermost invariant torus of the island, then one expects a significant coupling to the unperturbed states located in the chaotic sea. Within this scheme the effective Hamiltonian can be thought of as having the form shown in Fig. [fig10]:
$$
H_{\mathrm{eff}} =
\left(
\begin{array}{ccccc}
\fbox{\textsf{chaos}} & & & & \\
 & E_{(k-1)r} & \ddots & & \\
 & \ddots & \ddots & V_{r:s} & \\
 & & V_{r:s} & E_{r} & V_{r:s} \\
 & & & V_{r:s} & E_{0}
\end{array}
\right).
$$
The effective coupling between the ground state and the chaos, obtained by eliminating the intermediate states within the regular island, is given by Eq. ([cateffcoup]) and follows from the near-integrable resonance-assisted theory. The chaotic part of the effective Hamiltonian is modeled by a random Hermitian matrix from the Gaussian orthogonal ensemble. In particular, assuming the validity of the truncated Cauchy distribution for the splittings, the mean eigenphase splitting for periodically driven systems assumes the simple form of Eq. ([msplit]), with a universal prediction for the logarithmic variance of the splittings,
$$
\left\langle \left[\ln \Delta\varphi - \left\langle \ln \Delta\varphi \right\rangle \right]^{2} \right\rangle = \frac{\pi^{2}}{4}.
$$
In other words, the actual splittings might be enhanced or suppressed as compared to the mean by a factor of order $e^{\pi/2}$, independent of the value of the effective Planck constant and of other external parameters. In Fig. [fig11] the expression for the mean splitting in Eq. ([msplit]) is compared to the exact quantum results for the kicked Harper. One immediately observes that the agreement is relatively good and that the approximate plateau-like structures are also reproduced. The plateaus are related to the number of perturbative steps in Eq. ([cateffcoup]) that are required to connect the ground state with the chaotic domain.
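The chain elimination behind Eq. ([cateffcoup]) is simple to spell out numerically. The sketch below, with invented energies and coupling, folds the intermediate ladder states into an effective matrix element between the ground state and the first state lying outside the island, and shows how each additional required step suppresses the coupling by roughly another factor of coupling-over-detuning - the origin of the plateau structure discussed in the text.

```python
import numpy as np

V = 5.0e-3                               # r:s coupling (hypothetical units)
E = np.array([0.0, 0.7, 1.2, 1.5, 1.6])  # E_0, E_r, E_2r, ... inside the island

def v_eff(n_steps):
    """Effective ground-state <-> chaos coupling after eliminating
    the first (n_steps - 1) intermediate ladder states perturbatively."""
    out = V
    for l in range(1, n_steps):
        out *= V/(E[0] - E[l])
    return out

for k in (2, 3, 4):
    print("%d-step chain: V_eff = %.3e" % (k, abs(v_eff(k))))
```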
In order to understand the plateaus, note that moving to the right in Fig. [fig11] corresponds to decreasing the Planck constant while the state of interest is held fixed. As the Planck constant decreases, there comes a point when the relevant excited state localizes on, rather than beyond, the outermost invariant torus of the regular island. The coupling to the chaotic domain via Eq. ([cateffcoup]) then requires an additional perturbative step, and the splitting drops accordingly.

[Fig. [fig11] caption: The phase space of the kicked Harper map is shown in the upper panel; a prominent nonlinear resonance is visible. The lower panel shows the quantum eigenphase splittings calculated for the ground state within the regular island as a function of the inverse Planck constant. The thick solid line (red) is the semiclassical prediction for the average splittings based on that resonance, with the logarithmic standard deviations shown as dashed lines (red). The long-dashed curve (green) is the Podolskiy-Narimanov prediction for the same system. Figure courtesy of P. Schlagheck.]

The mechanism for CAT just outlined has been confirmed for a variety of systems. Eltschka and Schlagheck confirmed that the plateau structures appear for the kicked rotor case as well. It has very recently been shown that the decay of nondispersive electronic wavepackets in driven, ionizing Rydberg systems is also controlled by the RAT mechanism. A fairly detailed analysis of an effective Hamiltonian that can be realized in experiments on cold atoms likewise provides clear evidence for the important role played by the various nonlinear resonances. An extensive study, by Sheinman et al., of the lifetimes associated with the decay of the quantum accelerator modes revealed regimes wherein the RAT mechanism is crucial. In this context it is interesting to ask whether the mechanism would also explain the CAT process in the Bohigas-Tomsovic-Ullmo system described by the Hamiltonian in Eq. ([btuham]). It is evident from the inset in Fig. [fig8]b that nonlinear resonances do show up around the chaos-regular border, but a systematic study of the original system from the recent viewpoints is yet to be done. Similar questions are being addressed in the Lin-Ballentine system (cf. Eq. ([lbham])), i.e., to what extent is the decay of a coherent state localized in the center of the regular island controlled by the RAT mechanism. Naturally, in cases wherein the regular islands are "resonance-free" the RAT mechanism is absent. A recent example comes from the work of Feist et al.
on their studies of the conductance through nanowires with surface disorder, which is controlled by dynamical tunneling in a mixed phase space. Thus, based on the numerous systems studied, there are clear indications that the nonlinear resonances are indeed important for understanding the CAT mechanism. Nevertheless, certain observations reveal that one does not yet have a full understanding. For instance, it is clear from Fig. [fig11] that the semiclassical theory underestimates the exact quantum splittings in the regime of large inverse Planck constant. This could be due to the fact that the key nonlinear resonance, on which the estimate is based, is not clearly resolved. It could also be due to the fact that within the pendulum approximation the action dependence of the Fourier coefficients is neglected. Alternatively, it is possible that a different mechanism is operative deep in the semiclassical regime, and in particular the Podolskiy-Narimanov theory might be relevant in this context. However, as evident from the results reported in the recent work by Mouchet et al., the Podolskiy-Narimanov estimate tends to overestimate the splittings in certain systems. In this context, note that Sheinman has recently derived, using assumptions identical to those of Podolskiy and Narimanov, an expression which differs from the result in Eq. ([podnari]). Moreover, the assumption of neglecting the action dependence of the Fourier coefficients in Eq. ([tavcoup]) needs to be carefully studied, especially in cases wherein the nonlinear resonances are rather large. A more crucial issue pertains to the role played by partial barriers, like cantori, at the regular-chaotic border. At present the RAT-based mechanism still assumes the border to be devoid of such partial barriers and thus models the chaotic part by a random Hermitian matrix. As briefly mentioned in the introduction, the partial barriers do play an important role in molecular systems as well. In this context the studies by Brown and Wyatt on the driven Morse oscillator are highly relevant. The driven Morse system, a model for the dissociation dynamics of a diatomic molecule, clearly points to the crucial role of the cantori in the quantum dynamics. Interestingly, the dissociation dynamics of the driven Morse system can be analysed from a dynamical tunneling perspective to understand the interplay of nonlinear resonances, chaos, and cantori in the underlying phase space; investigations along these lines are in progress. Currently there is no theory which combines the RAT viewpoint with the effect of the hierarchical structures located in the border regions of the phase space.

Notwithstanding the issues raised in the previous sections, the key point in the context of this review has to do with the relevance of the RAT and CAT phenomena to IVR. Heller, in his stimulating review, raised several questions regarding the manifestation of dynamical tunneling in molecular spectra. It is thus interesting to ask to what extent the recent progress has contributed towards our understanding of quantum mechanisms for IVR. This section therefore summarizes the recent progress from the molecular standpoint. The Hamiltonians used in this section are effective spectroscopic Hamiltonians of the form given in Eq. ([specham]). The origin and utility of the effective Hamiltonians have already been discussed in the introduction. Three reasons for this choice are worth stating at this juncture. Firstly, the classical limit of the spectroscopic Hamiltonians is very easily constructed.
Secondly, the various nonlinear resonances are specified and hence it is possible to perform a careful analysis of the role of the resonances and chaos in dynamical tunneling. A third reason has to do with the fact that effective Hamiltonians of similar form arise in different contexts like electron transfer through molecular bridges, coupled Bose-Einstein condensates, and discrete quantum breathers. Interestingly, the notion of local modes has close connections with the existence of discrete breathers or intrinsic localized modes, which have been, and continue to be, studied by a number of researchers. Whether the intrinsic localized modes can exist quantum mechanically is nothing but the issue of dynamical tunneling in such discrete systems. Therefore one anticipates that the techniques of the previous sections should be useful in studying the breather-breather interactions as well. Indeed, a recent work by Dorignac et al. invokes the appropriate off-resonant states to couple the degenerate states.

To illustrate the RAT and CAT phenomena, recent studies focus on an effective local mode Hamiltonian describing the highly excited vibrational states of an ABA triatomic, with the two local mode stretches and the bending mode treated explicitly. As stated previously, such systems were among the first to highlight the manifestation of dynamical tunneling in molecular spectra. Here, however, the bending mode is included and thus allows for several anharmonic resonances and a mixed phase space. The zeroth-order anharmonic Hamiltonian is written in terms of the occupation numbers of the modes, i.e., in terms of the harmonic creation and destruction operators, and the zeroth-order states are coupled by anharmonic resonances. Due to the structure of the Hamiltonian the polyad number is exactly conserved and thus the system has effectively two degrees of freedom.

[Table [table1]: local mode splittings in cm$^{-1}$.]

The Hamiltonian has been proposed by Baggott to fit the vibrational states of H$_2$O and D$_2$O for specific choices of the parameters. A comparison of the local mode splittings predicted using the Baggott Hamiltonian and the recent experimental data of Tennyson et al. is shown in Table [table1]. The agreement is quite good, considering the fact that the polyad perhaps breaks down at higher levels of excitation. Another reason for the choice of the system had to do with the fact that the classical-quantum correspondence for the Hamiltonian in Eq. ([bagham]) was already well established from several earlier works. Additional motivation has to do with the possibility of studying the RAT and the CAT mechanisms in a controlled fashion by analysing a hierarchy of sub-Hamiltonians. The sub-Hamiltonian retaining a single resonant coupling exhibits near-integrable dynamics for smaller polyads; the same is true when a second, weak resonance is added, and hence it is possible to study the influence of multiple nonlinear resonances.
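To see how such effective Hamiltonians are assembled in practice, and how the local mode splittings of Table [table1] arise as dynamical tunneling splittings, the following sketch diagonalizes only the two-stretch block with a single 1:1 coupling inside one polyad. The parameter values are hypothetical placeholders, not Baggott's fitted constants, so the numbers are purely illustrative; the qualitative point is the rapid decrease of the splitting of the |P,0>/|0,P> local-mode pair with increasing polyad.

```python
import numpy as np

# hypothetical local-mode parameters (cm^-1); NOT the fitted water values
omega_s, x_s, lam = 3800.0, -80.0, -40.0

def stretch_block(P):
    """H0 + 1:1 coupling in the basis {|n1, n2>, n1 + n2 = P}."""
    states = [(n1, P - n1) for n1 in range(P + 1)]
    H = np.zeros((P + 1, P + 1))
    for i, (n1, n2) in enumerate(states):
        H[i, i] = (omega_s*(n1 + n2 + 1.0)
                   + x_s*((n1 + 0.5)**2 + (n2 + 0.5)**2))
        if n1 > 0:   # <n1-1, n2+1| a2^dag a1 |n1, n2> = sqrt(n1 (n2 + 1))
            H[i - 1, i] = H[i, i - 1] = lam*np.sqrt(n1*(n2 + 1.0))
    return H, states

for P in (2, 4, 6, 8):
    H, states = stretch_block(P)
    evals, evecs = np.linalg.eigh(H)
    # the local-mode pair: the two eigenstates with the largest weight on |0,P> and |P,0>
    w_edge = evecs[0, :]**2 + evecs[-1, :]**2
    i, j = np.argsort(w_edge)[-2:]
    print("P = %d  local-mode doublet splitting = %.4f cm^-1"
          % (P, abs(evals[i] - evals[j])))
```

The bend mode and the remaining resonant couplings of the full Hamiltonian can be added in exactly the same way by enlarging the basis to all states with a fixed polyad number.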
On the other hand, the full Hamiltonian has a mixed regular-chaotic phase space, and thus on going from the sub-Hamiltonians to the full system a systematic study of the influence of phase space structures on dynamical tunneling is possible. The classical limit can be obtained using the standard correspondence between the ladder operators and classical action-angle variables and yields a nonlinear multiresonant Hamiltonian in terms of the action-angle variables of the zeroth-order Hamiltonian. For instance, the resonant perturbation in Eq. ([respert]) has an obvious classical counterpart.

[Figure caption: Local mode splittings in cm$^{-1}$ with increasing excitation quantum number for a fixed polyad. The exact quantum results (lines) are compared to the vibrational superexchange estimates (symbols) at minimal order. The agreement is excellent except for one case for the full Hamiltonian (solid line). The inset shows the contribution from different coupling chains in case (a). Panel (b) shows the estimate based on a single resonance alone (dashed) and on two resonances (dot-dashed) compared with the full case (solid line); note that these restricted Hamiltonians are classically integrable. The three lower panels show the phase space at increasing energies for Eq. ([bagham]); a significant amount of chaos and the stretch-bend nonlinear resonances set in at the highest energies.]

The zeroth-order states of interest, including $|b\rangle$, mix over long time scales, whereas one of the states shows trapping both classically and quantum mechanically. An explanation for the observed mixings in Fig. [fig17] can be given in terms of the vibrational superexchange mechanism. Indeed, the IVR 'fractionation' pattern, shown in Fig. [fig19], indicates a clump of virtual or off-resonant states lying some distance away from the corresponding 'bright' states. Such fractionation patterns are similar to those observed in the experiments of Callegari et al., for example. Being far off-resonant, these mediating states do not themselves mix significantly. However, it is important to note that the fractionation pattern looks the same for every state, and hence there must be a subtle phase cancellation effect to explain the trapping exhibited by the exceptional state. At the same time, Fig. [fig19] also shows that the vastly different IVR from the four states cannot be obviously related to any avoided crossing or multistate interactions.

[Fig. [fig19] caption: Eigenvalue variations with the coupling parameter, with the other couplings fixed as in Fig. [fig11]. Eigenstates having the largest contribution from the specific zeroth-order states are indicated. The bottom panel shows the 'fractionation' pattern, i.e., overlap intensities versus energy on a log scale. The cluster of off-resonant states mediates the mixings seen in Fig. [fig17] and Fig. [fig18] via a vibrational superexchange mechanism. Adapted from the original reference.]

In light of the conjecture of Davis and Heller and the recent progress in our understanding of the mechanistic aspects of dynamical tunneling, it is natural to associate the near-degenerate state mixings with RAT. However, how does one identify the specific resonances at work in this case? Given that the system has three degrees of freedom it is not possible to visualize the Poincaré surface of section. In the present case the crucial object to analyse is the Arnold web, i.e., the network of nonlinear resonances and the location of the zeroth-order states therein. Fortunately, the classical limit Hamiltonian corresponding to Eq. ([3dham]) is easily obtained, as explained earlier, in terms of the action-angle variables of the zeroth-order Hamiltonian.
In the above equation an auxiliary variable has been introduced for convenience during a perturbation analysis of the Hamiltonian. The 'static' Arnold web at fixed energy and fixed polyad, the classical analog of the quantum polyad, can then be constructed via the intersection of the various resonance planes with the energy shell. The static web involves all the nonlinear resonances restricted to, say, some maximum order. The reason for calling such a construction static has to do with the fact that, being based on the zeroth-order Hamiltonian alone, it is possible that many of the resonances have no dynamical consequence. Thus, although the static web provides useful information on the location of the various nonlinear resonances on the energy shell, it is nevertheless critical to determine the 'dynamical' Arnold web, since one needs to know what part of the static web is actually relevant to the dynamics. Further discussion on this point can be found in the paper by Laskar, wherein time-frequency analysis is used to study transport in the four-dimensional standard map.

In order to construct the dynamical Arnold web it is necessary to be able to determine the system frequencies as a function of time. Several techniques have been suggested in the literature for performing time-frequency analysis of dynamical systems. An early example comes from the work of Milczewski, Diercksen, and Uzer, wherein the Arnold web for the hydrogen atom in crossed electric and magnetic fields has been computed. A critical review of the various techniques is clearly outside the scope of this work and hence, without going into a detailed discussion of the advantages and disadvantages, the wavelet-based approach developed by Wiggins and coworkers is utilized for constructing the web. There is, however, ample evidence that the wavelet-based local frequency analysis is ideally suited for this purpose, and therefore a brief description of the method follows. Classical trajectories with initial conditions satisfying the energy and polyad constraints are generated, and the nonlinear frequencies are computed by performing the continuous wavelet transform, Eq. ([wavelet]), of the time series of the mode variables; the transform is parametrized by a scale and a real time shift. Various choices can be made for the mother wavelet, and in this work it is chosen to be the Morlet-Grossman function with fixed values of its frequency and width parameters. Note that Eq. ([wavelet]) yields the frequency content of the signal within a time window around the chosen time shift. In many instances one is interested in the dominant frequency, and hence the required local frequency is extracted by determining the scale (inversely proportional to frequency) which maximizes the modulus of the wavelet transform. The trajectories are followed in the frequency ratio space, since the energy shell, the resonance zones, and the location of the zeroth-order states can all be projected onto the space of two independent frequency ratios. Such 'tune' spaces have been constructed and analysed before, for instance by Martens, Davis, and Ezra in their seminal work on IVR in the OCS molecule.
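A minimal version of the wavelet step described above is sketched below. The signals are synthetic stand-ins with slowly drifting frequencies rather than actual trajectories of the Hamiltonian, and the Morlet-Grossman parameter values are arbitrary choices; the script extracts, at each time, the scale maximizing the modulus of the transform, converts it to a local frequency, and forms the frequency ratio used to build the density plots.

```python
import numpy as np

dt, npts = 0.05, 2000
t = np.arange(npts)*dt

# synthetic "trajectory" signals with slowly drifting instantaneous frequencies
nu1 = 1.00 + 0.10*np.sin(2*np.pi*t/(npts*dt))
nu2 = 0.62 + 0.05*np.cos(2*np.pi*t/(npts*dt))
z1 = np.exp(2j*np.pi*np.cumsum(nu1)*dt)
z2 = np.exp(2j*np.pi*np.cumsum(nu2)*dt)

# Morlet-Grossman mother wavelet; lam and sigma are arbitrary choices here
lam, sigma = 1.0, 2.0
def psi(x):
    return np.exp(2j*np.pi*lam*x)*np.exp(-x**2/(2.0*sigma**2))

scales = np.linspace(0.4, 2.5, 80)
half = int(5*sigma*scales[-1]/dt)
tt = np.arange(-half, half + 1)*dt

def local_frequency(z):
    """Dominant local frequency nu(b) = lam/a_max(b) from the wavelet modulus."""
    mods = np.empty((len(scales), npts))
    for i, a in enumerate(scales):
        kernel = psi(tt/a)/np.sqrt(a)               # convolving with psi(tt/a) realizes
        W = np.convolve(z, kernel, mode='same')*dt  # the correlation with conj(psi((t-b)/a))
        mods[i] = np.abs(W)
    return lam/scales[np.argmax(mods, axis=0)]

f1, f2 = local_frequency(z1), local_frequency(z2)
ratio = f1/f2      # one point per time step in the frequency-ratio ("tune") space
print("mean recovered frequencies:", f1.mean(), f2.mean())
```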
A density plot is then created by recording the total number of visits by the trajectories to a given region of the ratio space. Quite naturally, the density plot highlights the dynamically significant portions of the Arnold web at a given energy and polyad. The computation of such a dynamical web is shown in Fig. [fig20] at an energy corresponding to the zeroth-order states of interest. One immediately observes that, apart from highlighting the primary resonances, Fig. [fig20] shows the existence of higher-order induced resonances. More importantly, the zeroth-order states are located far away from the primary resonances and thus the state mixings cannot be ascribed to the direct couplings in the Hamiltonian. The states are, however, located in proximity to the junction formed by three weak induced resonances. The nature of these induced resonances makes it very clear that the state mixings are due to RAT.

[Fig. [fig20] caption: Dynamical Arnold web generated by propagating 25000 classical trajectories. The primary resonances (red, black, and magenta), as predicted by the resonant couplings in the Hamiltonian, are superimposed for comparison. The zeroth-order states of interest, however, are located far away from the primary resonances and close to the junction of three weak induced resonances indicated by arrows. The induced resonances lead to the observed state mixings in Fig. [fig17] and Fig. [fig19] via the mechanism of RAT. The data for this figure were generated by A. Semparithi. Reproduced from the original reference.]

How can one be sure that it is indeed the RAT mechanism that is at work here? The simplest answer is to try to interfere with the RAT mechanism by removing or altering the relevant resonances seen in Fig. [fig20]. This was precisely what was done in the recent work, and classical perturbation theory is again used to remove the primary resonances in Eq. ([3dhamc]) to the appropriate order. Since the methodology is well known, only a brief description of the perturbative analysis follows. The first step involves reduction of the Hamiltonian in Eq. ([3dhamc]), noting the exactly conserved polyad, to an effective three degrees of freedom system. This is achieved by performing a canonical transformation via a generating function; the new variables are obtained in terms of the old variables by using the standard generating function properties, and the reduced Hamiltonian then involves resonant angles related to the original angle variables in the usual way. The dynamical web in Fig. [fig20] clearly shows that the zeroth-order states of interest are far away from the primary resonances. Thus the three primary resonances are removed to the desired order by making use of a second generating function whose unknown functions are chosen such that the reduced Hamiltonian does not contain the primary resonances to that order. Using the standard generating function relations, writing the reduced Hamiltonian in terms of the new variables, and using identities involving the Bessel functions, one determines the appropriate choice for the unknown functions. Consequently, the effective Hamiltonian containing only the induced resonances is obtained at this order.
Following the procedure for RAT, the induced resonances are approximated by pendulums. For example, for one of the induced resonances one obtains a pendulum Hamiltonian with an effective mass, a resonance center, and an effective coupling that can be expressed in terms of the conserved quantities; the relevant combination of actions plays the role of the appropriate polyad for that induced resonance. Thus it is possible to explicitly identify (cf. Eq. ([3dbar])) the barrier for dynamical tunneling in this three degrees of freedom case. Observe that the barrier is parametrized by an exact constant of the motion and two approximate constants of the motion. The effective couplings can be translated back to effective quantum strengths quite easily. The perturbative analysis, for parameters relevant to Fig. [fig18], shows that the induced resonances are more than an order of magnitude smaller in strength than the primary resonances. Based on the RAT theory it is also possible to explain the drastically different behaviour of one of the states when compared to the other three, as seen in Fig. [fig19]: that state is not symmetrically located, referring to the arguments following Eq. ([symloc]), with respect to the relevant induced resonance and thus, combined with the smallness of the induced coupling, does not show significant mixing.

Given that the key resonances and their strengths are known (cf. Eq. ([resstr])), is it possible to interfere with the mixing between the states? In other words, one is exploring the possibility of controlling dynamical tunneling, a quantum phenomenon, by modifying the local phase space structures. Consider modifying the quantum Hamiltonian (based on classical information!) of Eq. ([3dham]) by adding terms that counter the induced resonances, as in Eq. ([3dhammod]). Due to the weakness of the induced couplings, quantities like the mean level spacings, the eigenvalue variations shown in Fig. [fig19], and the spectral intensities show very little change as compared to the original system. However, Fig. [fig21] shows that the survival probabilities of all the states now exhibit long-time trapping. The dilution factors shown in Fig. [fig21] also indicate almost no mixing between the near-degenerate states. Thus dynamical tunneling has been essentially shut down for these states. The fractionation pattern for the states of the modified Hamiltonian is quite similar to the one seen in Fig. [fig19]; one again observes a clump of off-resonant states around the same region. Surely the vibrational superexchange calculation would now predict the absence of mixing due to subtle cancellations, but it is clear that a more transparent explanation comes from the RAT mechanism. More significantly, Fig. [fig21] shows that other nearby zeroth-order states are unaffected by the counter-resonant terms. Thus this is an example of local control of dynamical tunneling or, equivalently, of IVR.

[Fig. [fig21] caption: Survival probabilities computed with the modified Hamiltonian, Eq. ([3dhammod]). Note that the counter-resonant terms have essentially shut off the dynamical tunneling, confirming the RAT mechanism. The bottom left panel shows the change in the dilution factors: solid symbols refer to those calculated from the original Hamiltonian in Eq. ([3dham]) (cf. Fig. [fig17]c) and the open squares are those calculated using Eq. ([3dhammod]). Only the states influenced by the induced resonances in Fig. [fig20] have their dilution factors altered, whereas nearby states show very little change. The four panels on the right show the overlap intensities for the modified Hamiltonian; notice the clump of states in the same region as in Fig. [fig19].]
[Fig. [fig22] caption: (a) Inverse participation ratios of the various eigenstates in the Baggott eigenbasis, shown versus the eigenvalues in cm$^{-1}$, for a weak additional coupling. (b) Dilution factors of the zeroth-order states for a Hamiltonian of the form of Eq. ([3dham]) but with a different zeroth-order part; the polyad is fixed and the three resonant couplings, as well as the zeroth-order energies, are given in units of the mean level spacing.]

The model Hamiltonian results in this section correspond to the near-integrable limit, and thus RAT accounts for the state mixings. It would be interesting to study the mixed phase space limit for three degrees of freedom systems, where classical transport mechanisms can compete as well. This, however, requires a careful study in which the effective Planck constant of the system is varied in order to distinguish between the classical and quantum mechanisms. Note that such zeroth-order state mixing due to weak couplings occurs in other model systems as well. In particular, the present analysis and insight suggest that the mixing of edge and interior zeroth-order states in cyanoacetylene must be due to the RAT mechanism. Two more examples, in Fig. [fig22], highlight the mixing between near-degenerate states due to very weak couplings. The case in Fig. [fig22]a corresponds to the mixings induced by a weak resonance in the model Hamiltonian of Eq. ([3dh2o]). The inverse participation ratios of the eigenstates are calculated in the Baggott eigenbasis spanning several polyads centered about the polyad of interest. By construction, the participation ratio of every eigenstate would be close to unity if the small coupling did not induce any interpolyad mixing. However, it is clear from Fig. [fig22]a that several eigenstates do get mixed. The other example, shown in Fig. [fig22]b, pertains to a case where the Hamiltonian is of the same form as in Eq. ([3dham]) but with the parameters of the zeroth-order part corresponding to the CF molecule. The dilution factors of the various zeroth-order states in the polyad, with extremely small couplings, exhibit mixing. In both examples it would be interesting to ascertain the percentage of states that are mixed due to CAT/RAT.

[Fig. [fig23] caption: Sketch of the action (state) space. The energy surface is intersected by several resonances, which form the web and control the IVR out of a state located on the energy surface. The strength of the resonances is indicated by the thickness of the lines, and shaded regions highlight some of the resonance junctions. Mixing, i.e., energy flow, between the state on the top (red circle) and a far away state (blue square) is due to RAT, CAT (a possible tunneling sequence is shown by red arrows), and classical across- and along-resonance (thick arrows) transport. The competition between the tunneling and the classical transport mechanisms depends on the effective Planck constant of the system. Barriers to classical transport do exist and are expected to play an important role, but they have not been indicated in this simple sketch.]

The central message of this review is that both classical and quantum routes to IVR are controlled by the network of nonlinear resonances.
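Since dilution factors, inverse participation ratios, and survival probabilities appear repeatedly above and in the figure captions, a compact reference implementation of the definitions may be useful. The Hamiltonian below is a random Hermitian stand-in rather than any of the spectroscopic Hamiltonians discussed; the conventions used (dilution factor as the sum of squared spectral intensities, equal to the long-time average of the survival probability; IPR as the sum of fourth powers of the overlaps) are common ones, though conventions do vary between authors.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
A = rng.normal(size=(N, N))
H = (A + A.T)/2.0                      # stand-in Hamiltonian in a zeroth-order basis

E, C = np.linalg.eigh(H)               # C[b, alpha] = <b | alpha>
b = 0                                  # pick one zeroth-order ("bright") state
p = np.abs(C[b, :])**2                 # spectral intensities p_alpha = |<alpha|b>|^2

dilution = np.sum(p**2)                # sigma_d = sum_alpha p_alpha^2
ipr_of_eigenstates = np.sum(np.abs(C)**4, axis=0)   # in the zeroth-order basis

t = np.linspace(0.0, 50.0, 500)
amp = (p[None, :]*np.exp(-1j*np.outer(t, E))).sum(axis=1)
survival = np.abs(amp)**2              # P_b(t) = |sum_alpha p_alpha exp(-i E_alpha t)|^2

print("dilution factor of state b      :", dilution)
print("long-time average of P_b(t)     :", survival[len(t)//2:].mean())
print("IPR of the most localized state :", ipr_of_eigenstates.max())
```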
A simple sketch of IVR in state space via classical diffusion and dynamical tunneling is shown in Fig. [fig23] and should be contrasted with the sketch shown in Fig. [fig3]. The vibrational superexchange picture of Fig. [fig3], explaining IVR through the participation of virtual off-resonant states, is equivalent to the picture shown in Fig. [fig23], which explains IVR in terms of the Arnold web and states on the energy shell. Furthermore, as sketched in Fig. [fig23], a given zeroth-order state is always close to some nonlinear resonance or other. Thus it becomes important to know the dynamically relevant regions of the web. Classically it is possible to drift along the resonance lines and come close to a junction where several resonances meet. At a junction the actions can move along an entirely different resonance line. In this fashion, loosely referred to as Arnold diffusion, the classical dynamics can lead to energy flow between a given point in Fig. [fig23] and another distant point. Arnold diffusion, however, is exponentially slow and its existence requires several constraints to be satisfied by the system. Therefore, dynamical tunneling between two far removed points in Fig. [fig23] is expected to be more probable and faster than the classical Arnold diffusion. On the other hand, for generic Hamiltonian systems, a specific primary resonance will be intersected by infinitely many weaker (higher order than the primary) resonances but finitely many stronger resonances. The stronger resonances can lead to a characteristic cross-resonance diffusion near the resonance junction. The cross-resonance diffusion, in contrast to the Arnold diffusion, takes place on time scales much shorter than exponential. In Fig. [fig23] one can imagine a scenario wherein a sequence of dynamical tunneling events finally lands up close to a resonance junction. Would the classical erratic motion near the junction interfere with the dynamical tunneling process? Is it then possible that an edge state might compete with an interior state due to CAT and/or RAT? Note that the four zeroth-order states in Fig. [fig20] are in fact close to one such resonance junction. The classical survival probabilities in Fig. [fig18] indicate some transport, but further studies are required in order to characterize the classical transport. The issues involved are subtle, since the competition between the coexisting classical and quantum routes has hardly been studied up until now and warrants further attention. The few studies that have been performed to date suggest that quantum effects tend to localize the along-resonance diffusion on the web.

Nearly a decade ago Leitner and Wolynes analysed the Hamiltonian in Eq. ([specham]) to understand the role of the vibrational superexchange at low energies. Within the state space approach it was argued that the superexchange mechanisms contribute significantly to the energy flow from the edge states to the interior of the state space. More importantly, the superexchange contributions modify the IVR threshold as quantified by the transition parameter defined in Eq. ([lwqet]). From the phase space perspective, as in Fig.
[ fig23 ]the direct and superexchange processes correspond to classical and quantum dynamical tunneling mechanisms respectively which are determined by the topology of the web .in contrast , note that the present studies on the model hamiltonian in eq .( [ 3dham ] ) actually shows dynamical tunneling at fairly high energies and between interior states in the state space .further studies on such model systems would lead to better insights into the nature of .it would certainly be interesting to see if the criteria of leitner and wolynes is related to the criteria for efficient cat and rat in the phase space .needless to say , such studies combined with the idea of local control demonstrated in the pevious section can provide important insights towards controlling ivr .the effective is expected to play a central role in deciding the importance of various phase space structures .however intutive notions based entirely on this ` geometrical ' criteria can be misleading . for instance , gong and brumer in their studies on the modified kicked rotor found the quantum dynamics to be strongly influenced by regular islands in the phase space whose areas were at least an order of magnitude smaller than the effective planck constant .another example comes from the work by sirko and koch wherein it was observed that the ionization of bichromatically driven hydrogen atoms is mediated by nonlinear common resonances with phase space areas of the order of the planck constant .further examples can be found in the recent studies .although for large molecules the vibrational superexchange viewpoint is perhaps the only choice for computations , for a mechanistic understanding the phase space perspective is inevitable .furthermore , as noted earlier , there exists ample evidence for the notion that even in large molecules the actual effective dimensionality of the ivr manifold in the state space is far less than the theoretical maximum of .hence detailed classical - quantum correspondence studies of systems with three or four degrees of freedom can provide useful insights into the ivr processes in large molecules . at the same timea proper understanding of and including the effect of partial barriers into the theory of dynamical tunneling is necessary for further progress .some hints to the delicate competition between quantum and classical transport through cantori can be found in the fairly detailed studies by maitra and heller on the dynamics generated by the whisker map .such studies are required to gain insights into the method of controlling quantum dynamics via suitable modifications of the local phase space structures .extending the local phase space control idea to systems with higher degrees of freedom poses a difficult challenge . 
if the sketch in fig . [ fig23 ] is reasonable then it suggests that the real origins of the hierarchical nature of ivr , due to both quantum and classical routes , are intricately tied to the geometry of the arnold web . significant efforts are needed to characterize the resonance web dynamically in order to gain a clear understanding of how the specific features of the web influence the ivr . recent works are beginning to focus on extracting such details in systems with three or more degrees of freedom , which promises to provide valuable insights into the classical and quantum mechanisms of ivr in polyatomic molecules . perhaps such an understanding would allow one to locally perturb specific regions of the resonance web and hence ultimately lead to control over the classical and quantum dynamics . it is a pleasure to acknowledge several useful discussions with arul lakshminarayan and peter schlagheck on the topic of dynamical tunneling . i am grateful to prof . steve wiggins for his hospitality at bristol , where parts of this review were written , in addition to discussions on the wavelet technique . financial support for the author s research reported here came from the department of science and technology and the council for scientific and industrial research , india .
|
dynamical tunneling , introduced in the molecular context , is more than two decades old and refers to phenomena that are classically forbidden but allowed by quantum mechanics . the barriers for dynamical tunneling , however , can arise in the momentum or more generally in the full phase space of the system . on the other hand the phenomenon of intramolecular vibrational energy redistribution ( ivr ) has occupied a central place in the field of chemical physics for a much longer period of time . despite significant progress in understanding ivr _ a priori _ prediction of the pathways and rates is still a difficult task . although the two phenomena seem to be unrelated several studies indicate that dynamical tunneling , in terms of its mechanism and timescales , can have important implications for ivr . it is natural to associate dynamical tunneling with a purely quantum mechanism of ivr . examples include the observation of local mode doublets , clustering of rotational energy levels , and extremely narrow vibrational features in high resolution molecular spectra . many researchers have demonstrated the usefulness of a phase space perspective towards understanding the mechanism of ivr . interestingly dynamical tunneling is also strongly influenced by the nature of the underlying classical phase space . recent studies show that chaos and nonlinear resonances in the phase space can enhance or suppress dynamical tunneling by many orders of magnitude . is it then possible that both the classical and quantum mechanisms of ivr , and the potential competition between them , can be understood within the phase space perspective ? this review focuses on addressing the question by providing the current state of understanding of dynamical tunneling from the phase space perspective and the consequences for intramolecular vibrational energy flow in polyatomic molecules . to appear in international reviews in physical chemistry ( october 2007 )
|
random graphs are used to model large networks that consist of particles , called nodes , which are possibly linked to each other by edges .the study of random graphs goes back to the works of and .since then , numerous random graph models have been introduced and studied in the literature . for an overviewwe refer the reader to .empirical studies of large data sets of real - life networks have shown that in many cases the degrees of two nodes belonging to the same edge are not independent ( where the degree of a node is defined to be the number of edges attached to it ) .it is observed that in some types of real - life networks the degree of a node is positively related to the degrees of its linked neighbors , while in other situations the degree of a node is negatively related to the degrees of its linked neighbors .this property is called _ assortativity _ or _ assortative mixing_. it has been discovered by that financial networks typically show negative assortativity and that the strength of the assortativity influences the vulnerability of the financial network to shocks , see also .in contrast , social networks tend to be positive assortative , see for instance . more examples of assortative networks are presented in and , where also quantities to measure assortativity in networks are proposed . on the other hand , there is only little literature about explicit constructions of random graphs showing assortative mixing .established constructions of directed random graphs with given bi - degree distribution , called _ configuration graphs _ , lead to _ non - assortative _ graphs , see for instance the construction presented in . here , the bi - degree of a node is a tuple , where is the number of edges arriving at node ( called _ in - degree _ ) and is the number of edges leaving from node ( called _ out - degree _ ) , and we say that node is of type , see figure [ figure : node type ] for an illustration .is of type and node is of type .edge is of type .,width=264 ] in this article we extend the non - assortative construction presented in by giving an explicit algorithm which allows to construct directed configuration graphs with a pre - specified assortativity based on a concept introduced in .namely , proposed to specify the graph not only through their node - types , but also through their edge - types .we define the type of an edge connecting node to node by a tuple with denoting the out - degree of node and denoting the in - degree of node , see figure [ figure : node type ] for an illustration .this notion of edge - types is directly related to the notion of assortativity . in the positive assortative case positively related to meaning that edges tend to connect nodes having similar degrees , and accordingly for the negative assortative case .if is independent of , then the graph is non - assortative .therefore , proposed to construct graphs with given node - type distribution describing the nodes _ and _ with given edge - type distribution describing the edges , while different choices of result in different types of assortativity in the constructed graphs , see figure [ figure : introduction ] for examples . .33 with nodes .they all have the same node - type distribution but different edge - type distributions . there are four different types of nodes present in each graph : , , and . 
edges of identical type are colored the same .edges that are arriving or leaving a highest - degree node are colored in different shades of blue .these edges are mainly present in the negative assortative case ( a ) . in the positive assortative case ( c ) , mainly nodes of the same type are connected .these edges are colored light blue , purple , orange and green .all other possible edges are colored red , which significantly appear only in ( b ) ., title="fig : " ] .33 with nodes .they all have the same node - type distribution but different edge - type distributions .there are four different types of nodes present in each graph : , , and .edges of identical type are colored the same .edges that are arriving or leaving a highest - degree node are colored in different shades of blue .these edges are mainly present in the negative assortative case ( a ) . in the positive assortative case ( c ) ,mainly nodes of the same type are connected .these edges are colored light blue , purple , orange and green .all other possible edges are colored red , which significantly appear only in ( b ) ., title="fig : " ] .33 with nodes .they all have the same node - type distribution but different edge - type distributions .there are four different types of nodes present in each graph : , , and .edges of identical type are colored the same .edges that are arriving or leaving a highest - degree node are colored in different shades of blue .these edges are mainly present in the negative assortative case ( a ) . in the positive assortative case ( c ) , mainly nodes of the same type are connected .these edges are colored light blue , purple , orange and green .all other possible edges are colored red , which significantly appear only in ( b ) ., title="fig : " ] nevertheless , there was not an explicit construction given in .the aim of this article is to construct random graphs where nodes and edges follow pre - specified given bivariate distributions and , and to give an explicit mathematical meaning to these distributions .let us first interpret the meaning of distributions and in more detail .node - type distribution has the following interpretation .assume we have a large directed network and we choose _ at random _ a node of that network , then the type of has distribution .similarly , edge - type distribution should be understood as follows .if we choose _ at random _ an edge of a large network , then its type has distribution .this concept of and distributions seems straightforward , however , it needs quite some care in order to give a rigorous mathematical meaning to these distributions , the difficulty lying in the `` randomly '' chosen node and edge obeying and , respectively : the graph as total induces dependencies between nodes and edges which implies that the exact distributions can only be obtained in an asymptotic sense ( this will be seen in the construction below ) .we give an explicit algorithm to construct a directed assortative configuration graph with a given number of nodes and given distributions and , and we prove that the type of a randomly chosen node of the resulting graph converges in distribution to as the size of the graph tends to infinity .similarly , the type of a randomly chosen edge converges in distribution to .these convergence results give a rigorous mathematical meaning to and in line with their interpretation given above .the proposed algorithm allows for self - loops and multiple edges . 
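the interpretation of the node - type and edge - type distributions as the laws of the type of a randomly chosen node and of a randomly chosen edge suggests simple empirical estimates from a finite directed graph . the sketch below is illustrative only ; the function names and the pearson - type correlation used here as an assortativity summary are assumptions rather than definitions taken from the text .

```python
import numpy as np
from collections import Counter

def empirical_type_distributions(edges, n):
    # edges: list of directed pairs (v, w), an edge from node v to node w;
    # n: number of nodes.  a node of type (j, k) has in-degree j and
    # out-degree k; an edge from v to w has type (k, j) with k the out-degree
    # of v and j the in-degree of w, following the conventions in the text.
    in_deg = np.zeros(n, dtype=int)
    out_deg = np.zeros(n, dtype=int)
    for v, w in edges:
        out_deg[v] += 1
        in_deg[w] += 1
    node_counts = Counter((in_deg[v], out_deg[v]) for v in range(n))
    edge_counts = Counter((out_deg[v], in_deg[w]) for v, w in edges)
    p_hat = {t: c / n for t, c in node_counts.items()}
    q_hat = {t: c / len(edges) for t, c in edge_counts.items()}
    return p_hat, q_hat

def empirical_assortativity(edges, n):
    # pearson correlation of the two coordinates of the edge type (k, j);
    # positive values indicate positive assortative mixing, negative values
    # the opposite, and values near zero a non-assortative graph.
    _, q_hat = empirical_type_distributions(edges, n)
    ks = np.array([t[0] for t in q_hat], dtype=float)
    js = np.array([t[1] for t in q_hat], dtype=float)
    w = np.array([q_hat[t] for t in q_hat])
    mk, mj = (w * ks).sum(), (w * js).sum()
    cov = (w * (ks - mk) * (js - mj)).sum()
    var_k = (w * (ks - mk) ** 2).sum()
    var_j = (w * (js - mj) ** 2).sum()
    return cov / np.sqrt(var_k * var_j)
```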
in order to obtain a simple graphwe delete all self - loops and multiple edges , and we show that the convergence results still hold true for the resulting simple graph .recently , an alternative approach to construct assortative configuration graphs with given distributions and was proposed in using techniques from .our construction is different from and more in the spirit of .moreover , we give a rigorous mathematical meaning to the given distributions and which relies on the law of large numbers only . in section [ section : model ]we introduce the model and state our main results .section [ section : algorithm ] specifies the algorithm to generate directed assortative configuration graphs .the implementation of the algorithm in the programming language can be downloaded from : in section [ section : examples ] we illustrate examples of assortative configuration graphs generated by our algorithm showing different assortative mixing .the proofs of the results are given in section [ section : proofs ] .consider fixed finite integers and which describe the maximal in- and out - degree of a node , respectively . for and define =\{l,\ldots , n\} ] and ] and \sum_{j\in[j]_0,k\in[k]_0}p_{j , k}=1 \sum_{k\in[k]_1,j\in[j]_1}q_{k , j}=1j\in[j]_0 k\in[k]_0 k\in[k]_1 j\in[j]_1 ] denotes the in - degree distribution of nodes .observe that in a given graph the number of edges with out - degree of being ] .this relation between nodes and edges implies that we can not choose and independently of each other to achieve that nodes and edges in the constructed graph follow and , respectively .we therefore assume that and satisfy the following consistency conditions , see also and , which implies that the above observation holds true in expectation in graphs where nodes and edges follow distributions and , respectively . ; \\\label{condition 3}\tag{c2 } q_j^-&~=~jp_j^-/z , \qquad j\in [ j]_1,\end{aligned}\ ] ] with _ mean degree _ } k p_k^+ ] .this says that , in expectation , the sum of in - degrees equals the sum of out - degrees if nodes and edges follow distributions and , respectively .given the number of nodes and given distributions and satisfying and the goal is to construct a graph such that the following statement is true in an asymptotic sense as the size of the graph tends to infinity : the type of a randomly chosen node has distribution and the type of a randomly chosen edge has distribution .the following theorem shows that this is indeed the case for graphs constructed by the algorithm provided in section [ section : algorithm ] and , hence , the theorem gives an explicit mathematical meaning to and .[ lemma : random degrees ] fix .let be the types of randomly chosen nodes of the graph generated by the algorithm provided in section [ section : algorithm ] .then , where are independent random variables having distribution .similarly , the types of randomly chosen edges converge in distribution , as , to a sequence of independent random variables having distribution .if we consider a graph where nodes and edges follow distributions and , respectively , then we expect that the relative number of nodes of type is close to and that the relative number of edges of type is close to . theorem [ lemma : empirical distributions 2 ] below makes this statement precise for graphs constructed by the algorithm provided in section [ section : algorithm ] . to formulate the theorem , denote by the number nodes of type , ] , and by the number edges of type , ] , of the constructed graph of size . 
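as a brief aside , the consistency conditions ( c1 ) and ( c2 ) are easy to verify numerically for candidate distributions . the sketch below assumes the distributions are stored as dictionaries indexed by types ; this storage choice and the tolerance are implementation assumptions and not part of the original text .

```python
def check_consistency(p, q, tol=1e-12):
    # p[(j, k)] is the node-type distribution, q[(k, j)] the edge-type
    # distribution.  conditions (c1)-(c2): q_k^+ = k * p_k^+ / z and
    # q_j^- = j * p_j^- / z, with mean degree z = sum_k k * p_k^+.
    p_out, p_in = {}, {}
    for (j, k), prob in p.items():
        p_out[k] = p_out.get(k, 0.0) + prob      # p_k^+ : out-degree marginal
        p_in[j] = p_in.get(j, 0.0) + prob        # p_j^- : in-degree marginal
    z = sum(k * pk for k, pk in p_out.items())
    q_out, q_in = {}, {}
    for (k, j), prob in q.items():
        q_out[k] = q_out.get(k, 0.0) + prob      # q_k^+ : first-coordinate marginal
        q_in[j] = q_in.get(j, 0.0) + prob        # q_j^- : second-coordinate marginal
    ok_c1 = all(abs(q_out.get(k, 0.0) - k * pk / z) < tol
                for k, pk in p_out.items() if k >= 1)
    ok_c2 = all(abs(q_in.get(j, 0.0) - j * pj / z) < tol
                for j, pj in p_in.items() if j >= 1)
    return ok_c1 and ok_c2
```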
the total number of edges we denote by .theorem [ lemma : empirical distributions 2 ] says that the relative frequencies and converge to and , respectively , in probability as .[ lemma : empirical distributions 2 ] for the random graph constructed by the algorithm provided in section [ section : algorithm ] we have for any , k\in [ k]_0 } \left| \frac{\mcv_{j , k}}{n } - p_{j , k } \right| + \sum_{k \in [ k]_1 , j\in [ j]_1 } \left| \frac{\mce_{k , j } } { \mce } - q_{k , j } \right| > \varepsilon \right ] ~=~0.\ ] ] the algorithm provided in section [ section : algorithm ] generates a graph possibly not being simple , i.e. it may contain self - loops and multiple edges . to obtain a simple graph we delete ( erase ) all self - loops and multiple edges , andwe call the resulting graph _ erased configuration graph_. the following theorem states that the asymptotic results still hold true for the erased configuration graph . [ theorem : simple ] the results of theorem [ lemma : random degrees ] and theorem [ lemma : empirical distributions 2 ] still hold true for the erased configuration graph , based on the algorithm provided in section [ section : algorithm ] .the algorithm to construct directed assortative configuration graphs starts from , where the authors construct a directed random graph with nodes and given in - degree and given out - degree distributions in the following way .they assign to each node independently an in - degree and an out - degree according to the given distributions , also independently for different nodes .some degrees are then modified if the sum of in - degrees differs from the sum of out - degrees so that these sums of degrees are equal , and the sample is only accepted if the number of modifications is not too large .finally , in - degrees are randomly paired with out - degrees .note that this construction leads to a _ non - assortative _ configuration graph and the in- and out - degree of a given node are independent .the construction of an assortative configuration graph is more delicate since in - degrees can not be randomly paired with out - degrees . in our constructionwe generate node - types using directly node - type distribution . independently of the node - types we generate edges having independent edge - types according to distribution .finally , we match in- and out - degrees of nodes with edges of corresponding types .in general , the matching can not be done exactly , but with high probability the number of types that need to be changed accordingly is small for large , due to consistency conditions and .we first describe the algorithm in detail and then comment on each step of the algorithm below .* algorithm to construct directed assortative configuration graphs .* assume maximal degrees and two probability distributions and satisfying and with mean degree are given .choose fixed .choose so large that there exists with .set . 1. * step 1 . *assign to each node independently a node - type according to distribution .generate edges having independent edge - types according to distribution , independently of the node - types .define j\in[j]_1 } ; \\\left| n_j^- - p_j^- n ' \right| \le p_j^-n^\delta/2 & \quad \text { and } \quad & \left| e_j^- - p_j^- n '' \right| \le p_j^-n^\delta/2 \quad\text { for all }. \end{aligned}\ ] ] proceed to step 2 if event occurs .otherwise , proceed to step 5 . 2 .* for each ] do the following .* add edges of type ; * add edges of type .+ set }r_k^+ ] .* step 3 . * set the type of each node in to . 
foreach ] do the following .* take the first nodes in having out - degree and change their out - degrees to ; * take the first nodes in having in - degree and change their in - degrees to .* for each ] do the following . *assign to each node having out - degree exactly uniformly chosen edges of type with ; * assign to every node having in - degree exactly uniformly chosen edges of type with .+ proceed to step 6 .* step 5 . *define node - types , and for all .insert an edge that connects node to node .6 . * step 6 .* return the constructed graph .* explanation of the algorithm . *we say that node is a _-node _ if its out - degree is ]. * step 1 .* we generate only node - types and we keep nodes undetermined for possible modifications in later steps .the expected number of generated -nodes is and the expected number of generated -edges is . using condition , the expected number of -nodes needed for the generated -edges is therefore . henceforth , if is close to its expectation , it dominates the number of generated -nodes which is of order .step 3 is then used to correct for this imbalance in a deterministic way , and event guarantees that this correction is possible . for receiving an efficient algorithm we would like event to occur sufficiently likely , which is exactly stated in the next lemma .[ lemma : acceptance set ] we have \to1 ] , see also figure [ figure : construction ] for an illustration ., the number of generated -nodes , , lies in the interval of length around .the number of -nodes needed for the generated -edges , , lies in the interval of length around . by definition of gap between the two intervals is of size between and . from thisit follows that there are at most additional -nodes needed in order to attach all generated -edges to -nodes ., width=453 ] therefore , on event , the total number of additional nodes needed having a positive out - degree is } \left ( e_k^+ - n_k^+\right ) & \le & 2\lceil n^\delta \rceil \sum_{k\in[k]_1 } p_k^+ ~\le~ 2\lceil n^\delta \rceil . \end{aligned}\ ] ] hence , we have sufficiently many undetermined nodes in to which we can assign out - degrees accordingly in step 3 , and similarly for the in - degrees .* step 2 . * in general, the number of generated -edges is not a multiple of .therefore , we use step 2 to correct for this cardinality by defining additional edges of type .note that each such edge requires a node having in - degree .therefore , in total nodes having in - degree are additionally needed , and similarly for the added edges of type , ] , and similarly for the total number of nodes having in - degree ] , we aim to find possible edge - type distributions such that and satisfy and .conditions and imply that the marginal distributions of are fully described by the marginal distributions of , and their respective cumulative distribution functions are given by for ] .the possible joint distributions are therefore given by where ^ 2 \to [ 0,1] ] .then , corresponds to the minimal possible assortativity coefficient ] .copula leads to non - assortativity and in this case we have for all ] .note that for given , does not uniquely determine . on the other hand, one can always find ] .this allows to construct directed assortative configuration graphs having any given assortativity coefficient that is possible for given node - type distribution . 
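the copula construction sketched above can be made concrete on a finite type space by taking rectangle increments of the copula evaluated at the marginal cumulative distribution functions ; this keeps the marginals forced by ( c1 ) - ( c2 ) for any copula . the discretization rule below is an assumption used for illustration only .

```python
import numpy as np

W = lambda u, v: max(u + v - 1.0, 0.0)   # lower frechet bound: countermonotone
PI = lambda u, v: u * v                  # independence copula: non-assortative
M = lambda u, v: min(u, v)               # upper frechet bound: comonotone

def edge_type_distribution(q_plus, q_minus, copula=PI):
    # q_plus[k] = q_k^+ for k >= 1 and q_minus[j] = q_j^- for j >= 1 are the
    # marginals forced by (c1)-(c2).  the joint law is the rectangle increment
    # of the copula applied to the marginal cdfs, which is nonnegative because
    # copulas are 2-increasing, and it reproduces the prescribed marginals.
    ks, js = sorted(q_plus), sorted(q_minus)
    Fk = np.concatenate(([0.0], np.cumsum([q_plus[k] for k in ks])))
    Fj = np.concatenate(([0.0], np.cumsum([q_minus[j] for j in js])))
    q = {}
    for a, k in enumerate(ks, start=1):
        for b, j in enumerate(js, start=1):
            mass = (copula(Fk[a], Fj[b]) - copula(Fk[a - 1], Fj[b])
                    - copula(Fk[a], Fj[b - 1]) + copula(Fk[a - 1], Fj[b - 1]))
            if mass > 1e-15:
                q[(k, j)] = mass
    return q
```

choosing the upper bound m gives the most positive assortative mixing compatible with the marginals , the lower bound w the most negative one , and the independence copula the non - assortative case in which the edge - type distribution factorizes .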
to illustrate assortativity in an example we consider maximal in- and out - degree and node - type distribution , , given by distribution only allows for nodes of types and , with respective probabilities and , which results in a mean degree of .clearly , these nodes can only be connected through edges of types , , and . since is diagonal , consistency conditions and fully specify the edge - type distribution which is given by for ] or ] as . for each n\to\infty n\to\infty ] , using condition . choose fixed and let ] , ] with and , hence , the right - hand side is bounded in . to bound the expectation of ,let and denote by the number of multiple edges leaving from node .the probability that two distinct edges leaving from are arriving at the same node is at most it follows that & = & \sum_{v=1}^n \e[\left.m_v\right| a_n ] ~\le~ \sum_{v=1}^n \sum_{w=1}^n \e\left[\left .\binom{k_v}{2 } \frac{j_w(j_w-1)}{j_w\mcv_{j_w}^-\left(j_w\mcv_{j_w}^--1\right ) } 1_{\ { j_w\ge2 \ } } 1_{\ { k_v\ge2 \ } } \right| a_n\right ] \\&\le & \sum_{v=1}^n \sum_{w=1}^n \e\left[\left .\frac{k_v^2 j_w^2}{2j_w\mcv_{j_w}^-\left(j_w\mcv_{j_w}^--1\right ) } 1_{\ { j_w\ge2 \ } } 1_{\ { k_v\ge2 \ } } \right| a_n\right ] . \end{aligned}\ ] ] since for , it follows that & \le & \sum_{v=1}^n \sum_{w=1}^n \e\left[\left .\frac{k_v^2}{\left(\mcv_{j_w}^-\right)^2}1_{\ { j_w\ge2 \ } } 1_{\ { k_v\ge2 \ } } \right| a_n\right ] .\end{aligned}\ ] ] using that on , the number of nodes having in - degree is at least , it follows that & \le & \frac{n^2k^2 } { \max\left\{1,\min_{j\in[j]_1}\left\{p_{j}^-n'-p_{j}^-n^\delta/2\right\ } \right\}^2}. \end{aligned}\ ] ] the right - hand side is again bounded in .this finishes the proof of lemma [ lemma : expected ] .we first show that theorem [ lemma : empirical distributions 2 ] holds true for the erased configuration graph .let be the number of self - loops and let be the number of multiple edges generated by the algorithm . for ] denote by the number of constructed nodes of type after erasing all self - loops and multiple edges .similarly we define for ] . in order to prove that theorem [ lemma : empirical distributions 2 ] holds true for the erased configuration graph, we show that for any , , k\in [ k]_0 } \left| \frac{\mcv^e_{j , k}}{n } - p_{j , k } \right| \ge \varepsilon \quad \text { or } \quad \sum_{k \in [ k]_1 , j\in [ j]_1 } \left| \mce^e_{k , j } - q_{k , j}\mce^e \right| \ge \varepsilon \mce^e \right ] ~=~0,\ ] ] where denotes the total number of edges in the erased configuration graph ( which could be equal to if all edges of the constructed graph are self - loops ) .let and choose ] .we have ~\le~ \p\left[\left| \frac{\mcv^e_{j , k}- \mcv_{j , k}}{n } \right| \ge \varepsilon/2 \right ] + \p\left[\left| \frac{\mcv_{j , k}}{n } - p_{j , k } \right| \ge \varepsilon/2 \right].\ ] ] the second term on the right - hand side converges to as by theorem [ lemma : empirical distributions 2 ] . 
for the first term , note that ~\le~ \p\left[\sum_{v=1}^n \left|1_{\{j_v^e = j , k_v^e = k\}}- 1_{\{j_v = j , k_v = k\ } } \right| \ge \varepsilon n/2 \right],\ ] ] where denotes the type of node in the erased configuration graph .denote by the number of self - loops attached to .denote by the number of multiple edges leaving from and denote by the number of multiple edges arriving at .note that if and only if , and similarly if and only if .we therefore have that where we set .hence , & \le & \p\left[\sum_{v=1}^n 1_{\ { s_v+m_v>0\ } } \ge \varepsilon n/2 \right ] ~\le~ \p\left[\sum_{v=1}^n ( s_v+m_v ) \ge\varepsilon n/2 \right].\end{aligned}\ ] ] it follows by markov s inequality , lemma [ lemma : acceptance set ] and lemma [ lemma : expected ] that ~\le~ \p\left[s_n+2m_n \ge n\varepsilon/2 \right ] ~\le~ \frac{\e\left[\left .a_n \right]}{n\varepsilon/2 } + \p\left[a_n^c\right ] ~\to~0 , \quad \text { as .}\ ] ] we now prove the same result for the edge - types under the conditional probability , conditional given , which is enough due to lemma [ lemma : acceptance set ] .note that on event the number of generated edges is at least , see step 1 of the algorithm .it follows by markov s inequality and lemma [ lemma : expected ] , and since , that for every , ~=~ \p\left[\left .\frac{s_n+m_n}{\mce } \ge \eta \;\right| a_n \right ] ~\le~ \p\left[\left .zn''\eta \;\right| a_n \right ] ~\to~0,\ ] ] as .note that for fixed ] , ~\le~ \p\left[\left .\left| \mce^e_{k , j}- \mce_{k , j } \right| \ge \varepsilon\mce^e/2 \;\right| a_n \right ] + \p\left[\left .\left| \mce_{k , j } - q_{k , j}\mce^e \right| \ge \varepsilon\mce^e/2 \;\right| a_n \right].\ ] ] for the second term on the right - hand side we have & = & \p\left[\left. \left| \frac{\mce_{k , j}}{\mce } - \frac{\mce - s_n - m_n}{\mce } q_{k , j } \right| \ge \frac{\mce^e}{\mce}\varepsilon/2 \;\right| a_n \right ] \\&\le & \p\left[\left .\left| \frac{\mce_{k , j}}{\mce } - q_{k , j } \right| \ge \frac{\mce^e}{\mce}\varepsilon/4 \;\right| a_n \right ] + \p\left[\left . \left(s_n+ m_n\right ) q_{k , j } \ge \mce^e\varepsilon/4 \;\right| a_n \right ] \\&\le & \p\left[\left .\left| \frac{\mce_{k , j}}{\mce } - q_{k , j } \right| \ge \frac{\mce^e}{\mce}\varepsilon/4 \;\right| a_n \right ] + \p\left[\left .s_n + m_n \ge \frac{zn''\varepsilon/4}{q_{k , j}+ \varepsilon/4 } \;\right| a_n \right],\end{aligned}\ ] ] where in the last step we used that . by theorem [ lemma : empirical distributions 2 ] , and lemma [ lemma : expected ] ,the right - hand side converges to as . to bound the first term ,note that the erasure procedure changes the number of edges of type because such edges may get erased , but also because an erased edge changes the types of unerased edges that are leaving from node or that are arriving at node .it follows that ~\le~ \p\left[\left .s_n+m_n + \sum_{e=1}^{\mce^e } \left| 1_{\ { k^e_e = k , j_e^e = j \}}- 1_{\ { k_e = k , j_e = j \ } } \right| \ge \varepsilon\mce^e/2 \;\right| a_n \right],\ ] ] where denotes the type of edge in the erased configuration graph . for an edge have if and only if , and similarly if and only if . 
therefore , for , where we set and .it follows that & \le & \p\left[\left .s_n+m_n+ \sum_{e=1}^{\mce^e } 1_{\ { s_e+m_e>0\ } } \ge \varepsilon\mce^e/2 \;\right| a_n \right ] \\&\le & \p\left[\left .s_n+m_n + \sum_{v=1}^n\left ( k_v1_{\ { s_v+m^+_v>0\ } } + j_v1_{\ { s_v+m_v^->0\ } } \right ) \ge \mce^e \varepsilon/2 \;\right| a_n \right ] \\&\le & \p\left[\left .s_n+m_n + k ( s_n+m_n ) + j ( s_n+m_n ) \ge \mce^e\varepsilon/2 \;\right| a_n \right],\end{aligned}\ ] ] which converges to as by lemma [ lemma : expected ] and the fact that .this finally proves .we now prove that theorem [ lemma : random degrees ] holds true for the erased configuration graph .denote by the set of nodes whose types have been changed due to the erasure procedure .for the probability that a uniformly chosen node belongs to we have ~=~ \e\left [ \frac { \left| \{n'+1,\ldots , n\}\cup\mcv_{\rm mod}\right|}{n } \right ] ~\le~ \e\left [ \frac { n - n ' + s_n+2m_n } { n } \right].\ ] ] since the graph returned by the algorithm in case event does not hold has no self - loops or multiple edges , see step 5 , it follows that ~\le~ \e\left[\left .\frac { n - n ' + s_n+2m_n } { n } \right| a_n \right ] + \frac { n - n ' } { n},\ ] ] which converges to as by lemma [ lemma : expected ] .going through the proof of theorem [ lemma : random degrees ] , we see that this observation is enough to conclude that the types of randomly chosen nodes of the erased configuration graph converge in distribution to a sequence of independent random variables each having distribution as . to prove the same result for the edge - types of the erased configuration graph , note that it may happen that all edges of the graph generated by the algorithm of section [ section : algorithm ] are self - loops , i.e. . in this casethe edge set is empty for the erased configuration graph and we define `` the type of a randomly chosen edge '' to be identical to if event occurs , and we define it to be as usual if event does not occur .nevertheless , the probability of event converges to as by lemma [ lemma : acceptance set ] and above .therefore , using similar arguments as above for the node - types , we conclude that the types of randomly chosen edges of the erased configuration graph converge in distribution to a sequence of independent random variables each having distribution as .
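as a compact summary of the construction analysed above , the following simplified sketch samples node types from the node - type distribution and edge types from the edge - type distribution , and then matches them uniformly at random . it deliberately ignores the corrections of steps 2 , 3 and 5 of the algorithm ( the cardinality adjustments and the rejection step ) , so it is an illustration of the main idea rather than a faithful implementation ; all names are assumptions .

```python
import random
from collections import defaultdict

def simplified_assortative_graph(n, p, q, rng=None):
    # sample node types from p, then repeatedly sample an edge type (k, j)
    # from q and connect a uniformly chosen free out-stub of a node with
    # out-degree k to a uniformly chosen free in-stub of a node with
    # in-degree j.  stops silently when no matching stub remains; self-loops
    # and multiple edges are allowed, as in the unerased construction.
    rng = rng or random.Random(0)
    node_types = rng.choices(list(p), weights=list(p.values()), k=n)
    out_stubs = defaultdict(list)   # out-degree k -> node ids, with multiplicity
    in_stubs = defaultdict(list)    # in-degree j  -> node ids, with multiplicity
    for v, (j, k) in enumerate(node_types):
        out_stubs[k].extend([v] * k)
        in_stubs[j].extend([v] * j)
    m_target = sum(k for (_j, k) in node_types)
    edge_types = rng.choices(list(q), weights=list(q.values()), k=m_target)
    edges = []
    for (k, j) in edge_types:
        if not out_stubs[k] or not in_stubs[j]:
            continue            # the full algorithm repairs this imbalance instead
        v = out_stubs[k].pop(rng.randrange(len(out_stubs[k])))
        w = in_stubs[j].pop(rng.randrange(len(in_stubs[j])))
        edges.append((v, w))
    return node_types, edges
```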
|
constructions of directed configuration graphs with given bi - degree distribution were introduced in random graph theory some years ago . these constructions lead to graphs in which the degrees of the two nodes belonging to the same edge are independent . however , it is observed that many real - life networks are assortative , meaning that the degrees of connected nodes are correlated : edges may tend to connect nodes of similar degree ( positive assortativity ) or to connect low degree nodes with high degree nodes ( negative assortativity ) . in this article we provide an explicit algorithm to construct directed assortative configuration graphs with given bi - degree distribution and arbitrary pre - specified assortativity . _ keywords : _ random graphs ; degree distribution ; assortativity ; assortative mixing ; assortativity coefficient ; degree correlations
|
[ sec : intro ] in this paper we are interested in finite volume numerical simulations of pdes of hyperbolic type with transport term . more precisely we are looking to correct the numerical diffusion which appears in the simulations of the asymptotic profile of such pdes . indeedthis numerical diffusion is an artifact that is inherent to most of existing numerical schemes even for high - order ones .so , as reported first in and confirmed in for the lifshitz - slyozov equation which is of transport type , capturing numerically the exact asymptotic profile for transport equation is a real challenge because numerical diffusion smooths out the fronts and leads to an artificial profile . in the context of biology modelling , specially in population dynamics ,most of existing models contain at least a transport part which takes into account the growth of the considered population , it is crucial to recover the exact asymptotic profile in order to predict the behavior of the population or to estimate some parameters for instance its growth , division or death rates by using measures that are based on the profile .some authors address the question consisting to correct this inherent numerical diffusion by establishing adequate schemes such as the weno ( weighted essentially non - oscillatory ) scheme , anti - dissipative scheme the adm ( anti dissipative method ) , etc .all these schemes define their numerical fluxes in order to minimize at best the artifacts .we propose here a hybrid finite volume scheme where the construction of the numerical flux appears as a combination of the weno and adm fluxes .this scheme is very suitable for firstly removing de numerical artifacts but also correct the stair treads appearing in adm scheme .more precisely we use a discontinuity detector at each grid point and use the weno order 5 scheme when the solution is regular near the point otherwise we apply the adm . as a validation of this hybrid scheme we use two kind of test cases .the first one is a classical ( academic ) test case for transport equation where the initial distribution is considered in one hand as a very oscillatory one as given in ; in the other hand , we use in 2d the famous zalesak disk test that is given in .the second kind of test is based on population dynamics and polymerization process where we consider a population of cells or polymers growing either by nutrients uptake ( for cells ) or by gain and lost of monomers ( for polymers ) .this application on population dynamics is of great importance because in many cases some predictions on the numerical behavior of the solution allow to investigate inverse problem of estimating relevant parameters of the considered model .so , having a bad numerical reconstruction induces bad parameters estimation .the paper is organized as follows . in section[ sec : adm0 ] , we recall the biology context on which we focus our study . in the third section ,we detail the derivation of our hybrid scheme which is based on a general conservation laws .the section [ sec : detector ] is devoted to the numerical results and comparison of our hybrid method against the weno and adm schemes .in cell biology as in physics of particles , the evolution dynamics of a group of cells or macro - particles in cell culture or in a bath of micro - particles plays a central role in the understanding and the explanation of some physical and biological behaviors. 
often the observed quantities evolve by growth process either by nutrients uptake ( the example of the micro - organism _ daphnia _ which uses the nutrients to grow ( * ? ? ?4.3.1 ) , ) or by earnings micro - particles by polymerization ( for polymers modeling ) . lot of conjectures are based on the observation of these quantities , especially on their evolution dynamics in long term . in modeling point of viewthis long term evolution dynamic is obtained by the analysis of the asymptotic behavior of the considered quantity . following the processes taken into account in the model ,the asymptotic behavior can either be dependent or be independent of the initial distribution of the considered group of cells or parasites .indeed , in the case where the considered population evolves only by growth it is proven in lifshitz - slyozov equations that the asymptotic behavior depends on the initial distribution . however, when aggregation process or division process is taken into account , the asymptotic behavior is regularized towards a quasi - universal profile as shown in and then it is independent to the initial distribution .+ a crucial point linked to the modelling of these phenomena is the numerical simulation which can lead , following the used scheme , to bad conjectures on the behavior of the model .these bad conjectures result to some numerical artifacts caused by numerical diffusion inherent to some standard schemes . in order to apply the hybrid method proposed later in this paper , we consider the following test case where a size - structured cells population model is taken into account and the cells evolve by gain or loss of micro cells .let denote by the size density repartition of cells of size , located at position ( ) at time where is a smooth bounded domain .then the model can be written for all as follows where is the concentration of micro - organisms ( nutrients ) and it follows a diffusion equation of this form : for this kind of coupling model - , the kinetic coefficients , are interpreted as the rates at which cells gain or loss nutriments ( monomers , micro - organisms ) .we assume the micro - cells ( or monomers ) to follow a diffusion equation as depicted in equation .we endowed this diffusion equation with a homogeneous neumann boundary condition where is the outward unit vector at point .+ the problem - is a variant of the very known standard lifshitz - slyozov system which models the evolution of a population of macro - particles immersed in a bath of monomers .the analytical study of - concerning the existence , uniqueness and properties of the solution is rigorously done in and the main result is based on the following hypothesis : + * hypothesis . *the kinetic coefficients are required to satisfy ; is an increasing function with and ; and for any there exists such that for . + the initial condition satisfy ; .+ with this previous hypothesis , the authors in prove the following statement on the well - posedness of - : there exists a weak solution of - with , for any , + , , ; l^2 ( \omega)-weak) ] .in addition they prove thanks to the neumann boundary condition , the following mass preservation relation : =0.\ ] ] the fact that the space variable acts as a parameter in the size density repartition implies that is a transport equation and the study of its asymptotic behavior is numerically very challenging . 
indeed , following the chosen model as in , one needs in the modelling and simulations to recover the evolution dynamics of the considered population such as the time evolution of the total number of individuals ( even in asymptotic time ) , the total mass of the population or the conservation law fulfilled by the model . for the numerical simulations of the evolution dynamics ,a very adapted scheme is required in order to capture in exact way the solution of the system without artifact in order to get the right and essential properties . that s the aim to introduce the following hybrid method .in this section , we consider the following advection equation where is a given smooth velocity field .let consider a regular mesh , with constant step : the cells are the intervals , \ ; i\in\mathbb{n} ] for the consistency of the scheme , we write the very definition which is that the numerical flux between cell and cell belongs necessary to the interval defined by the numerical solutions and .so the consistency constraint is written as follows from a positive initial solution , we need to impose a condition on the numerical fluxes in order to ensure the positivity of the numerical approximation of the solution given by .for this , we are looking for conditions so that so , obtaining the later inequality is done by imposing the flux to satisfy because by the consistency constraints we know that . [ propo1 ] assuming the _ cfl _ condition be satisfied. then for any the interval ] for any , then the following assertions hold : * the scheme is consistent with . *the scheme remains positive for positive initial solution : mean if for any then too .* the scheme satisfies : if then , while if then . * proof . *the proof of the proposition is essentially based on the gathering of the results and . for more details on the proof , one can refer to .+ in the previous reasoning , we exclude the case where the velocities on the interfaces cell are of different sign .so in the case where and then it is obvious that the positivity of the numerical solution is obtained without any time step condition .nevertheless , the case where and mean the possible empty of the cell from the two sides .so we choose the upwind fluxes and the numerical solution becomes constant in the cell . in this above presentation, the numerical flux is designed for general conservation laws , while the one in is more suitable for the transport equations . for the anti - dissipative strategy, we define the flux by solving the following minimization problem : \end{array}\ ] ] which solution is given for instance in the case , and by this kind of anti - dissipative method is very suitable for discontinuous initial solution , which has been shown in .however , it is not suitable for smooth solution .indeed , it turns the very smooth solution into a series of step functions with respect to time evolution .the objective of remain part is to find an alternative method such that it keeps the shock near discontinuities while has high accuracy in smooth regions . to the end , we propose the following hybrid method . we denote the flux computed by the anti - dissipative method and the flux computed by a high accurate method , for instance the fifth order method .so our desirable flux by the hybrid method will just be a convex combination of and , _i.e. _ where , .moreover , it is desirable : * and , near discontinuities , * and , in smooth regions , where is a large enough number that causes to be the dominant term in smooth regions . 
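to make the flux construction above concrete , the sketch below implements a limited - downwind ( anti - dissipative ) flux for the constant - velocity case together with the convex combination with a user - supplied high - order flux such as weno5 . the stability interval and the exponential weight are plausible choices consistent with the desiderata stated above , but they are assumptions and not the exact formulas of the paper .

```python
import numpy as np

def adm_fluxes(u, nu):
    # anti-dissipative (limited downwind) fluxes for u_t + a u_x = 0 with a > 0,
    # cfl number nu = a*dt/dx <= 1 and periodic cells.  each flux is taken as
    # close as possible to the downwind value u_{i+1} inside an interval that
    # enforces consistency and a local maximum principle (despres-lagoutiere idea).
    n = len(u)
    F = np.empty(n)                     # F[i] approximates the flux at x_{i+1/2}
    F_left = u[-1]                      # upwind value used as the starting left flux
    for i in range(n):
        up, down = u[i], u[(i + 1) % n]
        lo, hi = min(up, down), max(up, down)                # consistency interval
        m_i, M_i = min(u[i - 1], u[i]), max(u[i - 1], u[i])  # admissible cell values
        lo = max(lo, F_left + (u[i] - M_i) / nu)             # keeps u_i^{n+1} <= M_i
        hi = min(hi, F_left + (u[i] - m_i) / nu)             # keeps u_i^{n+1} >= m_i
        F[i] = min(max(down, lo), hi) if lo <= hi else up    # fallback: upwind
        F_left = F[i]
    return F

def hybrid_fluxes(u, nu, F_high, dx, c=1.0):
    # convex combination of the anti-dissipative flux and a high-order flux
    # (e.g. weno5), weighted by a fourth-order interpolation-error indicator
    # that excludes the node itself; the exponential weight below is only a
    # plausible choice consistent with the desiderata, not the paper's formula.
    F_adm = adm_fluxes(u, nu)
    ui = (-np.roll(u, 2) + 4 * np.roll(u, 1) + 4 * np.roll(u, -1) - np.roll(u, -2)) / 6.0
    e = np.abs(u - ui) / (np.max(np.abs(u)) + 1e-14)     # global scaling
    e_face = np.maximum(e, np.roll(e, -1))               # upwind-type choice at x_{i+1/2}
    theta = 1.0 - np.exp(-e_face / (c * dx ** 2))        # ~1 at jumps, O(dx^2) where smooth
    return theta * F_adm + (1.0 - theta) * F_high
    # one forward-euler update would then read: u_new = u - nu * (F - np.roll(F, 1))
```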
according to these considerations ,we then need to identify the smoothness of solution .we consider a similar smoothness measurement , which was first proposed in where is a scaling value given as figure [ fig : interpolation ] shows the interpolated value , _i.e. _ is obtained by the fourth - order interpolation .note that the point itself , as shown in figure [ fig : interpolation ] , is not included in the interpolation . interpolation error between the node value and interpolated value .,width=264 ] then we choose the smoothness measurement at the cell interface by an upwind way e_{i+1 } , & \text { else}. \end{array } \right.\ ] ] finally , the weights have forms where the parameter is a constant , and a discussion of its value will be given in the section [ sec : parameter_cx ] .we now present the order analysis of the weights . in smooth regions , using a taylor expansion analysis , we have : now using the taylor expansion of the exponential function and assuming , one obtains : and therefore , we use the fifth order weno flux in smooth regions , _ for a discontinuity , since the size of the discontinuity does not change as , one can conclude therefore , applying the same analysis used above leads to thus , at discontinuities the anti - dissipative flux is activated . in the original method , the parameter depended on the index and it was a convex combination of the global and local scales , _i.e. _ where , .the parameter was chosen as or in .this choice can highlight the small jumps in the solution .however , in the paper we focus on the major jumps in the solution. therefore , the global scale seems more appropriate .again , in the original method , the parameter is a constant , which varies from problem to problem .at contrast , we propose the parameter depending on the scaled mesh step , _i.e. _ and the exponent . moreover , according to the numerical experience in section [ sec : parameter_cx ] , the parameter can be a fixed number .in this section , we will present several numerical results to depict the behaviors of our hybrid method and its applications in population dynamics . here, we first discuss the parameter in , which is the key point in our hybrid method .then we perform the classical tests with the 1d free transport equation and the 2d rotation equation to verify the convergence of our hybrid method for the regular and irregular initial data . in the sequel ,the third order runge - kutta method is used for time discretization . to determine the parameter , we use the free transport equation ,\quad t\geq0 , \label{eq : test1d}\ ] ] with a very oscillatory initial condition , given by , ,&\textrm { if } -0.8\leq x\leq-0.6,\\[3 mm ] f_2(x ) = 1,&\textrm { if } -0.4\leq x\leq-0.2,\\[3 mm ] f_3(x ) = 1-|10(x-0.1)| , & \textrm { if } 0\leq x\leq0.2,\\[3 mm ] f_4(x ) = \frac{1}{6}[f(x , z-\delta)\,+\,f(x , z-\delta)\,+\,4\,f(x , z)],&\textrm { if } 0.4\leq x\leq0.6,\\[3 mm ] 0,&\text { otherwise}. \end{array } \right. \label{eq : discontinuous_initial2}\ ] ] where , with , , , and .moreover , the periodic boundary conditions is considered .the numerical results is presented in figure [ fig:1dtransport ] .we first see that the weno method is well adapted for the smooth regions of solution , while it becomes significantly diffuse near the steep regions . 
for the refined mesh, we can always observed this diffusion near the step function clearly .the anti - diffusive method has perfect performance for the step function , however it destroys the very smooth function .indeed , it turns the smooth function into a sawtooth profile , which can not be regularized by refining mesh .[ cols="^,^ " , ]in this paper we have proposed a hybrid finite volumes scheme based on the flux convex combination between the anti - dissipative method ( adm ) and the weno order 5 method .the obtained numerical results show a good accuracy in reconstructing the solution of transport type equation even in case of discontinuous initial data .indeed the simulations in figure [ fig : growth_homo_reg ] show results that are as good as the ones in weno scheme and better than adm scheme which fail for regular initial distribution . in reverse , for irregular initial distribution ,the hybrid scheme show better numerical results than de adm scheme in the sense that it advects very well the solution without `` stairs '' and it shows also better results than weno scheme which develop numerical diffusion for irregular distribution as depicted in figure [ fig : growth_homo ] .so , when the weno order 5 scheme fails because of numerical diffusion artifact , the hybrid method remains anti - dissipative and when adm scheme develops stairs " like oscillations , the hybrid scheme corrects them .this property is very suitable for long term asymptotic behavior of the solution of population dynamics , as presented in the numerical simulations for the polymerization / depolymerization type models .finally , lets point out that the flux construction of our hybrid scheme depends on an empirical parameter which is related to the weights in the convex combination .nevertheless we ca nt explain yet why exactly but we are working in a preprint paper which is focused on the theoretical and numerical explanation of this parameter .chang yang is supported by national natural science foundation of china ( grant no . 11401138 ) and heilongjiang postdoctoral scientific research development fund ( no .lbh - q14080 ) . , un schma non linaire anti - dissipatif pour lquation dadvection linaire . ( french ) [ a nonlinear antidiffusive scheme for the linear advection equation ] , _ c. r. acadparis sr ._ , 328 ( 1999 ) , no . 10 , pp . 939944 . , a gentle introduction to physiologically structured population models . in : structured - population models in marine terrestrial , and freshwater systems_ chapman hall , population and community biology series _ , vol .18 , ( 1997 ) .
|
we present in this paper a very adapted finite volume numerical scheme for transport type - equation . the scheme is an hybrid one combining an anti - dissipative method with down - winding approach for the flux and an high accurate method as the weno5 one . the main goal is to construct a scheme able to capture in exact way the numerical solution of transport type - equation without artifact like numerical diffusion or without stairs " like oscillations and this for any regular or discontinuous initial distribution . this kind of numerical hybrid scheme is very suitable when properties on the long term asymptotic behavior of the solution are of central importance in the modeling what is often the case in context of population dynamics where the final distribution of the considered population and its mass preservation relation are required for prediction . keywords . discontinuity detector ; weno scheme ; anti - diffusive method ; population dynamics .
|
renewable energy sources such as wind and solar power have a high degree of unpredictability and time variation , which makes balancing demand and supply increasingly challenging .one way to address this challenge is to harness the inherent flexibility in demand of many types of loads .loads can supply a range of ancillary services to the grid , such as the balancing reserves required at bonneville power authority ( bpa ) , or the reg - d / a regulation reserves used at pjm .today these services are secured by a balancing authority ( ba ) in each region .these grid services can be obtained without impacting quality of service ( qos ) for consumers , but this is only possible through design .the term _ demand dispatch _ is used in this paper to emphasize the difference between the goals of our own work and traditional demand response .consumers use power for a reason , and expect some guarantees on the qos they receive .the grid operator desires reliable ancillary service , obtained from the inherent flexibility in the consumer s power consumption .these seemingly conflicting goals can be achieved simultaneously , but the solution requires local control : an appliance must monitor its qos and other state variables , it must receive grid - level information ( e.g. , from the ba ) , and based on this information it must adjust its power consumption . with proper design , an aggregate of loads can be viewed at the grid - level as _ virtual energy storage _ ( ves ) .just like a battery , the aggregate provides ancillary service , even though it can not produce energy .shows the wind generation in the bpa region during the first week of 2015 .there is virtually no power generated on new year s day , and generation ramps up to nearly 4gw on the morning of january 5 . in this examplewe show how to supply a demand of exactly 4gw during this time period , using generation from wind and other resources .let denote the additional power required at time , in units of gws .for example , on the first day of this week we have .day - ahead forecast of the low frequency component of generation from wind is highly predictable .we let denote the signal obtained by passing the forecast of through a low pass filter .the signal is obtained by filtering using a high pass filter , and .each of these filters is causal , and their parameters are a design choice .examples are shown in .it is not difficult to ramp hydro - generation up and down to accurately track the power signal .this is an energy product that might be secured in today s day - ahead markets .the other two signals shown in take on positive and negative values . 
each represents a total energy of approximately zero ,hence it would be a mistake to attempt to obtain these services in an energy market .either could be obtained from a large fleet of batteries or flywheels .however , it may be much cheaper to employ flexible loads via demand dispatch .the signal can be obtained by modulating the fans in commercial buildings ( perhaps by less than 10% ) .the signal can be supplied in whole or in part by loads such as water heaters , commercial refrigeration , and water chillers .low frequency variability from solar gives rise to the famous `` duck curve '' anticipated at caiso , which is represented as the hypothetical `` net - load curve '' in .the actual net - load curve is the difference between load and generation from renewables ; the drop from 20 to 10 gw is expected with the introduction of 10 gw of solar in the state of california .this curve highlights the ramping pressure placed on conventional generation resources .as shown in this figure , the volatility and steep ramps associated with california s duck curve can be addressed using a frequency decomposition : the plot shows how the net - load can be expressed as the sum of four signals distinguished by frequency .variability introduced by the low frequency component can be rejected using traditional resources such as thermal generators , along with some flexible loads ( e.g. , from flexible industrial manufacturing ) . the mid - pass signal shown in the figurewould be a challenge to generators for various reasons , but this zero - mean signal , as well as the higher frequency components , can be tracked using a variety of flexible loads .the control architecture described in this paper is not limited to handling disturbances from wind and solar energy . illustrates how the same frequency decomposition can be used to allocate resources following an unexpected contingency , such as a generator outage .0.45 0.45 for loads whose power consumption can not be varied continuously , we have argued in prior work that a distributed randomized control architecture is convenient for design .this architecture includes local control to maintain bounds on the quality of service delivered by the loads , and also to ensure high quality ancillary service to the grid .analysis of the aggregate is based on a mean - field model .we restrict to the setting of the prior work , based on the following distributed control architecture .a family of transition matrices is constructed to define local decision making .each load evolves as a controlled markov chain on a finite state space , with common input .it is assumed that the scalar signal is broadcast to each load .if a load is in state at time , and the value is broadcast , then the load transitions to the state with probability .letting denote the state of the load at time , and assuming loads , the empirical pdf ( probability mass function ) is defined as the average , the mean - field model is the deterministic system defined by the evolution equations , in which is a row vector . 
under general conditions on the model and on it can be shown that is approximated by .in this prior work it is assumed that average power consumption is obtained through measurements or state estimation : assume that is the power consumption when the load is in state , for some function .the average power consumption is defined by , which is approximated using the mean - field model : the mean - field model is a state space model that is linear in the state , and nonlinear in the input .the observation process is also linear as a function of the state .assumptions imposed in the prior work imply that the input is a continuous function of these values . in , the design of the feedback law is based on a linearization of this state space model .one goal of the present paper is to develop design techniques to ensure that the linearized input - output model has desirable properties for control design at the grid level .several new design techniques are introduced in this paper , and the applications go far beyond prior work : _ optimal design_. in the prior work , the family of transition matrices was constructed based on an average - cost optimal control problem ( mdp ) .the cost function in this mdp was parameterized by the scalar . in this prior work ,the optimal control problem was completely unconstrained , in the sense that any choice of was permissible in the optimal control formulation .the optimal control formulation proposed here is far more general : we allow some randomness by design , and some exogenous randomness that is beyond our control .these contributions are summarized in ._ passivity by design_. a discrete - time transfer function is _ positive - real _ if it is stable ( all poles are strictly within the unit disk ) , and the following bound holds : it is _ strictly positive - real _ if the inequality is strict for all .a linear system is passive if it is positive real .let describe a state - space representative of the linearization , with transfer function ^{-1}b ] ( the identity matrix ) , is a matrix in which each row is identical , and equal to , and ^n = p^n - 1\otimes \pi ] .* spd solution : * given , the matrix is obtained , and \util ( x')\ , , \ \\label{e : fishw}\ ] ] under additional assumptions , the algorithm obtained from spd results in a positive real linearization .the proof of the bound is contained in .[ t : spd ] suppose that the markov chain with transition matrix is irreducible , and that ( a model without probabilistic constraints ) .then , the solution to the spd satisfies the following strict positive - real condition : the linearized model at any constant value obeys the bound , where is the variance of under .the irreducibility assumption on does not come for free .consider for example the markov chain on states defined by for , and .this chain is irreducible and aperiodic .the behavior of the adjoint is similar ; in particular , for each .it follows that for , so the irreducibility assumption fails . rather than solve an ode ,it is natural to fix a function , and define for each and , \text{with } \quad\he(x_u ' \mid x ) & \eqdef \sum_{x_n ' } q_0(x , x_n ' ) \prehe(x_u',x_n ' ) \end{aligned}\ ] ] this is a special case of the two - step design described in in which , independent of , and the function is then obtained from . in this case , the transition matrices defined in can be regarded as an _ exponential family_. the exponential family using will be called the _myopic design_. 
other designs can be obtained as linear approximations to the ipd or spd solutions , with . in the linear approximation of the ipd solution ,this is a solution poisson s equation for the nominal model : where .the resulting exponential family is called the ipd design .it is approximately optimal for near zero a proof of can be found in .[ t : optexpmain ] the following approximations hold for the transition matrices obtained from the ipd design : with the optimal transition matrix in , let denote the value of the quantity in brackets in that is obtained using .then , .a similar result holds if is chosen based on spd , with ^{-1}\ ] ] the linearization at will be positive - real under the assumptions of , because continues to hold at , an example in shows that passivity may fail for the linearization at values of far from zero .geometric sampling is specified by a transition matrix and a fixed parameter . at each time , a weighted coin is flipped with probability of heads equal to . if the outcome is a tail , then the state does not change .otherwise , a transition is made from the current state to a new state with probability .the overall transition matrix is expressed as a convex combination , one motivation for sampling in is to reduce the chance of excessive cycling at the loads , while ensuring that the data rate from balancing authority to loads is not limited .it was also found that this architecture justified a smaller state space for the markov model .based on this nominal model , there are two approaches to applying the design techniques introduced in this paper .if is transformed directly , then the resulting family of transition matrix will be of the form , in which is a function of .that is , if at time the state is and the input , then once again a weighted coin is flipped , but with probability of success equal to .conditioned on success , a transition is made to state with probability . in some casesit is convenient to fix the statistics of the sampling process , and transform using any of the design techniques described in the previous subsections .once the family of transition matrices is constructed , we then define each approach is illustrated through examples in .in this section we describe structure for the linearized model in full generality . we consider a general family of transition matrices of the form , maintaining the assumption that is continuously differentiable in , and that is irreducible and aperiodic .representations of the transfer function for the linearization require a bit more notation .we denote , with .the derivative of the transition matrix is also a matrix , denoted a simple representation for this matrix is obtained in , in terms of the function , the invariant pmf for is regarded as the equilibrium state for the mean - field model , with respect to the constant input value .the linearization about this equilibrium is described in .the proof is omitted since it is minor generalization of ( * ? ? ? * prop .[ t : lin ] the linearization of at a particular value is the state space model with transfer function , ^{-1 } b \label{e : tf}\ ] ] in which , for each , and another representation of is obtained based on the product , where denotes the adjoint of .[ t : b ] the derivative of the transition matrix can be expressed in terms of the function : where for . 
in the special casein which is independent of , the entries of the vector can be expressed , \label{e : badjform}\ ] ] for each , where we have used the definition .the derivative is computed using , giving which implies . if depends only on then , \ ] ] we can write to obtain \\ & = \pi_\zeta ( x^i)h_\zeta ( x^i ) - \pi_\zeta ( x^i ) \sum_x p^\adj(x^i , x ) [ p_\zeta h_\zeta \ , ( x ) ] \end{aligned}\ ] ] in the transfer function was considered for a linearized mean - field model in continuous time .a representation of this transfer function used in this prior work admits a counterpart in the discrete - time setting .the infinite series on the right hand side of suggests that we require a probabilistic interpretation of the scalar , where are given in .this is achieved on defining for each ; this is a function on whose mean is zero : .[ t : cakb ] let denote a stationary realization of the markov chain with transition matrix , so that in particular , for each . then , \label{e : cakb}\ ] ] we have by definition , , which can be expressed as the sum , this is equivalent to . with these identities in place we are ready to prove the passivity bound in .[ [ proof - of ] ] proof of + + + + + + + + + in the spd solution without probabilistic constraints , it follows from the design rule and the representation for the vector in that . gives the covariance interpretation , \ ] ]let denote the power spectral density , [ e^{jk\theta } + e^{-jk\theta } ] \ ] ] where ] , with also in units of seconds .the value obtained from is in units of hours on scaling to seconds we obtain , .homogeneous air conditioner parameters mean data from table 4.1 of . [cols="^,<,^ " , ] this model is based on the physics of heating and cooling , but the dynamics are accurately captured by a constant drift model : with drift parameters carefully chosen , the behavior of the two models is barely distinguishable .the deterministic model is the basis of a stochastic model , in which is a zero - mean , i.i.d . sequence . in the experiments that follow the sampling timewas taken to be seconds , and was taken to be gaussian with variance ( the small variance is justified with this fast sampling rate ) . a markov chain model can be constructed with state evolving on , where and ; the interpretation is the same as in the pool filtration model , with the interpretation is the same as in .temperatures are restricted to a lattice to obtain a finite state - space markov chain . to obtain states ,assume that is an even number , and discretize the interval ] .this cdf is meant to model the statistics of the time interval during which the unit is off , for the model with continuous state space .we define for , for , and for all other values , /[1-f^\ominus(x_n - t_\delta)]\ ] ] in the experiments that follow , the general form taken for was chosen in the parameterized family , with .the values , and were used for and in the experiments surveyed here . in addition, geometric sampling was applied : a family of models of the form was constructed , in which was used to define in .the construction of a model of this form requires a different interpretation of the nature component of the state . to obtain dynamics of the form ,let denote the discrete renewal process in which and is i.i.d ., with a geometric marginal : the nature component of the state is constant on each discrete - time interval , with on this interval . 
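a minimal sketch of this kind of tcl model is given below ( python ) . the set - point , dead - band width , drift rates , noise level and lattice size are placeholder values chosen only to make the fragment run ; the paper takes its parameters from the cited table and uses a more careful construction of the ( temperature , mode ) markov chain .

import numpy as np

rng = np.random.default_rng(1)

# assumed placeholder parameters, not the values used in the paper
theta_set, half_band = 22.0, 0.5      # set-point (deg c) and half dead-band width
drift_on, drift_off = -0.004, 0.003   # temperature drift per sample when on / off
sigma_w, n_steps = 0.01, 5000         # noise std per sample, number of samples

theta, mode = theta_set, 0            # temperature and compressor mode (1 = on)
trace = []
for t in range(n_steps):
    theta += (drift_on if mode else drift_off) + sigma_w * rng.standard_normal()
    # local hysteretic control: switch at the edges of the dead-band
    if theta >= theta_set + half_band:
        mode = 1
    elif theta <= theta_set - half_band:
        mode = 0
    trace.append((theta, mode))

temps = np.array([th for th, _ in trace])
modes = np.array([mm for _, mm in trace])

# discretize the temperature to a lattice of k cells so that (cell, mode)
# is a finite markov state, as described in the text
k = 20
edges = np.linspace(theta_set - half_band, theta_set + half_band, k + 1)
cells = np.clip(np.digitize(temps, edges) - 1, 0, k - 1)

print("temperature range visited: %.3f .. %.3f deg c" % (temps.min(), temps.max()))
print("duty cycle (fraction of samples on): %.3f" % modes.mean())
print("first few discrete states (cell, mode):", list(zip(cells[:5].tolist(), modes[:5].tolist())))

a nominal transition matrix for the ( cell , mode ) chain can then be estimated from long simulations of this kind , which is the monte - carlo route described immediately below .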
given a nominal randomized policy for the input process , the nominal transition matrix can be estimated via monte - carlo based on a simulation of , or from measurements of an actual tcl .a markov model with this transition matrix would also require constant on each of the intervals , . in simulationsthis constraint was violated occasionally since when , and when .this leads to modeling error that is small , provided is not too close to unity .shows an example of the evolution of with .the temperature never violates the dead - band constraint because of the constraints imposed on .0.45 0.45 in this example the transition matrix is irreducible , so that the spd solution is computable .because of exogenous randomness , there is no motivation for this approach : guarantees a passive linearization only when .moreover , numerical results using this method were not encouraging : the resulting family of transition matrices is extremely sensitive to .the linearization about for the myopic design was similar to the ipd solution , but as seen in figs .[ f : tcl - optbode ] and [ f : tcl - myopicbode ] , the behaviors quickly diverge for values beyond . in conclusion , although the transfer functions for the linearizations at are nearly identical , in the myopic design the input - output behavior is unpredictable for .the input - output behavior for ipd is much closer to a linear system for a wider range of .this is consistent with results from prior research .this paper has developed new approaches to distributed control for demand dispatch .there is much more work to do on algorithm design , and large - scale testing .p. barooah , a. bui , and s. meyn , `` spectral decomposition of demand - side flexibility for reliable ancillary services in a smart grid , '' in _ proc .48th annual hawaii international conference on system sciences ( hicss ) _ , kauai , hawaii , 2015 , pp . 27002709 .y. chen , a. bui , and s. meyn , `` individual risk in mean - field control models for decentralized control , with application to automated demand response , '' in _ proc . of the 53rd ieee conference on decision and control2014 , pp . 64256432 .h. hao , y. lin , a. kowli , p. barooah , and s. meyn , `` ancillary service to the grid through control of fans in commercial building hvac systems , '' _ ieee trans . on smart grid _ ,vol . 5 , no . 4 , pp .20662074 , july 2014 .y. chen , a. bui , and s. meyn , `` state estimation for the individual and the population in mean field control with application to demand dispatch , '' _ corr and to appear , ieee transactions on auto .control _ , 2015 .[ online ] .available : http://arxiv.org/abs/1504.00088v1 s. meyn , p. barooah , a. bui , and j. ehren , `` ancillary service to the grid from deferrable loads : the case for intelligent pool pumps in florida , '' in _ proceedings of the 52nd ieee conf . on decision and control_ , dec 2013 , pp . 69466953 .b. sanandaji , h. hao , and k. poolla , `` fast regulation service provision via aggregation of thermostatically controlled loads , '' in _47th hawaii international conference on system sciences ( hicss ) _ , jan 2014 , pp .23882397 .e. todorov , `` linearly - solvable markov decision problems , '' in _ advances in neural information processing systems 19 _ , b. schlkopf , j. platt , and t. hoffman , eds.1em plus 0.5em minus 0.4emcambridge , ma : mit press , 2007 , pp . 13691376 .s. p. meyn and r. l. 
tweedie , _ markov chains and stochastic stability _ , 2nd ed . cambridge : cambridge university press , 2009 , published in the cambridge mathematical library . 1993 edition online .
|
the paper concerns design of control systems for _ demand dispatch _ to obtain ancillary services to the power grid by harnessing inherent flexibility in many loads . the role of `` local intelligence '' at the load has been advocated in prior work ; randomized local controllers that manifest this intelligence are convenient for loads with a finite number of states . the present work introduces two new design techniques for these randomized controllers : the _ individual perspective design _ ( ipd ) is based on the solution to a one - dimensional family of markov decision processes , whose objective function is formulated from the point of view of a single load . the family of dynamic programming equation appears complex , but it is shown that it is obtained through the solution of a single ordinary differential equation . the _ system perspective design _ ( spd ) is motivated by a single objective of the grid operator : passivity of any linearization of the aggregate input - output model . a solution is obtained that can again be computed through the solution of a single ordinary differential equation . numerical results complement these theoretical results .
|
tube drift chambers employing round tubes as the cathode electrode are frequently used as an important device for charged - particle tracking at large experiments , such as the monitored drift tubes ( mdt ) in the muon detection system of the atlas experiment at the large hadron collider ( lhc ) to be built at cern .because of their simple structure , tube drift chambers provide us with easiness in construction and calibration .this is a great advantage for constructing a large detector system .along with such an advantage , tube drift chambers have an undesirable nature in applications to very high - rate experiments , such as those at lhc .they require relatively long time intervals to collect all meaningful signals .for example , this time interval ( _ i.e. _ , the maximum drift time ) is expected to be about 500 ns in the case of atlas - mdt .this is appreciably longer than the planned beam - crossing interval of lhc ( 25 ns ) .hence , the event data will be contaminated by garbage signals produced by particles from neighboring beam - crossings .in addition , since the environmental radiation tends to be severe at high - rate experiments , the data may suffer from continuous garbage produced by the radiation .these contaminations may deteriorate the track - reconstruction capability of the detector .the source of signals from drift chambers is ionization electrons produced by charged particles passing through the chamber gas volume . the produced electrons drift towards an anode wire placed at the center to induce electrical signals via an avalanche process around the wire .the first arriving electrons , produced near the closest approach to the anode wire , form the leading edge of the signals . the leading edge , therefore , provides track position information .on the other hand , if the response to single electrons is sufficiently narrow , the trailing edge of the signals is determined by those electrons produced near the tube wall .thus , in the case of round - tube chambers , trailing edges appear at an approximately identical time with respect to the charged - particle passage , irrelevant of the leading - edge time and the incident angle , as schematically shown in fig .[ fig_idea ] .this argument leads to a prospect that , if we can measure the trailing edge with a sufficient time resolution simultaneously with the leading edge , we may be able to distinguish the beam crossing relevant to each signal . in other words , we may be able to select only those signals relevant to an interesting beam crossing before applying reconstruction analyses . 
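the geometric argument sketched in fig . [ fig_idea ] is easy to make quantitative . the fragment below ( python ) assumes a purely radial drift at a constant velocity ; the drift velocity value is a typical number for an argon / ethane mixture and is not a measured value from this work , while the tube radius corresponds to the 15.4 mm inner diameter quoted later in the text .

import numpy as np

r_tube = 15.4 / 2.0      # mm, inner radius of the tube
v_drift = 0.05           # mm/ns, assumed constant and radial

for b in (1.0, 3.0, 5.0, 7.0):           # impact parameters of straight tracks, mm
    # points along the chord at impact parameter b, parametrized by s
    s_max = np.sqrt(r_tube**2 - b**2)
    s = np.linspace(-s_max, s_max, 2001)
    r = np.hypot(b, s)                   # radial distance of each ionization point
    t_lead = r.min() / v_drift           # first electrons: closest approach to the wire
    t_trail = r.max() / v_drift          # last electrons: produced at the tube wall
    print(f"b = {b:4.1f} mm  leading edge ~ {t_lead:6.1f} ns  trailing edge ~ {t_trail:6.1f} ns")

the trailing edge always corresponds to the full tube radius , so it is nearly independent of the impact parameter , while the leading edge tracks the distance of closest approach .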
in order to investigate the feasibility of this idea, we carried out a cosmic - ray test using a small tube drift chamber .the leading and trailing - edge times of the signals were simultaneously measured using a multi - hit tdc module employing the time memory cell ( tmc ) lsi .we discuss the observed properties , by comparing the results with predictions from a monte carlo simulation .the tube chamber used for the test is made of a thin aluminum tube having an inner diameter of 15.4 mm , a wall thickness of 0.1 mm , and a length of 20 cm .a 3 mm - diameter window made on the tube wall near the center allows us to feed x rays into the tube .the window is covered with a thin aluminum foil and sealed with kapton adhesive tape .a gold - plated tungsten wire of 30 m in diameter is strung along the center axis of the tube .a positive high voltage was applied to the wire with the tube wall being grounded .a gas mixture of argon ( 50% ) and ethane ( 50% ) at the atmospheric pressure was made to flow inside the tube .the signals from the tube chamber were amplified and discriminated using circuits made for the central drift chamber ( cdc ) of the venus experiment at kek - tristan , where a preamplifier board was attached to one end of the tube .the amplified signals were transferred to a discriminator board through a 30 m - long shielded twisted - pair cable .we added a pulse shaping circuit to the discriminator board , because no intentional shaping was applied in the original circuit .the timing of the discriminated signals was measured using a 32-ch tmc - vme module installed in a vme crate .this tdc module employs the tmc - teg3 lsi , allowing us to measure both the leading and trailing edges simultaneously with a time resolution of 0.37 ns .the module also has an essentially unlimited multi - hit capability .the tdc module was operated in a common - stop mode .stop signals synchronized with the passage of cosmic rays were formed by a coincidence between signals from two scintillation counters , 10 cm by 25 cm each , vertically sandwitching the tube chamber with a separation of 15 cm .the coverage of this counter telescope was sufficiently larger than the tube chamber , providing a uniform track distribution in sampled data .as a trade - off , tube - chamber signals were observed in only about 20% of the data .the data taking was controlled by a board computer installed in the vme crate .the computer collected the digitized data and stored them in a local disk . after completing the data taking ,the stored data were transferred to a workstation through a network and analyzed there .before starting the test , we naively thought that appropriate pulse shaping would be necessary for trailing - edge measurements with a good time resolution , because of the presence of a long tail in drift chamber signals .the existence of a long tail , which may also be produced by readout circuits , would enhance the time - walk effect due to a large fluctuation in the gas - amplification process . 
in order to eliminate the tail, we added a pulse shaping circuit between the receiver circuit and the comparator on the discriminator board .a diagram of the added circuit is shown in fig .[ fig_shaper ] .the circuit is a double pole - zero filter , capable of converting two poles in the input signal into two new poles .the parameters of the circuit were determined by using signals produced by the 5.9-kev x rays from .the gas volume of the tube chamber was irradiated through the window on the tube wall , and the pulse shape at the discriminator input was investigated using a digital oscilloscope .first of all , the pulse shape was sampled without adding the shaping circuit , and the observed signal tail was approximated by two exponentials .the circuit parameters were calculated so that the two zeros of the circuit should cancel the two time constants ( poles ) of the approximating function .another constraint that we applied was that the amplitude corresponding to one of the newly produced poles , having a longer time constant , should become zero .ideally , a thus - determined circuit should replace a long tail in the input signal with one relatively short exponential tail .however , since the approximation with two exponentials has ambiguity , fine - tuning was necessary in order to achieve satisfactory performance . looking at the resulting pulse shape , we adjusted the values of two resistors ( r1 and r2 ) , with capacitors ( c1 and c2 ) and the other resistor ( rl ) fixed to the calculated values .the optimized parameter values are shown in fig .[ fig_shaper ] .the pulse shapes measured before and after the shaping circuit are shown in fig .[ fig_pulse ] .due to a large fluctuation in the ionization and avalanche processes , discriminated signals for cosmic - ray tracks are sometimes separated into several fragments . in the offline analysis ,successive signals were merged and considered to be one signal if the time interval between them ( the interval between the trailing edge of the preceding signal and the leading edge of the following signal ) was shorter than 40 ns .figure [ fig_scatter ] shows the relation between the trailing - edge time and the leading - edge time of the recorded data .the data were obtained for an anode voltage of 2.0 kv and a discriminator threshold of 10 mv .the average pulse height for the x rays was 750 mv for this anode voltage .since about 200 electrons are expected to be produced by this x ray , the threshold corresponds to about three - times the average pulse height for one ionization electron .we can see that the trailing edge of the signals exhibits a nearly equal time , irrespective of the leading - edge time , as expected. we can also find other interesting and unexpected properties in this result : as the leading - edge time becomes larger , the trailing - edge time resolution becomes better and the average time shifts towards larger values .such properties are expected to emerge as the result of a geometrical focusing effect ; _ i.e. _ , the ionization electron density in the time domain becomes higher as the track distance becomes larger . 
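the pole - zero cancellation used in the shaping circuit described above can be illustrated with a few lines of signal - processing code . in the sketch below ( python , scipy ) the two tail time constants of the unshaped pulse and the two new , faster poles are assumed round numbers , not the component values of the circuit in fig . [ fig_shaper ] , which were tuned on the 5.9 - kev x - ray pulses .

import numpy as np
from scipy import signal

tau1, tau2 = 30e-9, 300e-9        # assumed tail time constants of the input pulse (s)
tau1p, tau2p = 15e-9, 40e-9       # assumed new, faster poles introduced by the filter (s)

t = np.linspace(0, 2e-6, 4001)
v_in = 0.7 * np.exp(-t / tau1) + 0.3 * np.exp(-t / tau2)   # two-exponential input tail

# zeros placed on the input poles, replaced by the two new poles; gain set for unit dc gain
zeros = [-1.0 / tau1, -1.0 / tau2]
poles = [-1.0 / tau1p, -1.0 / tau2p]
gain = (1.0 / tau1p) * (1.0 / tau2p) / ((1.0 / tau1) * (1.0 / tau2))
filt = signal.ZerosPolesGain(zeros, poles, gain)

_, v_out, _ = signal.lsim(filt, v_in, t)

def width_above(v, thr):
    idx = np.where(v >= thr * v.max())[0]
    return t[idx[-1]] - t[idx[0]] if idx.size else 0.0

print("width above 5%% of peak: input %.0f ns, shaped %.0f ns"
      % (1e9 * width_above(v_in, 0.05), 1e9 * width_above(v_out, 0.05)))

with the zeros placed exactly on the input poles the long tail is replaced by the much faster response of the new poles , which is what allows a clean trailing - edge measurement .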
in a real situation with a finite pulse width, this leads to a larger pulse height for signals having larger leading - edge times , resulting in a better time resolution and a longer delay of the trailing edge .however , as is shown later , this effect is not enough to explain the observed _ swing - up _ behavior at large leading - edge times .along with the dominant data points having a nearly equal trailing - edge time , we can see some data points distributed diagonally in the plot .since the pulse - merging process is applied , they are not fragments of wider signals , but are isolated narrow signals .these data points gradually vanish as the threshold voltage is raised .they are apparently synchronized with the scintillator trigger .in addition , the frequency of these signals increases as the leading - edge time becomes larger , this suggests a uniform occurrence of the causal process over the chamber gas volume .these facts indicate that these signals were produced by soft x rays coming in association with triggering cosmic - ray muons .therefore , they must have nothing to do with the chamber performance that we are now interested in .projections of the scatter plot are shown in fig .[ fig_time ] , where the data points corresponding to those narrow signals described above are excluded .a flat distribution of the leading - edge time confirms a uniform distribution of the cosmic - ray tracks .the trailing - edge data are concentrated around about 40 ns after the maximum leading - edge time ( the maximum drift time in the ordinary definition ) . from a straightforward numerical evaluation, we obtained an rms resolution of 12 ns for the trailing - edge measurement .the measurement was repeated by varying the threshold voltage .figure [ fig_resolution ] shows the obtained rms resolution as a function of the threshold .we can observe that the improvement of the resolution by lowering the threshold is not significant .the improvement is limited by the _ swing - up _behavior seen in fig .[ fig_scatter ] .we developed a simple monte carlo simulation , aiming at understanding the observed properties . in the simulation ,a muon track passing through the chamber gas volume leaves ionization electron clusters along its path , according to a poisson distribution with an average frequency of 3.0 clusters / mm .the number of electrons composing each cluster is subject to a poisson distribution with an average of 3.0 .the drift time of each electron is determined by the distance from the anode wire , assuming a constant drift velocity .the diffusion during the drift is not taken into account , since it is expected to be ineffective , compared to the measured time resolution .the signal shape is simulated by convoluting the drift time distribution with a single - electron response , determined from the average pulse shape for x rays . in the convolution ,the pulse height for each electron is varied in order to simulate the gas - gain fluctuation .a gaussian distribution was assumed for the fluctuation in the first version of the simulation .the leading and trailing - edge times are then determined by applying a threshold to the simulated signal shape . a scatter plot obtained from the simulation , which should be compared with fig . [ fig_scatter ] , is shown in fig .[ fig_scattermc ] , where the gain fluctuation was assumed to be 30% in the standard deviation .we can see in this result an improvement of the resolution and a shift of the average at larger leading - edge times. 
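the essential ingredients of the simulation described above fit in a short script . the sketch below ( python ) uses the cluster density , electrons per cluster and 30% gaussian gain fluctuation quoted in the text ; the single - electron response shape , the drift velocity and the discriminator threshold are assumptions , whereas the paper uses the measured average x - ray pulse shape .

import numpy as np

rng = np.random.default_rng(2)

r_tube, v_drift = 7.7, 0.05                  # mm, mm/ns (drift velocity assumed)
clusters_per_mm, e_per_cluster = 3.0, 3.0    # values quoted in the text
gain_sigma = 0.30                            # gaussian gas-gain fluctuation (text)
dt = 1.0                                     # ns time bins
t_axis = np.arange(0.0, 400.0, dt)

# assumed single-electron response: fast rise, exponential tail
resp = (t_axis / 5.0) * np.exp(-t_axis / 5.0)
resp /= resp.max()

def simulate_track(b, threshold=0.5):        # threshold in units of the single-electron peak (assumed)
    s_max = np.sqrt(r_tube**2 - b**2)
    n_cl = rng.poisson(clusters_per_mm * 2 * s_max)
    s = rng.uniform(-s_max, s_max, n_cl)     # cluster positions along the chord
    n_e = rng.poisson(e_per_cluster, n_cl)   # electrons per cluster
    t_drift = np.repeat(np.hypot(b, s), n_e) / v_drift
    gains = np.maximum(rng.normal(1.0, gain_sigma, t_drift.size), 0.0)
    # build the signal: each electron adds one fluctuating copy of the response
    sig = np.zeros_like(t_axis)
    idx = np.clip((t_drift / dt).astype(int), 0, t_axis.size - 1)
    np.add.at(sig, idx, gains)
    sig = np.convolve(sig, resp)[: t_axis.size]
    above = np.where(sig >= threshold)[0]
    if above.size == 0:
        return None
    return t_axis[above[0]], t_axis[above[-1]]   # leading and trailing edges

for b in (1.0, 3.0, 5.0, 7.0):
    edges = simulate_track(b)
    if edges:
        print(f"b={b:3.1f} mm  leading {edges[0]:6.1f} ns  trailing {edges[1]:6.1f} ns")

with these ingredients the trailing edges cluster near the maximum drift time regardless of the impact parameter , reproducing the qualitative behavior discussed above .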
however , these variations are apparently less significant than those observed in the measurement result .namely , the geometrical focusing effect , which is automatically included in the simulation , is not enough to reproduce the observation . aiming at a better reproduction , we applied several modifications to the simulation .the polya distribution was applied to make the gas - gain fluctuation more realistic .the threshold was made to fluctuate according to a gaussian distribution in order to simulate a noise contribution .the diffusion of drift electrons was taken into account .although these modifications could smear the overall time resolution , they could never enhance the _ swing - up _ behavior . as a result of further studies, we found that a small overshoot in the shaped signal may produce an appreciable _ swing - up _ structure in the leading - trailing relation .the signal shape that we originally used in the simulation does not have any overshoot , because no significant overshoot was observed in the x - ray response .however , any signal - shape measurement more or less affects the circuit property .there may be a small , but finite , overshoot in the real situation .figure [ fig_scattermc2 ] shows the result of a modified simulation in which the one - electron response is assumed to have an overshoot amounting to 5% of the peak pulse height .the overshoot is assumed to produce a baseline shift after the signal ; _i.e. _ , the overshoot is assumed to have a very long time constant .the modifications concerning the signal fluctuation mentioned above are all applied in this simulation .the polya distribution of a 100% fluctuation is used for the gain fluctuation .the standard deviation of the threshold fluctuation is equal to the average pulse height for one ionization electron .the _ swing - up _ behavior that we can see in fig .[ fig_scattermc2 ] looks quite similar to that which we observed in the measurement data .projections of the plot are shown in fig .[ fig_timemc ] .the trailing - edge time distribution has an rms resolution of 8.7 ns .this is still significantly smaller than that of the measurement result , suggesting that a further fine structure of the signal tail or certain phenomena not taken into account in the simulation ( _ e.g. _ , the -ray emission from the tube wall ) may be effective .note that non - proportional effects , such as the space - charge effect , must not be significant , because a measurement with the anode hv lowered to 1.9 kv , for which the gas gain is reduced to about one half , gives a comparable result , as shown in fig .[ fig_resolution ] .a cosmic - ray test was carried out to investigate the feasibility of an idea of filtering tube drift - chamber signals at high - rate experiments , based on a trailing - edge time measurement .a small tube chamber filled with a popular chamber gas , argon / ethane ( 50/50 ) , at the atmospheric pressure was used for the test .the leading and trailing - edge times of the signals for cosmic - ray tracks were measured simultaneously , using a tdc module employing the tmc lsi . applying a simple pole - zero filter circuit for signal shaping, we achieved a trailing - edge time resolution of 12 ns in rms with a realistic , or rather moderate , setting of the discriminator threshold .measurements with a resolution of this level will be very useful at future high - rate experiments . 
in the case of lhc , such a measurement will allow bunch - crossing identification with a tolerance of two or three crossings without any significant loss of signals . from a simulation study , we found that a small overshoot in the signal tail can produce a correlation quite similar to that which we observed .if this is truly the reason , the resolution may be further improved with better signal shaping . meanwhile , in the case of long tube chambers , this effect may seriously limit the achievable resolution , since the signal shape varies according to the signal transmission length .further studies are necessary to confirm these arguments .the authors wish to thank yasuo arai and masahiro ikeno for their help in preparing and maintaining the data - acquisition system .nobuhiro sato and takeo konno are acknowledged for their contribution in preparing the test setup .atlas collab ._ , _ the atlas technical proposal for a general - purpose pp experiment at the large hadron collider at cern _ , cern / lhcc/94 - 43 ( december 1994 ) .s. odaka , talk at the third jsd workshop , kek , tsukuba , japan , february 1 - 2 , 1991 .h. shirasu _et al . _ , ieee trans .* 43 * ( 1996 ) 1799 .y. arai and m. ikeno , ieee j. solid - state circ .* 31 * ( 1996 ) 212 .et al . _ ,instr . and meth .* a323 * ( 1992 ) 273 ; j. byrne , proc .a66 * ( 1962 ) 33 .et al . _ ,instr . and meth .* a259 * ( 1987 ) 438 .
|
the trailing edge of tube drift - chamber signals for charged particles is expected to provide information concerning the particle passage time . this information may be useful for separating meaningful signals from overlapping garbage at high - rate experiments , such as the future lhc experiments . we carried out a cosmic - ray test using a small tube chamber in order to investigate the feasibility of this idea . we achieved a trailing - edge time resolution of 12 ns in rms by applying simple pulse shaping to eliminate a signal tail . a comparison with a monte carlo simulation indicates the importance of well - optimized signal shaping to achieve good resolution . the resolution may be further improved with better shaping . , , and
|
during both world wars the baltic sea was a scene of intense naval warfare and became a `` container '' for almost whole nazi chemical munition arsenal . in accordance with the provisions of the potsdam conference ,germany was demilitarized but only a small part of their arsenal was neutralized on land .technologies for safe and effective disposal of chemical weapons were not known shortly after the war .thus , till 1948 about 250000 tons of munition , including up to 65000 tons of chemical agents , were sunk in baltic sea waters by both germans and allies .the main known contaminated areas are little belt , bornholm deep ( east of bornholm ) and the south - western part of the gotland deep .apart from known underwater stockyard there is unknown amount of dangerous war remnants spread over the whole baltic , especially along maritime convoys paths and in vicinity of coasts .it is not clear how dangerous are these underwater arsenals . at the bottom of the sea ( in approx .5 - 7 ) the chemical agents take a form of oily liquids hardly soluble in water .thus , the sunk ammunition does not release hazardous substances .it becomes dangerous however , if the rusted tanks and shells are raised from the bottom of the sea .chemical munitions , containing mostly mustard gas , was fished several times by fishermen on the baltic sea over the last fifty years . moreover , already in 1952 and 1955 the contamination was found at the polish coast causing serious injuries to people .it was estimated that if only of the sunk chemical agents was released into baltic the life in the sea and at its shores would be entirely ruined for the next 100 years .high economic and environmental costs have been preventing so far any activities aiming at extraction of these hazardous substances , but it is clear that we are about to face a very serious problem in the baltic sea .appropriate actions for preventing the ecological catastrophe demand a precise knowledge of location and amount of sunk munitions .+ presently used methods for underwater munition detection is based on sonars which show only a shape of underwater objects , like e.g. sunk ships or depth charges . to estimate the amount of dangerous substances and to determine the exact location of sunk munition it is still necessary that people are diving and searching the bottom of the sea .this operation is always very dangerous for divers since the corrosion state of the shells is usually not known .moreover , these methods are very expensive and slow , thus they can not be used in practice to search big sea areas .the above mentioned disadvantages can be to large extend overcome by using devices based on neutron activation analysis techniques ( naa ) which will be discussed in more details in next sections of this article .most of the commonly used explosives or drugs are organic materials . therefore , they are composed mostly of oxygen , carbon , hydrogen and nitrogen .war gases contain also sulfur , chlorine , phosphorus and fluorine .thus , these substances can be unambiguously identified by the determination of the ratio between number of c , h , n , o , s , p and f atoms in a molecule , which can be done noninvasively applying neutron activation analysis techniques .the suspected item can be irradiated with a flux of neutrons produced using compact generators based e.g. on deuteron - tritium fusion , where deuterons are accelerated to the energy of 0.1 mev and hit a solid target containing tritium . 
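the identification - by - stoichiometry idea can be illustrated with a toy calculation : relative numbers of c , h , o and n atoms inferred from characteristic gamma - line counts are matched against a small table of known compositions . in the python sketch below the per - element sensitivity factors ( cross section times detection efficiency ) and the measured counts are invented placeholders ; only the chemical formulas are real .

import numpy as np

database = {
    "mustard gas (c4h8cl2s)": {"c": 4, "h": 8, "o": 0, "n": 0},
    "tnt (c7h5n3o6)":         {"c": 7, "h": 5, "o": 6, "n": 3},
    "water (h2o)":            {"c": 0, "h": 2, "o": 1, "n": 0},
}

def normalize(v):
    v = np.array(v, dtype=float)
    return v / v.sum()

def identify(line_counts, sensitivity):
    # estimated atom fractions from the counts in each element's characteristic lines
    est = normalize([line_counts[e] / sensitivity[e] for e in "chon"])
    best, best_d = None, np.inf
    for name, comp in database.items():
        ref = normalize([comp[e] for e in "chon"])
        d = np.abs(est - ref).sum()
        if d < best_d:
            best, best_d = name, d
    return best, est

sensitivity = {"c": 1.0, "h": 0.6, "o": 0.8, "n": 0.4}   # assumed placeholders
measured = {"c": 700, "h": 330, "o": 880, "n": 220}      # assumed line counts
name, est = identify(measured, sensitivity)
print("estimated c:h:o:n fractions:", np.round(est, 2), "-> best match:", name)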
as a result of the fusion an alpha particleis created together with the neutron , which is emitted nearly isotropically with a well defined energy equal to about 14.1 mev .such energy is sufficient to excite all nuclei composing organic substances and the resulting quanta from the de - excitation of nuclei are then detected by e.g. a germanium detector providing a very good energy resolution . counting the number of gamma quanta emitted by nuclei from the examined itemprovides information about its stoichiometry .devices using naa to detect explosives on the ground were already designed and are produced , e.g. in usa and poland . in the aquatic environment however we encounter serious problems since neutrons are strongly attenuated by water . moreover , as in the case of ground detectors , an isotropic generation of neutrons induces a large environmental background , in this case from oxygen .this noise can be significantly reduced by the requirement of the coincident detection of the alpha particle , which allows for the neutron tagging .the attenuation of neutrons can be compensated by decreasing the distance between generator and examined item .there are also solutions based on low energy neutrons which are moderated in water before reaching the tested object .the detector is then counting the gamma quanta from thermal neutron capture and secondary neutrons originating from the irradiated object .the identification is done by searching for anomalies in the observed spectra of gamma quanta and neutrons .however , these methods do not allow to detect explosives buried deeply in the bottom of the sea .moreover , the device has to approach the suspected item very close and the strong attenuation of neutrons and gamma quanta significantly increases the exposure time and make the interpretation of results difficult .therefore , we propose to build a detector which uses naa technique and special guides for neutrons and emitted gamma rays .the device allows for detection of dangerous substances hidden deep in the bottom of the sea with significantly reduced background and provides determination of the density distribution of the dangerous substance in the tested object .scheme of the proposed device is presented in fig [ fig1 ] .quanta , while registration of an particle in anti - coincidence with gamma quanta detector allows to reduce background .b ) demonstration of how changing of the relative position between neutron and quanta guides provides a tomographic image of the interrogated object.,width=340 ] neutron generator collides deuterium ions with a tritium target producing a neutron and an particle . because of the much higher energy released in this reaction compared to the energy of deuterium , both particles are produced almost isotropically and move back - to - back .the particle is detected by a system of position sensitive detectors , e.g. silicon pads or scintillation hodoscope , disposed on the walls of the generator .neutrons emitted towards the subject of interrogation fly inside a guide built out of a stainless steal pipe filled with low pressure air or some other gas having low cross section for neutron interaction .neutrons after leaving a guide may be scattered inelastically on atomic nuclei in the tested substance .the nuclei deexcite and emit gamma quanta with energy specific to the element .part of the emitted quanta fly towards a dedicated detector within an analogous guide which decrease their absorption and scattering with respect to gamma quanta flying in water. 
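a one - line attenuation estimate shows why the gas - filled guides matter . in the sketch below the 9.5 cm mean free path of 14.1 mev neutrons in water is the value quoted later in the text , while the mean free path in air is a rough assumed number of the right order of magnitude ( air is roughly a thousand times less dense than water ) .

import numpy as np

lam_water = 9.5      # cm, mean free path of 14.1 mev neutrons in water (quoted later in the text)
lam_air = 8000.0     # cm, assumed rough value for air at atmospheric pressure

for stand_off in (20.0, 50.0, 100.0):            # generator-to-object distance, cm
    surv_water = np.exp(-stand_off / lam_water)  # fraction of tagged neutrons surviving in water
    surv_guide = np.exp(-stand_off / lam_air)    # same distance inside the gas-filled guide
    print(f"{stand_off:5.0f} cm of water: {surv_water:.2e} of the neutrons survive; "
          f"inside the guide: {surv_guide:.3f}")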
the ray detector could be again a position sensitive detector measuring the energy , time and impact point of impinging particles .if the diameter - to - length ratio of both guides is small the depth at which quanta excite nuclei can be determined by measuring the time elapsed from the particle registration until the time of the quantum registration .generated neutrons travel with known velocity the distance from the tritium target to the point of interaction .similarly the gamma emitted from the tested object fly over a distance with a speed of light .thus , can be expressed as : if we know the positions of the generator target and ray detector and lengths of both guides the distance covered by neutrons from the end of the guide to the point of interaction can be calculated as : where and are the length of guides for neutrons and quanta , respectively .additional information on the depth of interaction is given by changing the relative position of the guides and the angle between them . as it is demonstrated in fig .[ fig1 ] a ) and b ) changing the distance between the guides provides registration of gamma quanta emitted from different parts of the object .this allows one to determine the density distribution of elements building the suspected object . in order to remove background resulting from interaction of neutrons emitted in other directions only signals registered by the quanta detector in coincidence with signals from particle detectors mounted in - line with the neutron guideare saved , while the other coincidences are discarded .taking into account cross section for neutron inelastic scattering with different nuclides and the detection efficiency of quanta we can reconstruct the number of atoms of each element building the object and compare them with the known stoichiometry of hazardous substances stored in the database .the whole detection unit can be mounted on a small submarine steered remotely from a ship .in order to optimize the dimensions and relative positions of detectors and guides we have developed dedicated open source software package written in the c++ programming language and based on the monte carlo simulation methods .our goal is to create a fast and user - friendly tool using novel methods of geometry definition and particle tracking based on hypergraphs .the simulation framework is written using the c++11 standard and open mpi library supporting parallel computing and it is destined for unix - like operating systems . the application needs to be configured with input file defining scene description ( location and shape of all objects included in simulation , as well as substances building them ) , and neutron source parameters ( location , number of generated neutrons and their energies ) .the parameters of neutron interaction with selected nuclei , e.g. total cross sections , neutron and quanta angular distributions and multiplicities , are parametrized as a function of neutron energy using data from the endf database .similarly , gamma quanta energies were taken from the evaluated nuclear structure data files ( ensdf ) .this information is stored in an sqlite database and recalled during the simulation only for elements specified by user .so far we have implemented only main processes induced by 14.1 mev neutrons which we are interested in , i.e. elastic and inelastic scattering and radiative capture , but in the near future the simulations will be supplemented with other processes , e.g. fission . 
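the depth determination from the alpha - gamma time difference can be sketched as follows ( python ) . the geometry is deliberately simplified : both guides are taken to be side by side and pointing straight at the object , so the gamma travels roughly the penetration depth plus its guide length back to the detector . the guide lengths are assumed values , and only the neutron velocity follows from the 14.1 mev kinematics .

import numpy as np

c = 29.9792458            # cm / ns
m_n = 939.565             # neutron mass, mev
e_n = 14.1                # neutron kinetic energy, mev
beta = np.sqrt(1.0 - (m_n / (m_n + e_n)) ** 2)
v_n = beta * c            # about 5.1 cm/ns for a 14.1 mev neutron

l_n, l_g = 100.0, 100.0   # neutron and gamma guide lengths, cm (assumed)

def depth_from_time(t_meas):
    # t_meas = (l_n + x)/v_n + (x + l_g)/c in this simplified geometry, solved for x
    return (t_meas - l_n / v_n - l_g / c) / (1.0 / v_n + 1.0 / c)

for x_true in (0.0, 10.0, 25.0, 50.0):
    t = (l_n + x_true) / v_n + (x_true + l_g) / c
    print(f"true depth {x_true:5.1f} cm -> alpha-gamma time {t:6.1f} ns "
          f"-> reconstructed {depth_from_time(t):5.1f} cm")

in the real geometry of fig [ fig1 ] the gamma path also depends on the angle between the guides , but the principle of solving one relation between measured time and interaction depth is the same .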
at present neglected processes are taken into account effectively as one process after which neutron is no longer tracked .cross section for this process is calculated in a way that the sum of all processes induced by neutron is equal to the total cross section for a given nuclei with which the reaction occurred .neutrons are tracked according to the algorithm presented in fig .[ fig3 ] until they reach the scene boundary or their energy goes below the lowest included in the database 10 kev . ] , i.e. ev . the reaction place is randomly generated with exponential probability distribution taking into account total cross section values from the database and concentrations of all the isotopes building the substance .user can choose to calculate concentrations for a single chemical compound or a mixture of substances for a specified density .selection of an isotope with which the interaction took place is also random and it is done again based on neutron total cross sections and known stoichiometry of the material in which the reaction is simulated . nextthe reaction type is drawn according to cross section values for each process from the list specified by user in the input file .the direction of neutron after the reaction is generated using angular distributions parametrized with legendre polynomials or using uniform distribution if there is no relevant data in the database .it is first determined in the center of mass system and then the four - momentum is transformed back to the laboratory coordinate system . in case of inelastic scattering neutron energy in the laboratory is calculated taking into account the nuclei excitation energy . in the current version of simulationswe take into account up to tenth excited nuclei level .directions of gamma quanta coming from the nuclei deexcitation are currently generated uniformly in the laboratory frame .+ as a starting point for design of the device for underwater threats detection we have defined a simple setup with point - like source generating 14.1 mev neutrons uniformly in space .the scheme of the simulated setup with superimposed points of of neutrons interaction are shown in figs .[ fig2 ] and [ fig2b ] . z 5 and y 0 , respectively . ,title="fig:",width=207 ] z 5 and y 0 , respectively ., title="fig:",width=207 ] z 5 and y 0 , respectively ., title="fig:",width=207 ] z 5 and y 0 , respectively ., title="fig:",width=207 ] z 20 and y 0 , respectively ., title="fig:",width=207 ] z 20 and y 0 , respectively ., title="fig:",width=207 ] z 20 and y 0 , respectively ., title="fig:",width=207 ] z 20 and y 0 , respectively . ,title="fig:",width=207 ] 0 ) ., width=207 ] we have considered two neutron guides filled with air under normal conditions : cuboid with dimensions 40 x 40 x 100 ( figs .[ fig2]a and [ fig2b]a ) and truncated pyramid with height equal to 100 cm and bases with dimensions of 5 cm and 20 cm ( figs .[ fig2]b and [ fig2b]b ) . the interrogated object with dimensions 194 x 255 x50 lies on the bottom of a sea and contains mustard gas .as one can see in fig .[ fig5 ] the distribution of the path length of neutrons in water is characterized by a mean free path of about 9.5 cm . at the same time the flux flying through the guide filled with air reaches the container with mustard gas and excites its nuclei .comparing fig .[ fig2]a and fig . 
[ fig2]bone can see also that both shapes of the neutron guide give effectively the same flux irradiating the gas container , but for the trapezoidal shape a better spatial separation between regions where neutrons interact in water and in the interrogated object is clearly visible .the optimization of shapes and configuration of the neutron and gamma quanta guides will be a subject of future investigations .the energy spectra of gamma quanta from deexcitation of nuclei in water and interrogated object are presented in fig .[ fig4 ] . in fig .[ fig4]a one can see strong lines from the excitation of oxygen with energies about 3 mev , 6 mev and 7 mev , and characteristic line of neutron capture by hydrogen ( about 2.2 mev ) .the spectrum for the container with mustard gas in fig .[ fig4]b contains a big peak for gamma quanta emmited by carbon ( 4.43 mev ) and a structure at low energies and a peak at 6 mev comming from chlorine .hydrogen and sulfur compose one line at 2.2 mev , which shows that identification of this element will be difficult .methods of chemical thread detection based on neutron activation have a huge potential and may open a new frontier in homeland security . in the aquatic environment application of this method encounters serious problems since neutrons are strongly attenuated . in order to suppress this attenuation andto decrease background from gamma radiation induced in the water we propose to use guides for neutrons and gamma quanta which speeds up and simplifies identification .moreover , it may provide a determination of the density distribution of a dangerous substance .for designing of a device exploiting this idea we have been developing a fast and user - friendly simulation package using novel methods of geometry definition and particle tracking based on hypergraphs .although we are in a very early stage of the development the first results indicate that indeed the guides will increase the performance of underwater threats detection with fast neutrons .this work was supported by the polish ministry of science and higher education through grant no .7150/e-338/m/2014 for the support of young researchers and phd students of the department of physics , astronomy and applied computer science of the jagiellonian university .t. kasperek , _ czas morza _ * 1 * , 15 ( 2001 ) .p. moskal , annales umcs , physica * 66 * , 71 ( 2012 ) .b. c. maglich , aip conf .proc . * 796 * , 431 ( 2005 ) . .kamierczak et al . , acta physica polonica a , this issue .m. silarski , acta phys .supp . * 6 * , 1061 ( 2013 ) . c. eleon , b. perot , c. carasco , d. sudac , j. obhodas , v. valkovic , nucl .instrum . methods .a * 629 * , 220 ( 2011 ) .d. lambertus , t. schneider - pungs , k. buckup , patent application wo2012089584 a1 .m. silarski , p. moskal , patent application pl409388 .e. grabska , a. achwa , g. lusarczyk , adv .26 * , 681 ( 2012 ) .j. q. michael , _ parallel programming in c with mpi and openmp _ , mcgraw - hill inc . , 2004 ,isbn 0 - 07 - 058201 - 7 .chadwick et al .data sheets * 112*(12 ) , 2887 ( 2011 ) . m. r. bhat , nuclear data for science and technology ( revised as of april 2014 ) edited by s. m. qaim ( springer - verlag , berlin , germany , 1992 )
|
in this article we describe a novel method for the detection of explosives and other hazardous substances in the marine environment using neutron activation . unlike the other considered methods based on this technique we propose to use guides for neutron and gamma quanta which speeds up and simplifies identification . moreover , it may provide a determination of the density distribution of a dangerous substance . first preliminary results of monte carlo simulations dedicated for design of a device exploiting this method are also presented . pacs : 28.41.ak , 89.20.dd , 28.20.cz , 28.20.fc
|
statistical properties of time intervals between successive earthquakes , henceforward the interoccurrence times and the recurrence times , have been frequently studied in order to predict when the next big earthquake will happen .previous papers have been focused on the determination of the probability distribution and the presentation of the scaling law , as shown in the works of .for instance , the weibull distribution , the exponential distribution , the brownian passage time ( bpt ) distribution , the generalized gamma distribution , and the log normal distribution are candidates for the distribution function of interoccurrence times and recurrence times .also , in the stationary regime , a unified scaling law was proposed by corral and then obtained by analyzing the california aftershock data .meanwhile , the time interval statistics have been studied by numerical simulations of earthquake models , because real earthquake data are limited .for example , both the conceptual spring - block models and the sophisticated virtual california model show the weibull distribution of the recurrence times .one of the authors ( th ) also reported that the survivor function of interoccurrence times in the 2d spring - block model can be described by the zipf - mandelbrot type power law , which has been observed by abe and suzuki .very recently , a statistical feature of the interoccurrence times , the weibull - log weibull transition , was proposed by analyzing the japan meteorological agency ( jma ) catalog .we found that the probability distribution of interoccurrence times can be very well fitted by the superposition of the log - weibull distribution and the weibull distribution .in particular , the distribution of large earthquakes obeys the weibull distribution with exponent less than unity indicating that the process of large earthquakes is long - term correlated .our earlier results emphasize that the interoccurrence time statistics basically contain both weibull and log - weibull statistics , and as the threshold of magnitude is increased , the predominant distribution could change from the log - weibull distribution to the weibull distribution .those statistical properties , including the weibull - log weibull transition , can be also found in synthetic catalogs produced by the 2d spring - block model .however , the applicability to other tectonic regions of the weibull - log weibull transition remains unsolved . in this study , we further investigate the interoccurrence time statistics by analyzing the southern california and taiwan earthquake catalogs .together with previous results from the jma and synthetic catalogs , we show the universal weibull - log weibull transition obtained in all of the catalogs .we also demonstrate that a crossover magnitude , , between the superposition regime and the pure weibull regime is proportional to the plate velocity , and at the end of this paper we elucidate its implication in the geophysical sense .[ cols="^,^,^,^,^,^",options="header " , ] we have studied the interoccurrence time statistics of natural and synthetic earthquakes by analyzing the jma , scedc , tcwb , and synthetic catalogs and found the universal weibull - log weibull transition of the interoccurrence distribution .we emphasize that interoccurrence time statistics contain both weibull statistics and log - weibull statistics . 
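as a minimal illustration of the pure weibull regime referred to above , the following python fragment draws synthetic interoccurrence times from a weibull distribution with exponent below one and recovers the exponent by maximum likelihood ; the synthetic parameters are arbitrary and are not fitted values from the jma , scedc or tcwb catalogs , and a full analysis would add the log - weibull component of the superposition .

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_shape, true_scale = 0.85, 12.0          # assumed exponent alpha and scale (hours)
times = true_scale * rng.weibull(true_shape, size=20000)

shape, loc, scale = stats.weibull_min.fit(times, floc=0)
print("fitted weibull exponent alpha = %.3f (generated with %.2f)" % (shape, true_shape))
print("fitted scale beta = %.2f (generated with %.2f)" % (scale, true_scale))
if shape < 1:
    print("alpha < 1, the regime interpreted in the text as long-term correlated")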
and ,in this paper , we demonstrate the transition does occur for different tectonic settings .we also observe the area - independent scaling relation , namely .still the origin of the log - weibull distribution and the weibull - log weibull transition remains open .our present work represents the first step to fully understand the interoccurrence time statistics and the weibull - log - weibull transition for real earthquakes . although the scaled crossover magnitudes is area - independent , the crossover magnitude from the superposition regime to the pure weibull regime depends on the tectonic region . the geophysical implication for the area - dependent could be exposed when comparing the plate velocity with _averaged _ . as clearly seen from table [ table4 ], is on the average proportional to the plate velocity .that means the magnitude of the largest earthquake , , for a tectonic region is more or less proportional to the plate velocity since .ruff and kanamori showed a relation stating that the magnitude of characteristic earthquake which occurs in subduction - zone is proportional directly to the convergence rate , which is thus consistent with our results .this study therefore suggests a possible physical interpretation for earlier observation about the velocity - dependence of the characteristic earthquake size .we would like to thank the jma , scedc , and tcwb for allowing us to use the earthquake data .the effort of the taiwan central weather bureau to maintain the cwb seismic network is highly appreciated .th is supported by the japan society for the promotion of science ( jsps ) and the earthquake research institute cooperative research program at the university of tokyo .ccc is also grateful for research supports from the national science council ( roc ) and the department of earth sciences at national central university ( roc ) .00 abaimov , s. g. , turcotte , d. l. , shcherbakov , r , rundle , j. b. , yakovlev , g , goltz , c , and newman , w. i ( 2008 ) , earthquakes : recurrence and interoccurrence times , _ pure . applied .geophys . , 165 _ , 777 - 795 .enescu , b. , struzik , z. , and kiyoto , k ( 2008 ) , on the recurrence time of earthquakes : insight from vrancea ( romania ) intermediate - depth events _ geophys .j. int , 172 _ , 395 - 404 .matthews , m. v. , w. l. ellsworth , and a. p. reasenberg ( 2002 ) , a brownian model . for recurrent earthquakes ,, 92 _ , 2233 - 2250 .bak , p , christensen , k , danon , l , and scanlon , t ( 2002 ) , unified scaling law for earthquakes , _ phys ., 88 _ , 178501 .corral , a ( 2004 ) , long - term clustering , scaling , and universality in the temporal occurrence of earthquakes , _ phys .lett . , 92 _ , 108501. shcherbakov , r , yakovlev , g , turcotte , d. l , and rundle , j. b ( 2005 ) , model for the distribution of aftershock interoccurrence times , _ phys ., 95 _ , 218501 .rundle , j. b , turcotte , d. l. , klein , w ( eds ) ( 2000 ) , _ geocomplexity and the physics of earthquakes _ , agu , washington d. c. t. hasumi , t. akimoto , and y. aizawa , arxiv.0808.0616 .abaimov , s. g , turcotte , d. l. , shcherbakov , r. , and rundle , j. b. ( 2007 ) , recurrence and interoccurrence behavior of self - organized complex phenomena , _ nonlin .processes geophys . , 14 _ , 455 - 464 .yakovlev , g , turcotte , d. l. , rundle , j. b. , and rundle , p. b. ( 2006 ) , simulation - based distributions of earthquake recurrence times on the san andreas fault system , _ bull .soc . am . 
, 96(6 ) _ , 1995 - 2007 .hasumi , t ( 2007 ) , interoccurrence time statistics in the two - dimensional burridge - knopoff earthquake model , _ phys .e , 76 _ , 026117 .abe , s and suzuki , n ( 2005 ) , scale - free statistics of time interval between successive earthquakes , _ physica a , 350 _ 588 - 596 . t. hasumi , t. akimoto , and y. aizawa , arxiv.0807.0485 .japan meteorological agency earthquake catalog : http://wwweic.eri.u-tokyo.ac.jp/db/jma1 .carlson , j. m. , langer , j. s. , shaw , b. e. , and tang , c ( 1991 ) intrinsic properties of a burridge - knopoff model of an earthquake , _ phys .a , 44 _ , 884 - 897 .gutenberg , b. and richter , c. f. ( 1954 ) _ seismicity of earth and associated phenomena _, 310pp . , princeton univ . press ,princeton , n. j. southern california earthquake data center : http://www.data.scec.org/. taiwan central weather bureau : http://www.cwb.gov.tw/v5e/ ruff , l. , kanamori , h ( 1980 ) , seismicity and the subduction process ._ phys . earth plant .inter . , 23 _ 240 - 252 .seno , t and alice , e. g ( 1993 ) , a model for the motion of the philippine sea plate consistent with nuvel-1 and geological data , j. geophys .98 , 17941 - 17948 .fowler , c. m. r. ( 1990 ) , _ the solid earth : an introduction to global geophysics _ , cambridge university press , new york .
|
we have studied interoccurrence time distributions by analyzing the synthetic and three natural catalogs of the japan meteorological agency ( jma ) , the southern california earthquake data center ( scedc ) , and taiwan central weather bureau ( tcwb ) and revealed the universal feature of the interoccurrence time statistics , weibull - log weibull transition . this transition reinforces the view that the interoccurrence time statistics possess weibull statistics and log - weibull statistics . here in this paper , the crossover magnitude from the superposition regime to the weibull regime is proportional to the plate velocity . in addition , we have found the region - independent relation , .
|
in this paper , we give some quantum circuits for calculating the probability of a graph given data . together with a transition probability matrix for each node of the graph , constitutes a classical bayesian network , or cb net for short .bayesian methods for calculating have been given before ( the so called structural modular and ordered modular models ) , but these earlier methods were designed to work on a classical computer .the goal of this paper is to quantum computerize " those earlier methods . often in the literature , the word model " is used synonymously with cb net " and the word structure " is used synonymously with the bare graph " , which is the cb net without the associated transition probabilities .the bayesian methods for calculating that we will discuss in this paper assume a meta " cb net to predict for a cb net with graph .the meta cb nets usually assumed have a modular " pattern . two types of modular meta cb nets have been studied in the literature .we will call them in this paper unordered modular and ordered modular models although unordered modular models are more commonly called structural modular models .calculations with unordered modular models require that sums over graphs be performed .calculations with ordered modular models require that , besides sums , sums over orders " be performed . in some methods these two types of sums are performed deterministically ; in others , they are both performed by doing mcmc sampling of a probability distribution .some hybrid methods perform some of those sums deterministically and others by sampling .one of the first papers to propose unordered modular models appears to be ref. by cooper and herskovits .their paper proposed performing the by sampling .one of the first papers to propose ordered modular models appears to be ref. by friedman and koller .their paper proposed performing both and by sampling . later on , refs. by koivisto and soodproposed a way of doing deterministically using a technique they call fast mobius transform , and performing also deterministically by using a technique they call dp ( dynamic programming ) .since the initial work of koivisto and sood , several workers ( see , for example , refs. ) have proposed hybrid methods that use both sampling and the deterministic methods of koivisto and sood .so how can one quantum computerize to some extent the earlier classical computer methods for calculating ?one partial way is to replace sampling with classical computers by sampling with quantum computers ( qcs ) .an algorithm for sampling cb nets on a qc has been proposed by tucci in ref. .a second possibility is to replace the deterministic summing of or by quantum summing of the style discussed in refs. , wherein one uses a grover - like algorithm and the technique of targeting two hypotheses .this second possibility is what will be discussed in this paper , for both types of modular models .finally , let us mention that some earlier papers ( see , for example , refs. ) have proposed using a quantum computer to do ai related calculations reminiscent of the ones being tackled in this paper .however , the methods proposed in those papers differ greatly from the one in this paper . those papers either do nt use grover s algorithm , or if they do , they do nt use our techniques of targeting two hypotheses and blind targeting .this paper assumes that the reader has already read most of refs. and . 
by tucci .reading those previous 2 papers is essential to understanding this one because this paper applies techniques described in those 2 previous papers .[ sec - intro ]most of the notation that will be used in this paper has already been explained in previous papers by tucci .see , in particular , sec.2 ( entitled notation and preliminaries " ) of refs. and . in this section, we will discuss some notation and definitions that will be used in this paper but which were not discussed in those two earlier papers .we will underline random variables .for example , we will say that the random variable has probability distribution and takes on values in the set .we will sometimes also use . throughout this paper , the symbol , if used as a scalar , will always denote the number of nodes of a graph .however , will sometimes stand for the operator that measures the number of particles , either 0 or 1 , in a single qubit state .it will usually be clear from context whether refers to the number of nodes or the number operator . in cases where we are using both meanings at the same time, we will indicate the number operator by or or and the number of nodes simply by . as usual , an ordered set or n - tuple ( resp ., unordered set ) will be indicated by enclosing its elements with parentheses ( resp . , braces )we will use two dots between two integers to denote all intervening integers .for example , , , .we will use a backslash to denote the exclusion of the elements following the backslash in an ordered or unordered set .for example , , , and .we will use the symbols inside ordered or unordered sets to denote various bounded sequences of integers .for example , , , , , , etc . on occasion , we will use stirling s approximation : n ! ( 2 n)^ ( ) ^n . note that .in this section , we will review some previous theory by other workers ( references already cited in sec.[sec - intro ] ) .this previous theory is the foundation of some algorithms for using _ classical computers _ to discover the structure of cb nets from data .the theory defines two types of modular models " , either unordered or ordered . the main goal of the theory of modular models is to give a meta " cb net that helps us to discover the graph of a specific cb net . before embarking on a detailed discussion of modular models , it is convenient to discuss various sets of graphs .the structure of an -node bi - directed " graph is fully specified by giving , for each node of graph , the set of parents of node .hence , we will make the following identification : g=(pa_n-1, ,pa_1,pa_0 ) _ n , where _ n = _ n=(2^)^n .note that .. then we will write if for all .as is customary in the literature , we will abbreviate the phrase directed acyclic graph " by dag .define _ n & = & \{g_n : g } + & = & 2^ note that .a special element of is the fully connected graph with nodes , defined by fcg_n & = & ( , , , , ) + & = & ( , , , , ) .note that .as an example , consider for .one has . to list the elements of ,one begins by noticing that any must have r pa_0 + pa_1 + pa_2 .thus , we get the following list of elements of : c =1pc & 1 & + 2 & & 0 + g_1= ( , , ) & c =1pc & 1 & + 2 & & 0 + g_2= ( , , ) & c =1pc & 1 & + 2 & & 0 + g_3= ( , , ) & c =1pc & 1 & + 2 & & 0 + g_4= ( , , ) & c =1pc & 1 & + 2 & & 0 + g_5= ( , , ) & c =1pc & 1 & + 2 & & 0 + g_6= ( , , ) & c =1pc & 1 & + 2 & & 0 + g_7= ( , , ) & c =1pc & 1 & + 2 & & 0 + note that .most of the previous literature speaks about an order or , more precisely , a linear order among the graphs of . 
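as a concrete illustration of the identification , made earlier in this section , of a graph with its tuple of parent sets , the following sketch ( our own illustration in python , not taken from the paper ) enumerates every assignment of parent sets for a small number of nodes and keeps only the acyclic ones ; for n = 3 it recovers the 25 labelled dags .

from itertools import combinations, product

def parent_candidates(n, j):
    """All subsets of the other n-1 nodes, as candidate parent sets of node j."""
    others = [i for i in range(n) if i != j]
    return [frozenset(c) for r in range(n) for c in combinations(others, r)]

def is_acyclic(parent_sets):
    """Topological peel: repeatedly remove nodes whose remaining parents are gone."""
    remaining = set(range(len(parent_sets)))
    while remaining:
        roots = [j for j in remaining if not (parent_sets[j] & remaining)]
        if not roots:            # every remaining node still has a parent -> cycle
            return False
        remaining -= set(roots)
    return True

def all_dags(n):
    """Every n-node graph g = (pa_0, ..., pa_{n-1}) that is a dag."""
    candidates = [parent_candidates(n, j) for j in range(n)]
    return [g for g in product(*candidates) if is_acyclic(g)]

print(len(all_dags(3)))   # 25 labelled dags on 3 nodes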
in this paper , instead of using the language of linear orderings , we chose to speak in the totally equivalent language of permutations . let be the unique fully connected graph with nodes , where is the root node , is the child of , is the child of and , and so on . given any , its nodes can be ordered topologically .this is tantamount to finding a permutation mapping the nodes of to the nodes of .let .see fig.[fig - fcg - g - map ] for an example with . in that figure= ( cccc _ 0 & _ 1 & _ 2 & _ 3 + _ 2 & _ 1 & _ 3 & _ 0 ) so , etc .note that in that figure , the parent sets of each node of are related to those of as follows : l|l pa(3,g)= & pa(3^ , fcg_4)= + pa(2,g)=\{1,3 } & pa(2^ , fcg_4)=\{0,1,2 } \{3,1,0 } + pa(1,g)=\{3 } & pa(1^ , fcg_4)=\{0 } \{3 } + pa(0,g)=\{1 } & pa(0^ , fcg_4)=\{0,1 } \{3,1 } .[ eq - pa - sets - eg ] eq.([eq - pa - sets - eg ] ) implies that pa_j = pa(j , g ) ^= ^. for each , define the graph by fcg_n^= ( pa_j)_j pa_j = ^ henceforth , we will say that a graph is consistent with permutation if can be obtained by erasing some arrows from .equivalently consistent with if .define ( _ n)_&= & \ { g_n : gfcg_n^ } + & = & 2^^ 2^^ 2^^ 2^^ .note .another useful set to consider is ( sym_n)_g= \ { sym_n : gfcg_n^ } .note that whereas , is much harder to calculate for large . as an example , let us calculate for and all .one finds _ 0= ( ccc 0&1&2 + 0&1&2 ) & ( ccc 0&1&2 + 0&1&2 ) & ( , , ) & l ( ^,^,^ ) + = ( , , ) & c =.5pc=.5pc & 1 & + 2 & & 0 + _ 1= ( ccc 0&1&2 + 0&2&1 ) & ( ccc 0&1&2 + 0&2&1 ) & ( , , ) & l ( ^,^,^ ) + = ( , , ) & c =.5pc=.5pc & 1 & + 2 & & 0 + _ 2= ( ccc 0&1&2 + 1&0&2 ) & ( ccc 0&1&2 + 1&0&2 ) & ( , , ) &l ( ^,^,^ ) + = ( , , ) & c =.5pc=.5pc & 1 & + 2 & & 0 + _ 3= ( ccc 0&1&2 + 2&0&1 ) & ( ccc 0&1&2 + 1&2&0 ) & ( , , ) & l ( ^,^,^ ) + = ( , , ) & c =.5pc=.5pc & 1 & + 2 & & 0 + _ 4= ( ccc 0&1&2 + 1&2&0 ) & ( ccc 0&1&2 + 2&0&1 ) & ( , , ) & l ( ^,^,^ ) + = ( , , ) & c =.5pc=.5pc & 1 & + 2 & & 0 + _ 5= ( ccc 0&1&2 + 2&1&0 ) & ( ccc 0&1&2 + 2&1&0 ) & ( , , ) & l ( ^,^,^ ) + = ( , , ) & c =.5pc=.5pc & 1 & + 2 & & 0 + .[ eq - fcg-3-sig ] note that for each , are all graphs such that where is given by last column of eq.([eq - fcg-3-sig ] ) .henceforth , whenever we write a sum over without specifying the range of the sum , we will mean over the range .likewise , a sum over permutations without specifying the range should be interpreted as being over the range .thus , _ g_= _ g_n_sym_n .any set will be called a * feature set*. given any probability distribution for , define by p()=_gp(g)= _g_n i_(g ) p(g ) , where is an indicator function .a set is said to be a * modular feature set * if it equals a cartesian product =_ n-1 _1_0 _ n .we will sometimes denote such a set by .note that for a modular feature set , 1_(g)= _ j 1__j(pa_j ) . as an example , for any two nodes , the feature set for the edge is , where l _ j_2= + jj_2 , _ j= . in this section , we will discuss the meta cb net which defines unordered modular models .the meta cb net for unordered modular models is illustrated by fig.[fig - unord - modular ] for the case of 3 nodes , .we will assume that the random variable takes on values : g=(pa_j)_j_n .let index label measurements and index label nodes .let \in s_{\rvx_j} ] , \}_{\forall j , m}$ ] .we will assume that the random variable takes on values d = x^ns__0^ms__1^m s__n-1^m . 
is the data from which we intend to infer the structure of a cb net .we will assume that is proportional to ( in other words , that its support is inside ) so that _ pa_jp(pa_j)=1 .let p(g ) = _ j \ { ( pa_j)p(pa_j ) } .[ eq - pg - prod ] note that this implies that is proportional to . in light of our assumption about the support of ,there is no need to write down the in eq.([eq - pg - prod ] ) , but we will write it as a reminder . let p(d|g)= _ j p(x_j|pa_j ) where p(x_j|pa_j)= _tm_j p(x_j|pa_j , tm_j)p(tm_j|pa_j ) .[ eq - xj - paj ] let .the role of the variable is to parameterize the transition matrix for node .thus , summing over is equivalent to summing over all possible transition matrices for node .the can be modelled by a reasonable probability distribution .for example , under some reasonable assumptions , cooper and herskovits find in ref. that p(x_j|pa_j)= _x_js__j \{n(j , x_j , pa_j ) ! } , where n(j , x_j , pa_j)= _ m=0^m-1 ( x_j[m]=x_j)(pa_j[m]=pa_j ) , and .this is a special case of the dirichlet probability distribution .now that our meta cb net for unordered modular models is fully defined , we can calculate the it predicts .j(pa_j)=p(x_j|pa_j)p(pa_j ) .then p(g|d ) & = & + & = & , where means the numerator summed over . if is a modular feature set , p(|d ) = , [ eq - p - f - d - unord ] where means the numerator with replaced by . in this section, we will discuss the meta cb net which defines ordered modular models .the meta cb net for ordered modular models is illustrated by fig.[fig - ord - modular ] for the case of 3 nodes , .comparing figs.[fig - unord - modular ] and [ fig - ord - modular ] , we see that unordered modular models presume that the different nodes of the graph we are trying to discover , are uncorrelated , an unwarranted assumption in some cases ( for example , if two nodes have a common parent ) . ordered modular models permit us to model some of the correlation between the nodes .as for the unordered case described in sec.[sec - class - unord ] , we will assume that the random variable takes on values : g=(pa_j)_j_n , and the random variable takes on values d = x^ns__0^ms__1^m s__n-1^m .we will put a line over the as in for probability distributions referring to the ordered modular case to distinguish them from those referring to the unordered modular case , which we will continue to represent by without the overline .we will assume that is proportional to so that _ pa_j ^ ( pa_j|)=1 .let ( g|)= _ j ( pa_j|)= _ j\ { ( pa_j^)(pa_j| ) } .note that this implies that is proportional to .let ( d|g)= _ j ( x_j|pa_j ) , where ( x_j|pa_j ) = _ tm_j ( x_j|pa_j , tm_j ) ( tm_j|pa_j ) . as in the unordered case described in sec.[sec - class - unord ], can be modelled by using a dirichlet or some other reasonable probability distribution .now that our meta cb net for ordered modular models is fully defined , we can calculate the it predicts .j(pa_j|)=(x_j|pa_j)(pa_j| ) .then ( g|d ) & = & + & = & . note that if ( x_j|pa_j)= p(x_j|pa_j ) , ( pa_j|id)=p(pa_j)(pa_j ) , and ( ) = ( , i d ) , where is the identity permutation , then ( g|d ) = p(g|d ) .[ eq - pp - to - p-1 ] if is a modular feature set , ( |d ) = .next we will assume that ( pa_j| ) = ( pa_j|^ ) , _j(pa_j|^ ) when this is true , it is convenient to define the following functions for all and : h(j^|^)= _ pa_j^1__j^(pa_j^ ) ( pa_j^^)_j^(pa_j^|^ ) . 
can be expressed in terms of the functions as follows : ( |d ) = .[ eq - p - f - d - ord - long ] suppose that for any , ( ) = [ eq - special - p - sigma ] where is some non - negative function defined for all and .a completely general has real degrees of freedom . on the other hand , the special given by eq.([eq - special - p - sigma ] )has degrees of freedom so it is just a poor facsimile of the full .nevertheless , this special is a nice bridge function between two interesting extremes : if is a constant function , then is too .furthermore , if , then it is easy to see that , where is the identity permutation .but is just the unordered modular model .we conclude that eq.([eq - special - p - sigma ] ) includes the uniform distribution and the unordered modular model as special cases .another nice feature of the special of eq.([eq - special - p - sigma ] ) is that for each , we can redefine so that it absorbs the corresponding function .thus , if we assume the special , then , without further loss of generality , eq.([eq - p - f - d - ord - long ] ) becomes ( |d ) = .[ eq - p - f - d - ord ] for instance , when , we have ( | d)= . hence .c h_2|\{1,0}h_1|0h_0 + h_2|\{0,1}h_0|1h_1 } a = h_2|\{1,0 } ( h_1|0h_0 + h_0|1h_1 ) + .c h_1|\{0,2}h_0|2h_2 + h_1|\{2,0}h_2|0h_0 } b = h_1|\{0,2 } ( h_0|2h_2 + h_2|0h_0 ) + .c h_0|\{2,1}h_2|1h_1 + h_0|\{1,2}h_1|2h_2 } c = h_0|\{2,1 } ( h_2|1h_1 + h_1|2h_2 ) } ( |d)= .in this section , we will give quantum circuits for calculating for both unordered and ordered modular models .two types of sums , sums over graphs , and sums over permutations , need to be performed to calculate .the methods proposed in this section perform both of these sums using a grover - like algorithm in conjunction with the techniques of blind targeting and targeting two hypotheses , in the style discussed in refs. in this section , we present one possible method for calculating for ordered modular models , where is given by eq.([eq - p - f - d - ord ] ) .eq.([eq - p - f - d - ord ] ) is just a sum over all permutations in .we have already shown how to do sums over permutations in ref. .so we could consider our job already done .however , using the method of ref. would entail the non - trivial task of finding a way of compiling the argument under the sum over permutations , a complicated -fold product of functions . in this section, we will give a method for calculating eq.([eq - p - f - d - ord ] ) that does not require compiling this -fold product of functions . our method for calculating consists of applying the algorithm afga to a target state .] of ref. in the way that was described in ref. , using the techniques of targeting two hypotheses and blind targeting . as in ref. , when we apply afga in this section , we will use a sufficient target .all that remains for us to do to fully specify our circuit for calculating is to give a circuit for generating .see fig.[fig - halfmoon ] where a halfmoon vertex that we will use in the next figure is defined. 
controlled gates with one of these halfmoon vertices at the bottom should be interpreted as two gates , one with a control replacing the halfmoon , and one with a control replacing it .the one with the control should have at the top a rotation and the one with the control should have at the top a hadamard matrix .all controls acting on qubits other than the qubits at the very top and very bottom are kept the same .note that is a sum of terms , which is more than terms , but the controls in those terms together with the act of taking the matrix element between and , reduces the number of summed over terms to : to draw the circuit of fig.[fig - qjen - ckt ] , especially for much larger than 3 , requires that one know how to enumerate all possible combinations of elements from a set of elements .a simple algorithm for doing this is known ( ref. ) .it s based on a careful study of the pattern in simple examples such as this one : a more serious problem with using the circuit of fig.[fig - qjen - ckt ] for large is that as eq.([eq - exp - n2b ] ) indicates , the number of qubits grows exponentially with so the circuit fig.[fig - qjen - ckt ] is too expensive for large s .however , one can make an assumption which does nt seem too restrictive , namely that the in - degree ( number of parent nodes ) of all nodes of the graph is , where the bound does not grow with .define for example , consider fig.[fig - qjen - ckt ] .if for that figure , then we can omit all the qubits , and the rotations for .in other words , fig.[fig - qjen - ckt ] can be simplified to fig.[fig - qjen - ckt - restrict ] .claim [ cl - qjen - ckt ] still holds if we replace by 1 and by in eqs.([eq - z0-z1-claim ] ) .
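to make the combination enumeration and the in - degree restriction discussed above concrete , here is a small sketch ( our own , with illustrative numbers rather than values from the paper ) that lists the admissible parent sets of a node when the in - degree is bounded by k , and compares their count with the unrestricted 2^(n-1) per node .

from itertools import combinations
from math import comb

def bounded_parent_sets(n, j, k):
    """Candidate parent sets of node j, drawn from the other n-1 nodes,
    restricted to in-degree at most k."""
    others = [i for i in range(n) if i != j]
    return [set(c) for r in range(k + 1) for c in combinations(others, r)]

def candidate_counts(n, k):
    """sum_{r=0}^{k} C(n-1, r) admissible sets per node, versus 2^(n-1) without the bound."""
    return sum(comb(n - 1, r) for r in range(k + 1)), 2 ** (n - 1)

for n in (4, 10, 20):
    bounded, full = candidate_counts(n, k=2)
    print(n, bounded, full)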
|
we give some quantum circuits for calculating the probability of a graph given data . together with a transition probability matrix for each node of the graph , constitutes a classical bayesian network , or cb net for short . bayesian methods for calculating have been given before ( the so - called structural modular and ordered modular models ) , but these earlier methods were designed to work on a classical computer . the goal of this paper is to " quantum computerize " those earlier methods .
|
data within an event often relate to one another , e.g. , tracks are often matched to showers in the electro - magnetic calorimeter .a simple object - oriented design for objects in an event has these data items containing pointers to one another .unfortunately , this causes serious dependency problems : large compilation times , extremely long link times , and broken code affects more systems . using an object - relational model avoids these problems and allows new possibilities .a simple object - oriented design for data in an event is shown in figure [ oo ] .this design has related data grouped into classes , e.g. , track and em shower classes .the data values are stored in objects of the appropriate class .for example , data describing one track is stored in one object of class type track .finally , links between objects are implemented by embedding pointers into the objects .for example , em shower objects hold pointers to any track which is believed to be matched with that particular shower .the appeal of this design is that it is easy for users to navigate the relationships between objects .unfortunately , the simple object - oriented approach leads to many problems .the first problem arises in the software s interfaces . adding new relationships between objects means changing the classes involved .this change forces a recompilation of all code that uses those classes .in addition , published objects ( objects that are made available to other parts of the code by placing the objects into the event ) must be mutable .this is because we need to be able to change an object in order to set its relationship to another object .this design also raises the question of how to handle links in the case where multiple algorithms produce the same data .for example , tracks from different track - finders could be matched to the same em showers .another question is where to put the data that describes the relationship . in the `` track - shower ''matching example , where does the distance between the track and the shower live ?a final question is , how do two people refer to the same object if each has made a sub - selection of a list ?this often arises when people are comparing results from two different analyses that are looking for the same decay mode .another set of problems arise for compilation and linking of code . in highly coupled systems , if one piece of code breaks , the whole system can break .e.g. , if tracking is broken a user may not be able to do em shower work .there are standard ways to decrease dependencies in c or c++ but the techniques may not be known by all developers . to avoid excess compilation dependencies in c or c++ , you must forward declare data in the header files rather than including the data object header files . to avoid excess linking dependencies, associated objects can not internally access member functions of each other .e.g. 
, we can not have a function that calculates the energy of an em shower divided by momentum of the track .it is possible to relax this requirement if you organize your code so that the associated routines are in a separate object file .this works since many linkers force resolution of all symbols found in an object file .a further complication is that reference counting smart pointers cause strong compile and link - time dependencies which is unfortunate since they make memory management easier .the last set of problems we will discuss occur in object storage .direct references in objects complicate storage .this arises since the storage system needs to convert pointers to and from persistent values .if objects use bidirectional links , it is necessary to construct both objects before linking them . to simplify the storage system, developers often couple their objects directly to the storage system .unfortunately , this coupling locks the developer into using only one storage mechanism even if that mechanism is not appropriate for all the experiment s data .reading and writing objects causes compile , link and runtime dependencies between classes .this is true even if objects only hold pointers to other types of objects .it is possible to avoid some of these dependencies if the developer is willing to read back unlinked objects .unfortunately , use of such unlinked objects forces physicists who use the system to tell the system when the links should be made .so the user is burdened with the responsibility to be sure the link is made before she tries to use the link .the problems mentioned in the previous section led us to try an object - relational approach . in this approach ,no objects have pointers to objects outside atomic storage boundaries .e.g. , mc particles can hold pointers to their children if they are stored as a unit .a second requirement of this approach is that all objects in lists must have a unique identifier .physicists can use the identifier when talking with other physicists about the objects . in our system, we use our own templated table class to hold lists of objects which sort the objects via their identifier method .also in our system , lists are identified via unique keys based on the type of the objects in the list plus two character strings .therefore objects can be uniquely identified by what list it is in and by what identifier it has within the list .the final requirement of the object - relational approach is to define relationships via separate objects , which we call lattices .a lattice is an object which links relationship data ( e.g. , the distance between a track and an em shower ) to the identifiers of two different objects ( denoted by left and right ) .the lattice supports all 16 possible configurations for links .a configuration is defined by four separate sub - configurations where each sub - configuration has two choices .the four sub - configurations are : * 1 or many lefts per link * 1 or many rights per link * 1 or many links per left * 1 or many links per right figure [ or ] shows an example of the object - relational approach .the figure shows the hit , track and em shower objects with each object having a unique identifier within its respective list . between the hit and track listsis the hit - to - track lattice . 
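before walking through the lattice contents in detail , here is a minimal sketch of the idea in python ( the experiment's code is c++ , and the class and field names below are our own rather than cleo iii interfaces ) : objects never hold pointers to one another , and a lattice stores the links , keyed by the identifiers on each side , together with the data that belongs to the relationship itself .

from dataclasses import dataclass, field

@dataclass
class Link:
    """One association: the identifiers on each side plus the data that
    belongs to the link itself (e.g. a track-shower matching distance)."""
    left_id: int
    right_id: int
    data: dict = field(default_factory=dict)

class Lattice:
    """Holds the links between two identifier-sorted lists of objects."""
    def __init__(self, left_key, right_key):
        self.left_key = left_key        # e.g. ("Track", "default", "")
        self.right_key = right_key      # e.g. ("EMShower", "default", "")
        self.links = []

    def add(self, left_id, right_id, **data):
        self.links.append(Link(left_id, right_id, dict(data)))

    def rights_of(self, left_id):
        return [l.right_id for l in self.links if l.left_id == left_id]

    def lefts_of(self, right_id):
        return [l.left_id for l in self.links if l.right_id == right_id]

# usage: record that track 1 matches em shower 2, keeping the distance on the link
matches = Lattice(("Track",), ("EMShower",))
matches.add(1, 2, distance=3.7)
print(matches.rights_of(1))   # [2]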
within the latticeyou can see the separate objects holding the different link information .for example , the first link in the hit - to - track lattice says that hit 1 ( denoted by the left hand number ) is connected with track 1 ( denoted by the right hand number ) .similarly , the track to em shower lattice is shown between the track and em shower lists .the object - relational approach has many advantages .first , it shortens link times . in our experience , linking usually takes less than 30 seconds on a moderately powerful machine .however , we use dynamic loading so we only have to link to the libraries a module directly needs and this reduction in the number of libraries needed for linking also contributes to our short link times and allows compilation on very moderate machines .second , this approach simplifies the code used for storage . because the system is not coupled to one storage mechanism , it is easy to support many specialized storage formats .third , this approach speeds up data read - back since we only retrieve data a user actually uses .e.g. , we can ask if a track is matched to an em shower without needing to construct the em showers .fourth , it is possible to use multiple data sources ( each with their own format ) on read - back .e.g. , our system can build an event by combining a physicist s data skim with the experiment s event database .one disadvantage of this approach is it makes navigating the relationships between objects more complex .to offset this disadvantage , we have created navigation objects that give direct access to related objects .these objects internally look up the relationship information in the appropriate lattice and then use the regular data access mechanism to retrieve the appropriate related objects .effectively , the navigation objects do what users would have to do in order to obtain related objects . to avoid interdependencies in critical software ,only analysis code is allowed to use navigation objects . additionally , we have taken special care so that only by accessing an object via navigation does the users code become compile / link - time dependent on that object. e.g. , if a user does not use em showers then you do not need to link to them , even though the navigation tracks could access them .finally , to make code maintenance easier , we only allow the one library that holds the navigation objects to have interdependencies between objects .it also means that only the developer in charge of the navigation library has to be an expert on how to minimize interdependencies in c++ code .the general wisdom when writing code is that compile / link / run - time dependencies make code less robust .we have found it possible to avoid unnecessary dependencies by encapsulating relationships between objects into a separate object .however , providing direct link objects only to analysis users works well .their code usually accesses most high level data objects , and analysis code has the shortest usage lifetime so long - term maintenance issues are less important . by following the object - relational approach ,we have seen our user s productivity and satisfaction increase because they have gained shorter compile and run times .9 j. lakos , _ large scale c++ software design _ , addison - wesley , 1996 .j. thayer , event bookkeeping for cleo3 , _ proceedings of advanced computing and analysis techniques in physics research : vii international workshop _, 2001 , 149 - 151 .
|
with the use of object - oriented languages for hep , many experiments have designed their data objects to contain direct references to other objects in the event ( e.g. , tracks and electromagnetic showers have references to each other to denote matches ) . unfortunately , this creates tremendous dependencies between packages , which lead to brittle development systems ( e.g. , if the electromagnetic code has a problem you may not be able to compile the tracking code ) and make the storage system more complex . we discuss how the cleo iii experiment avoided these problems by treating an event as an object - relational database . the discussion will include : the constraints we placed on our objects ; our use of a separate association class to deal with inter - object references ; and our ability to use multiple sources to supply different data items for one event .
|
we consider multiple hypothesis testing where the underlying tests are dependent .such testing problems arise in many applications , in particular , in the fields of genomics and genome - wide association studies , but also in astronomy and other fields .popular multiple - testing procedures include the bonferroni holm method which strongly controls the family - wise error rate ( fwer ) , and the benjamini yekutieli procedure which controls the false discovery rate ( fdr ) , both under arbitrary dependence structures between test statistics .if test statistics are strongly dependent , these procedures have low power to detect true positives .the reasons for this loss of power are well known : loosely speaking , many strongly dependent test - statistics carry only the information equivalent to fewer `` effective '' tests . hence , instead of correcting among many multiple tests , one would in principle only need to correct for the smaller number of `` effective '' tests .moreover , when controlling some error measure of false positives , an oracle would only need to adjust among the tests corresponding to true negatives .in large - scale sparse multiple testing situations , this latter issue is usually less important since the number of true positives is typically small , and the number of true negatives is close to the overall number of tests .the dependence among tests can be taken into account by using the permutation - based westfall young method , already used widely in practice ( e.g. , ) . under the assumption of subset - pivotality( see section [ secsinglestepwy ] for a definition ) , this method strongly controls the fwer under any kind of dependence structure . in this paperwe show that the westfall young permutation method is an optimal procedure in the following sense .we introduce a single - step oracle multiple testing procedure , by defining a single threshold such that all hypotheses with -values below this threshold are rejected ( see section [ secsinglestep ] ) .the oracle threshold is the largest threshold that still guarantees the desired level of the testing procedure .the oracle threshold is unknown in practice if the dependence among test statistics and the set of true null hypotheses are unknown .we show that the single - step westfall young threshold approximates the oracle threshold for a broad class of testing problems with a block - dependence and sparsity structure among the tests , when the number of tests tends to infinity .our notion of asymptotic optimality relative to an oracle threshold is on a general level and for any specified test statistic .the power of a multiple testing procedure depends also on the data generating distribution and the chosen individual test(s ) : we do not discuss this aspect here . 
instead , our goal is to analyze optimality once the individual tests have been specified .our optimality result has an immediate consequence for large - scale multiple testing : it is not possible to improve on the power of the westfall young permutation method while still controlling the fwer when considering single - step multiple testing procedures for a large number of tests and assuming only a block - dependence and sparsity structure among the tests ( and no additional modeling assumptions about the dependence or clustering / grouping ) .hence , in such situations , there is no need to consider ad - hoc proposals that are sometimes used in practice , at least when taking the viewpoint that multiple testing adjusted -values should be as model free as possible .there is a small but growing literature on optimality in multiple testing under dependence .sun and cai studied and proposed optimal decision procedures in a two - state hidden markov model , while genovese et al . and roeder and wasserman looked at the intriguing possibility of incorporating prior information by -value weighting .the effect of correlation between test statistics on the level of fdr control was studied in benjamini and yekutieli and benjamini et al . ; see also blanchard and roquain for fdr control under dependence .furthermore , clarke and hall discuss the effect of dependence and clustering when using `` wrong '' methods based on independence assumptions for controlling the ( generalized ) fwer and fdr .the effect of dependence on the power of higher criticism was examined in hall and jin .another viewpoint is given by efron , who proposed a novel empirical choice of an appropriate null distribution for large - scale significance testing .we do not propose new methodology in this manuscript but study instead the asymptotic optimality of the widely used westfall young permutation method for dependent test statistics .after introducing some notation , we define our notion of a single - step oracle threshold and describe the westfall young permutation method .let be a data matrix containing independent realizations of an -dimensional random variable with distribution and possibly some additional deterministic response variables ._ prototype of data matrix ._ to make this more concrete , consider the following setting that fits the examples described in section [ secexamples ] .let be a deterministic variable , and allow the distribution of to depend on . for each value , , we observe an independent sample of .we then define to be an -dimensional matrix by setting for and for and .thus , the first row of contains the -variables , and the column of corresponds to the data sample . based on , we want to test null hypotheses , , concerning the components of . 
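as a small illustration of the prototype data matrix and of how a prototype permutation acts on it ( the dimensions and names below are our own choices , not taken from the paper ) :

import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 6                                     # n samples, m tested components (assumed sizes)
y = rng.integers(0, 2, size=n)                   # deterministic class labels: the first row
data = rng.normal(size=(m, n))                   # one row per tested component
X = np.vstack([y, data])                         # (m+1) x n prototype data matrix

def apply_permutation(X, g):
    """A prototype permutation rearranges only the first row (the y-variables)."""
    Xg = X.copy()
    Xg[0] = X[0, g]
    return Xg

Xg = apply_permutation(X, rng.permutation(n))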
for concrete examples ,see section [ secexamples ] .let be the indices of the true null hypotheses , and let be the indices of the true alternative hypotheses , that is , .let be a distribution under the complete null hypothesis , that is , .we denote the class of all distributions under the complete null hypothesis by .suppose that the same test is applied for all hypotheses , and let ] for -tests and related approaches , while is discrete for permutation tests and rank - based tests .let , , be the -values for the hypotheses , based on the chosen test and the data .suppose that we knew the true set of null hypotheses and the distribution of under ( which is of course not true in practice ) .then we could define the following single - step oracle multiple testing procedure : reject if , where is the -quantile of under . throughout , we define the maximum of the empty set to be zero , corresponding to a threshold that leads to zero rejections .this oracle procedure controls the fwer at level , since , by definition , and it is optimal in the sense that values with no longer control the fwer at level . the westfall young permutation method is based on the idea that under the complete null hypothesis , the distribution of is invariant under a certain group of transformations , that is , for every , and have the same distribution under .romano and wolf refer to this as the `` randomization hypothesis . '' in the sequel , is the collection of all permutations of , so that the number of elements equals . _ prototype permutation group acting on the prototype data matrix . _ in the examples in section [ secexamples ], is a prototype data matrix as described in section [ secnotation ] .the prototype permutation leads to a matrix obtained by permuting the _ first _ row of ( i.e. , permuting the -variables ) .for all examples in section [ secexamples ] , under the complete null hypothesis , the distribution of is then identical to the distribution of for all , so that the randomization hypothesis is satisfied .we suppress the dependence of on the sample size for notational simplicity .the single - step westfall young critical value is a random variable , defined as follows : where denotes the indicator function , and represents the permutation distribution for any function mapping into .in other words , is the -quantile of the permutation distribution of .our main result ( theorem [ thoptimal ] ) shows that under some conditions , the westfall young threshold approaches the oracle threshold .it is easy to see that the westfall young permutation method provides weak control of the fwer , that is , control of the fwer under the complete null hypothesis . 
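a brute - force monte - carlo sketch of the single - step westfall - young critical value just defined : random permutations of the class labels stand in for the full permutation group , and a two - sample t - test is used purely as an example of a marginal test ( function and variable names are ours ) .

import numpy as np
from scipy import stats

def westfall_young_threshold(X, n_perm=2000, alpha=0.05, seed=1):
    """alpha-quantile of min_j p_j(gX) over random permutations g of row 0."""
    rng = np.random.default_rng(seed)
    labels, data = X[0].astype(bool), X[1:]
    min_p = np.empty(n_perm)
    for b in range(n_perm):
        g = rng.permutation(labels)              # permuted class labels
        _, p = stats.ttest_ind(data[:, g], data[:, ~g], axis=1)
        min_p[b] = p.min()
    return np.quantile(min_p, alpha)

# reject H_j whenever the observed p_j is at most this data-dependent threshold
rng = np.random.default_rng(0)
X = np.vstack([rng.integers(0, 2, 60), rng.normal(size=(100, 60))])
print(westfall_young_threshold(X))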
under the assumption of subset - pivotality, it also provides strong control of the fwer , that is , control of the fwer under any set of true null hypotheses .subset - pivotality means that the distribution of is identical under the restrictions and for all possible subsets of true null hypotheses .subset - pivotality is not a necessary condition for strong control ; see , for example , romano and wolf , westfall and troendle and goeman and solari .we consider the framework where the number of hypotheses tends to infinity .this framework is suitable for high - dimensional settings arising , for example , in microarray experiments or genome - wide association studies .( a1 ) block - independence : the -values of all true null hypotheses adhere to a block - independence structure that is preserved under permutations in .specifically , there exists a partition of such that for any pair of permutations , , b=1, ,b_m , are mutually independent under . here , the number of blocks is denoted by .[ we assume without loss of generality that for all , meaning that there is at least one true null hypothesis in each block ; otherwise , the condition would be required only for blocks with . ]sparsity : the number of alternative hypotheses that are true under is small compared to the number of blocks , that is , as .block - size : the maximum size of a block , , is of smaller order than the square root of the number of blocks , that is , as .let be a random permutation taken uniformly from . under , the joint distribution of is identical to the joint distribution of .let be the permutation distribution in ( [ eqpstar ] ) .there exists a constant such that for and all , the -values corresponding to true null hypotheses are uniformly distributed ; that is , for all and , we have .a sufficient condition for the block - independence assumption ( a1 ) is that for every fixed pair of permutations the blocks of random variables are mutually independent for .this condition is implied by block - independence of the last rows of the prototype for the examples discussed in section [ secexamples ] and for the prototype as in section [ secsinglestepwy ] .the block - independence assumption captures an essential characteristic of large - scale testing problems : a test statistic is often strongly correlated with a number of other test statistics but not at all with the remaining tests .the sparsity assumption ( a2 ) is appropriate in many contexts .most genome - wide association studies , for example , aim to discover just a few locations on the genome that are associated with prevalence of a certain disease .furthermore , assumption ( a3 ) requiring that the range of ( block- ) dependence is not too large , which seems reasonable in genomic applications : for example , when having many different groups of genes ( e.g. 
, pathways ) , each of them not too large in cardinality , a block - dependence structure seems appropriate .we now consider assumptions ( b1)(b3 ) , supposing that we work with a prototype data matrix and a prototype permutation group as described in sections [ secnotation ] and [ secsinglestepwy ] .assumption ( b1 ) is satisfied if each -value only depends on the and rows of .moreover , subset - pivotality is satisfied in this setting .assumption ( b3 ) is satisfied for any test with valid typei error control .assumption ( b2 ) is fulfilled with if for all where is the probability with respect to a random permutation taken uniformly from , so that the left - hand side of ( [ eqcondprob ] ) equals in ( [ eqt1 ] ) .note that assumptions ( b1 ) and ( b3 ) together imply that where the probability is with respect to a random draw of the data , and a random permutation taken uniformly from .thus , assumption ( b2 ) holds if ( [ eqtest ] ) is true for all when conditioned on the observed data .section [ secexamples ] discusses three concrete examples that satisfy assumptions and subset - pivotality . for our theorems in section [ secmainresult ] , it would be sufficient if ( [ eqt1 ] ) were holding only with probability converging to 1 when sampling a random , but we leave a deterministic bound since it is easier notationally , the extension is direct and we are mostly interested in rank - based and conditional tests for which the deterministic bound holds .we now give three examples that satisfy assump - tions , as well as subset - pivotality . as in section [ secnotation ] , let be a deterministic scalar class variable and an -dimensional vector of random variables , where the distribution of can depend on . let the prototype data matrix and the prototype group of permutations be defined as in sections [ secnotation ] and [ secsinglestepwy ] , respectively . in all examples ,we work with tests with valid type i error control , and each -value only depends on the and rows of . hence , assumptions ( b1 ) , ( b3 ) and subset - pivotality are satisfied , and we focus on assumption ( b2 ) in the remainder . for the examples in sections [ seclocationshift ] and [ secmarginal ] , we assume that there exists a and an -dimensional random variable such that we omit the dependence of on in the following for notational simplicity .we consider two - sample testing problems for location shifts , similar to example 5 of romano and wolf .using the notation in ( [ eqxy ] ) , is a binary class variable , and the marginal distributions of are assumed to have a median of zero .we are interested in testing the null hypotheses versus the corresponding two - sided alternatives , we now discuss location - shift tests that satisfy assumption ( b2 ) .first , note that all permutation tests satisfy ( b2 ) with , since the -values in a permutation test are defined to fulfill for all .permutation tests are often recommended in biomedical research and other large scale location - shift testing applications due to their robustness with respect to the underlying distributions .for example , one can use the wilcoxon test .another example is a `` permutation -test '' : choose the -value as the proportion of permutations for which the absolute value of the -test statistic is larger than or equal to the observed absolute value of the -test statistic for . 
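the marginal " permutation t - test " just described can be sketched as follows ; here we sample permutations rather than enumerating all of them , and count the observed statistic itself so the p - value stays strictly positive ( a standard monte - carlo stand - in , not the paper's exact construction ) .

import numpy as np
from scipy import stats

def permutation_t_pvalue(x, labels, n_perm=5000, seed=0):
    """Fraction of label permutations whose |t| is at least the observed |t|."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    t_obs = abs(stats.ttest_ind(x[labels], x[~labels]).statistic)
    hits = 1                                   # count the observed statistic itself
    for _ in range(n_perm):
        g = rng.permutation(labels)
        hits += abs(stats.ttest_ind(x[g], x[~g]).statistic) >= t_obs
    return hits / (n_perm + 1)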
then condition ( b2 )is fulfilled with with the added advantage that inference is exact , and the type i error is guaranteed even if the distributional gaussian assumption for the -test is not fulfilled .computationally , such a `` permutation -test '' procedure seems to involve two rounds of permutations : one for the computation of the marginal -value and one for the westfall young method ; see ( [ eqpstar ] ) .however , the computation of the marginal permutation -value can be inferred from the permutations in the westfall young method , as in meinshausen , and just a single round of permutations is thus necessary .suppose that we have a continuous variable in formula ( [ eqxy ] ) .based on the observed data , we want to test the null hypotheses of no association between variable and , that is , versus the corresponding two - sided alternatives .a special case is the test for linear marginal association , where the functions for are assumed to be of the form , and the test of no linear marginal association is based on the null hypotheses rank - based correlation test like spearman s or kendall s correlation coefficient are examples of tests that fulfill assumption ( b2 ) .alternatively , a `` permutation correlation - test '' could be used , analogous to the `` permutation -test '' described in section [ seclocationshift ] .contingency tables are our final example .let be a class variable with distinct values .likewise , assume that the random variable is discrete and that each component of can take distinct values , .as an example , in many genome - wide association studies , the variables of interest are single nucleotide polymorphisms ( snps ) .each snp ( denoted by ) can take three distinct values , in general , and it is of interest to see whether there is a relation between the occurrence rate of these categories and a category of a person s health status .based on the observed data , we want to test the null hypothesis for that the distribution of does not depend on , the available data for hypothesis is contained in the and rows of .these data can be summarized in a contingency table and fisher s exact test can be used .since the test is conditional on the marginal distributions , we have that for a random permutation and ( b2 ) is fulfilled with .we now look at the properties of the westfall young permutation method and show asymptotic optimality in the sense that , with probability converging to 1 as the number of tests increases , the estimated westfall young threshold is at least as large as the optimal oracle threshold , where can be arbitrarily small .this implies that the power of the westfall young permutation method approaches the power of the oracle test , while providing strong control of the fwer under subset - pivotality .all proofs are given in section [ secproofs ] .[ thoptimal ] assume and .then for any and any we note that the sample size can be fixed and does not need to tend to infinity .however , if the range of -values is discrete , the sample size must increase with to avoid a trivial result where the oracle threshold vanishes ; see also theorem [ thoptimaldiscrete ] where this is made explicit for the wilcoxon test in the location - shift model of section [ seclocationshift ] .theorem [ thoptimal ] implies that the actual level of the westfall young procedure converges to the desired level ( up to possible discretization effects ; see section [ sectiondiscrete ] ) . 
to appreciate the statement in theorem [ thoptimal ] in terms of power gain ,consider a simple example .assume that the hypotheses form blocks .in the most extreme scenario , test statistics are perfectly dependent within each block . in such a scenario , the oracle threshold ( [ eqoracle ] ) for each individual -value is then {1-\alpha } , \ ] ] which is larger than , but very closely approximated by for large values of .thus , when controlling the fwer at level , hypotheses can be rejected when their -values are less than {1-\alpha} ] , that is , throughout , we denote the expected value , the variance and the covariance under by , and , respectively .let and .let and .then writing expression ( [ eqthoptimal ] ) in terms of and is equivalent to by definition , we thus have to show that as .first , we show in lemma [ lemmachati ] that there exists an such that for all and for all .this result is mainly due to the sparsity assumption ( a2 ) .second , we show in lemma [ lemmaapproxpermutationallsimple ] that theorem [ thoptimal ] follows by combining these two results . [ lemmachati ]let , , and assume and .then there exists an such that for all and for all .note that by definition . using the union bound, we have , for all and all , \\[-8pt ] & & { } + \sum_{j \in i'(p_m ) } p^ * \bigl(p_j(w ) \le s\bigr).\nonumber\end{aligned}\ ] ] hence , we only need to show that there exists an such that for all and all . by assumption ( b2 ) with constant , \\[-8pt ] & = & r \frac{|i'(p_m)|}{b } b c_{m , n}(\alpha).\nonumber\end{aligned}\ ] ] since as by assumption ( a2 ) , and is bounded above by under assumptions ( a1 ) and ( b3 ) ( see lemma [ lemmasumpib ] ) , we can choose a such that the right - hand side of ( [ eqlastline ] ) is bounded above by for all .this proves the claim in ( [ eqtoshownew ] ) and completes the proof .[ lemmaapproxpermutationallsimple ] let and and assume and .then let .the statement in the lemma is equivalent to showing that there exists an such that for all . by definition , \\[-8pt ] & = & \frac{1}{|\mathcal{g}| } \sum_{g \in\mathcal g } r(g , w),\nonumber\end{aligned}\ ] ] where ( we suppress the dependence on and for notational simplicity . ) let be a random permutation , chosen uniformly in , and let denote the identity permutation .then , by assumption ( b1 ) , it follows that by definition of [ see ( [ eqoracle ] ) ] , hence , the desired result ( [ eqts2 ] ) follows from a markov inequality as soon as one can show that the variance of ( [ eqpstareq ] ) vanishes as , that is , if as .let be two random permutations , drawn independently and uniformly from .then hence , in order to show ( [ eqpstarvar ] ) , we only need to show that define so that .we then need to prove that , as , using assumption ( a1 ) , the left - hand side in ( [ eqts3 ] ) can be written as ^ 2 .\ ] ] note that and ^ 2 ] under assumptions ( a1 ) , ( b1 ) and ( b2 ) .hence , using lemma [ lemmavar ] , it follows that ^ 2 \\ & \le & \bigl ( \log\{1/(1-\alpha)\}\alpha{r}^2 m_b b^{-1}\bigr)^2.\end{aligned}\ ] ] since under assumption ( a3 ) , claim ( [ eqts4 ] ) follows .[ lemmasumpib ] under assumptions and , we have let and .then where the inequality follows from the definition of , and the equality follows from assumption ( b3 ) and the fact that . summing ( [ eqpibc ] ) over yields the first inequality of ( [ eqboundsumpib ] ) . 
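stepping back from the proofs for a moment , a quick numeric check of the perfectly - dependent - blocks example given at the start of the main results ( the values of alpha , the number of blocks and the block size below are our own illustrative choices ) :

alpha, m_b = 0.05, 100                       # per-family level and block size
for B in (10, 100, 1000):                    # number of mutually independent blocks
    m = B * m_b                              # total number of hypotheses
    oracle = 1 - (1 - alpha) ** (1 / B)      # threshold under perfect within-block dependence
    bonferroni = alpha / m
    print(B, m, oracle, bonferroni, oracle / bonferroni)
# the oracle (and hence westfall-young) threshold is roughly m_b times larger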
to prove the second inequality of ( [ eqboundsumpib ] ) , note that assumption ( a1 ) and the definition of that \le\alpha.\ ] ] the maximum of under constraint ( [ eqconstraint ] ) is obtained when this implies for all , so that and this is bounded above by for all values of .[ lemmasupportf ] assume and .let be the distribution of , where is defined in ( [ eqdefmub ] ) .then .\ ] ] using assumption ( b2 ) with constant and the union bound , it holds that since , the support of is thus in the interval ] .suppose that the distribution of the two random variables and , conditional on , is given by then . by the assumption that and are bernoulli conditional on , it follows that . combining this with the law of iterated expectation and the fact that and are conditionally independent given , we obtain moreover , we have and similarly .hence , finally , by the assumption on the support of .first , note that ( w ) implies ( b1)(b3 ) . using the union bound and assumption ( b3 ), it holds for any that is an upper bound for .hence , \\[-8pt ] & \ge & \max\ { s \in s_n \dvtx m s \le\alpha\}.\nonumber\end{aligned}\ ] ] this implies that the oracle critical value is larger than zero if the set is nonempty , which is the case if .the smallest possible two - sided wilcoxon -value is .hence , it is sufficient to require that , or equivalently , that .note that ( a3 ) implies ( a3 ) .hence , under assumptions ( w ) , ( a1 ) , ( a2 ) and ( a3 ) , the result in theorem [ thoptimal ] applies .let .we will now show that under assumptions ( w ) , ( a1 ) and ( a3 ) , as such that , where was defined in ( [ eqalpha- ] ) .define . using the definition of and assumption ( a1 ), we have \\ & = & 1-\prod_{b=1}^b [ 1-\pi_b\{c^+_{m , n}(\alpha)\ } + \pi_b\ { c^+_{m , n}(\alpha)\ } - \pi_b\{c_{m , n}(\alpha)\ } ] .\nonumber\end{aligned}\ ] ] define the function \to{\mathbb{r}}$ ] by ,\end{aligned}\ ] ] so that the right - hand side of ( [ eqstart ] ) equals , where for . a first - order taylor expansion of around yields where . for all , we have \\ & = & - \frac{1-g_{m , n}(0)}{1-\pi_b\{c^+_{m , n}(\alpha)\ } } \ge- \frac { 1-g_{m , n}(0)}{1-m_b c^+_{m , n}(\alpha)},\end{aligned}\ ] ] where the inequality follows from for , by the union bound and assumption ( b3 ) . plugging this into ( [ eqtaylor ] ) yields \\[-8pt ] &= & g_{m , n}(0 ) \biggl ( 1 + \frac{\sum_{b=1}^b w_b}{1-m_b c^+_{m , n}(\alpha)}\biggr ) - \frac{\sum_{b=1}^b w_b}{1-m_b c^+_{m , n}(\alpha ) } + r.\nonumber\end{aligned}\ ] ] the definition of implies that for all and .hence , if as such that , then the right - hand side of ( [ eqtaylor2 ] ) converges to and the proof is complete .we first consider . by definition , there is no value such that .hence , where the inequality follows from the union bound , and the last equality is due to assumption ( b3 ) .this implies similarly , we have note that by lemma [ lemmasumpib ] and ( and hence ) by assumption ( a3 ) . 
hence , in order to prove ( [ eqtwothingstoshow ] ) , it suffices to show that let the ordered -values in , based on a two - sided wilcoxon test with equal sample sizes in both classes , be denoted by , where .it is well known that and , where is the number of integer partitions of such that neither the number of parts nor the part magnitudes exceed .let satisfy .then this ratio converges to 1 if .recall that [ see ( [ eqcmnge ] ) ] .hence , since as such that , we have that under these conditions and .thus ( [ eqratioconv ] ) holds and hence implies ( [ eqtwothingstoshow ] ) , which completes the proof .we would like to thank two referees for constructive comments .
|
test statistics are often strongly dependent in large - scale multiple testing applications . most corrections for multiplicity are unduly conservative for correlated test statistics , resulting in a loss of power to detect true positives . we show that the westfall young permutation method has asymptotically optimal power for a broad class of testing problems with a block - dependence and sparsity structure among the tests , when the number of tests tends to infinity .
|
wireless operators often install low - power _ hotspot microcell _ base stations to provide coverage to small high - traffic regions within a larger coverage area .these microcells enhance the user capacity and coverage area supported by the existing high - power _ macrocell _ base stations . in this paper , we study these gains for a _ two - tier code division multiple access ( cdma ) network _ in which both the macrocells and microcells use the same set of frequencies .specifically , we study the effects , on the uplink capacity and coverage area , of transmit power constraints and variable power fading due to multipath . + +earlier studies have examined the uplink performance of these single - frequency , two - tier cdma systems , e.g. , - . in , we showed how to compute the uplink user capacity for a single - macrocell / single - microcell ( two - cell ) system using exact and approximate analytical methods that accounted for random user locations , propagation effects , signal - to - interference - plus - noise ratio ( sinr)-based power control , and various methods of base station selection . in , we calculated capacity gains under similar assumptions for a system composed of multiple macrocells and/or multiple microcells ; the results pointed to a roughly linear growth in capacity as the number of these base stations increase , where the constants of the linear curve were derivable solely from the analysis for the two - cell case .this linear approximation was conjectured based on the trends observed from several simulations .more recently , we have developed an _ analytical _ approximation to the user capacity of a multicell system which is much tighter than the empirical result of ; and is valid over a larger number of embedded microcells .interestingly , this new analytical approach also depends on constants obtained from a far - simpler two - cell analysis .these studies therefore highlight the importance of understanding two - cell performance in computing the performance of larger multicell systems .+ + the capacity gains demonstrated in these earlier works were based on assumptions that 1 ) user terminals have unlimited power ; and 2 ) that the wireless channel is so wideband that user signals have constant output power after rake receiver processing .this latter condition is equivalent to assuming that user signals go through an infinitely - dispersive channel before reception . in this paper, we remove these conditions .portions of this work were presented in and a companion paper on _ downlink _ capacity showed that this type of system is uplink - limited . here , we improve the presentation in and present several new results .specifically , we show how the capacity / coverage tradeoff under finite power constraints can vary significantly with shadow fading .further , we bridge our study of finite power constraints and finite channel dispersion by presenting a new analysis of user capacity in two - cell systems under _ both _ maximum transmit power constraints and variable power fading . 
finally and most significantly , we develop and verify analytical expressions for the total user capacity of _ multicell _ two - tier cdma systems under finite dispersion .+ + section [ sysdes ] describes the basic two - cell system and the model used to capture the power gain between a user and a base ; and it summarizes our previous results for total uplink capacity .section [ pmax_sec ] assumes a limit on terminal transmit power and approximates total capacity as a function of this power and cell size .section [ varfading_sec ] relaxes the condition on infinite dispersion and approximates the total capacity , for both limited and unlimited terminal power , as a function of the multipath delay profile .section [ multimulti_sec ] extends the results for finite dispersion and unlimited terminal power to the case of multiple macrocells and multiple microcells .simulations confirm the accuracy of the approximation methods over a wide range of practical conditions and assumptions .* the system : * we first consider a system with a macrocell and an embedded microcell which together contain total users , comprising in the macrocell and in the microcell .we assume users have random codes of length , where is the system bandwidth and is the user rate .each user is power - controlled by its base so as to achieve a required uplink power level .it was shown in that the required received power levels to meet a minimum sinr requirement of are and at the macrocell and microcell base stations , respectively . here, denotes the single - cell pole capacity , is the power spectral density of the additive white gaussian noise , and and are _ normalized cross - tier interference _ terms .specifically , is the total interference power at the macrocell due to microcell users and is the total interference power at the microcell due to macrocell users .thus , where represents the set of microcell users , represents the set of macrocell users , and and are the path gains from user to the macrocell and microcell base stations , respectively .we say that a given arrangement of user locations and path gains is _ feasible _ if and only if the common denominator in ( [ pwr1 ] ) and ( [ pwr2 ] ) is positive .the fraction of all possible cases ( i.e. , all combinations of and ) for which this occurs is called the _ probability of feasibility_. the cross - tier interference terms depend on the path gains for all users to both base stations and on the method by which users select base stations , leading to the sets and .+ + * path gain model : * we model the instantaneous power gain ( i.e. , the sum of the power gains of the resolvable multipaths ) between user and base station as where is the distance between user and base station ; is the breakpoint distance of base station at which the slope of the decibel path gain versus distance changes ; is a gain factor that captures the effects of antenna height and gain at base station ; represents _ shadow fading _ ; and is the variable fading due to multipath - .note that and are different for each user - base pair ( ) but , for convenience , we suppress the subscripts for these terms .since the antenna height and gain are greater at the macrocell , we can assume .we model as a zero mean gaussian random variable with variance for base station .the fading due to multipath , , is scaled to be a unit - mean random variable ( ) . 
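a sketch of the path - gain model in code ; the dual - slope form and the numeric constants below are placeholders of our own , since the exact expression and parameter values are not reproduced in the text above .

import numpy as np

rng = np.random.default_rng(0)

def path_gain(d, breakpoint, gain_db, sigma_db, rho=1.0):
    """Instantaneous power gain: a two-slope distance law (steeper beyond the
    breakpoint), log-normal shadowing with standard deviation sigma_db, and a
    unit-mean multipath factor rho; rho = 1 recovers the infinitely dispersive case."""
    shadow_db = rng.normal(0.0, sigma_db)
    mean_db = gain_db - 20 * np.log10(d) - 20 * np.log10(1.0 + d / breakpoint)
    return rho * 10 ** ((mean_db + shadow_db) / 10)

print(path_gain(d=500.0, breakpoint=100.0, gain_db=0.0, sigma_db=8.0))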
in an infinitely dispersive channel , .let denote the path gain between user and base in an infinitely dispersive channel .thus , represents the local spatial average of , and .+ + * base - selection scheme : * we assume a user chooses to communicate with the macrocell base station if exceeds by a fraction , called the _ desensitivity _ parameter . thus , in both finitely - dispersive and infinitely dispersive channels , the user selects base stations according to the local mean path gains . with no loss in generality , we assume in this study. + + * capacity : * the path gain model and the selection scheme described above imply that the cross - tier interference terms are random variables ; they depend on the random variations due to shadow and multipath fading and on the random user locations . in , we demonstrated how to compute the cumulative distribution functions ( cdf s ) of the two cross - tier interference terms in ( [ im_imu ] ) under the condition of infinite dispersion .let and denote these cross - tier terms under that condition .the cdf s of and were used to compute the exact uplink user capacity in a two - cell system with unlimited terminal power .in addition , the mean values of and were used to reliably approximate attainable user capacity for the two - cell system as where and are the mean values ( over all user locations and shadowing fadings ) of single terms in the sums and , respectively ., we used the fact , proved in , that maximal total capacity results when . ] the total user capacity of the two - cell system thus depends on ( a system parameter ) and the product .we computed this product under various conditions and observed that it ( and therefore ) are robust to variations in propagation parameters , separation between the two base stations , and user distribution .its value under all conditions examined was about 0.09 , leading to the robust result .we use simulation to study the two - cell system in figure [ fxy ] . specifically , we assume a square geographic region , with sides of length . at a distance from the center of this squareis a smaller square region , with sides of length .a portion of the total user population is uniformly distributed over the larger square region and the remaining users are uniformly distributed over the smaller square region .the smaller square thus represents a traffic hotspot within the larger coverage region .a macrocell base with antenna height is at the center of the larger square , while a microcell base with antenna height is at the center of the smaller square .we assume each user terminal has antenna height .these antenna heights are used to compute the distance between transmit antenna at each user terminal and the receive antenna at the two base stations .the breakpoint of the path loss to the microcell base is engineered to be , to help ensure that the microcells provide strong signals to those users contained within the high density region , and sharply diminishing signals to users outside it .finally , we assume that on average half of the users are uniformly distributed over the hotspot region surrounding the microcell base .this is done to obtain a roughly equal number of users for each base , i.e. , to ensure that the maximal user capacity occurs with high probability . 
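to make the model above easy to experiment with , the sketch below ( python ) implements a dual - slope local - mean path gain with lognormal shadowing and the 0 db desensitivity rule for base selection . the slope values ( 2 and 4 ) , gain factors , breakpoints , shadowing standard deviations and user distances are illustrative assumptions only , since the corresponding parameters are left symbolic in the text .

```python
import numpy as np

def local_mean_gain_db(d, d_break, alpha_db, sigma_db, rng, n1=2.0, n2=4.0):
    """local-mean (shadowed) path gain in db for a dual-slope model: slope n1
    before the breakpoint and n2 after it.  the slopes, gain factor alpha_db,
    breakpoint and shadowing sigma are illustrative assumptions."""
    d = np.maximum(np.asarray(d, dtype=float), 1.0)
    loss = np.where(d <= d_break,
                    10.0 * n1 * np.log10(d),
                    10.0 * n1 * np.log10(d_break) + 10.0 * n2 * np.log10(d / d_break))
    shadow = rng.normal(0.0, sigma_db, size=d.shape)
    return alpha_db - loss + shadow

def select_base(g_macro_db, g_micro_db, desens_db=0.0):
    """a user joins the macrocell iff its local-mean gain exceeds the microcell
    gain by the desensitivity margin (taken as 0 db, i.e. delta = 1)."""
    return np.where(g_macro_db >= g_micro_db + desens_db, "macro", "micro")

rng = np.random.default_rng(1)
d_to_macro = rng.uniform(20.0, 500.0, size=1000)     # placeholder user-to-base distances
d_to_micro = rng.uniform(5.0, 150.0, size=1000)
g_m = local_mean_gain_db(d_to_macro, d_break=100.0, alpha_db=-30.0, sigma_db=8.0, rng=rng)
g_mu = local_mean_gain_db(d_to_micro, d_break=20.0, alpha_db=-45.0, sigma_db=4.0, rng=rng)
print("fraction of users selecting the macrocell:",
      np.mean(select_base(g_m, g_mu) == "macro"))
```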
+ + using the parameter values listed in table 1 , we determine the total user capacity that can be sustained with 5% outage probability for various values of .figure [ pwr_out1 ] shows the resulting user capacity as a function of as defined in ( [ peruser_outage_mu ] ) .we observe the close correspondence between the approximate and simulation results .we also note that the user capacity goes to an asymptotic value as gets large and that , below a critical value of ( denoted by ) , the user capacity decreases sharply as decreases .( from the figure , . ) in other words , there are critical combinations of and such that , if we either reduce the transmit power constraint or increase the desired coverage area , the overall user capacity noticeably drops ; and , for values of , the system operates as if there is unlimited terminal power .+ + in an earlier study , we found that total user capacity is nearly invariant to and , increasing _ very _ slightly as and increase .this finding was for unlimited terminal power .we now consider the uplink user capacity as a function of for various pairs of .the results , obtained via simulation , are in figure [ sigma_curves ] .even though the capacity for unlimited transmit power ( infinite ) is slightly larger for , , the curves increase much faster for smaller values of and .thus , although for unlimited transmit power the total user capacity is roughly invariant to and , it can vary significantly with these parameters under transmit power constraints .we now consider _ finite dispersion _( i.e. , a finite number of significant multipaths ) , which causes the sum of the multipath powers to fade with time as a user moves in the environment .we assume that users still select bases according to the _ slowly _ varying path gains , , but the fluctuations of signals and interferences due to multipath lead to instantaneous occurrences of outage .the instantaneous path gain from base to user is , where is independent and identically distributed ( i.i.d . ) over all .let represent the instantaneous power gain of the -th multipath component of a particular uplink channel .then , for that channel , where is the total number of multipaths and we assume a scaling such that . we consider four possible multipath delay profiles , for each of which has a different statistical nature .one is for the so - called _ uniform channel _ , in which the multipaths are i.i.d . and rayleigh - fading , i.e. , each has a power gain that is exponentially distributed with a mean of .+ + the other three delay profiles studied here are based on cellular channel models for the typical urban ( tu ) , rural area ( ra ) , and hilly terrain ( ht ) environments provided in third - generation standards .see , for example , , which tabulates in db for these environments .base selections ( and thus , the sets and ) are made according to the local average path gains , .the instantaneous cross - tier interference powers are where and are i.i.d .random variables , each distributed as in ( [ rho_def ] ) .we see from the definitions of and that , at any one instant , the system could become infeasible or , more generally , experience outage , depending on the values of and . 
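the unit - mean multipath gain defined above is straightforward to reproduce numerically : for independent rayleigh - fading paths each path power is exponentially distributed , and normalizing the mean path powers to sum to one makes the total gain unit mean . in the sketch below , the first profile is the 4 - path uniform channel of the text , while the second is a made - up non - uniform example rather than one of the standardized tu / ra / ht tables .

```python
import numpy as np

def rho_samples(profile_db, n_samples, rng):
    """draw samples of the unit-mean multipath power gain rho: independent
    rayleigh-fading paths whose average powers follow the given delay profile
    (in db), normalized so that e[rho] = 1."""
    p = 10.0 ** (np.asarray(profile_db, dtype=float) / 10.0)
    p = p / p.sum()                                   # scaling that makes rho unit mean
    return rng.exponential(1.0, size=(n_samples, p.size)) @ p

rng = np.random.default_rng(2)
profiles = {
    "uniform, l_p = 4": [0.0, 0.0, 0.0, 0.0],         # 4 equal-power paths
    "2-path (0, -6 db)": [0.0, -6.0],                 # made-up non-uniform example
}
for name, prof in profiles.items():
    r = rho_samples(prof, 200_000, rng)
    print(f"{name}: mean = {r.mean():.3f}, variance = {r.var():.3f}")
```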
here again , we seek the largest number of users , , that can be supported such that .+ + the exact method for calculating requires that the distributions of and be computed .the procedure is even more complex than before since we must incorporate the effects of the additional random quantities , and .thus , we resort to two analytical approximations . the first estimates total user capacity under finite dispersion but for large ( i.e. , ) ; and the second estimates capacity under both finite dispersion and finite transmit power requirements .+ + * estimated user capacity in a uniform channel : * the total user capacity in a uniform channel , , can be approximated by modifying the mean method presented in ( [ nunif ] ) .specifically , we postulate that where and the mean values and were defined earlier ; and , where and are i.i.d . gamma random variables with unit mean and degrees of freedom . in other words , represents either or , and represents the other . for a uniform channel with paths , the probability density of ( and therefore of or ) is we can compute the mean of as , where and we can therefore relate to , as follows : clearly , this approximation breaks down as approaches 1 , as it predicts zero capacity in that case . for , we obtain a simple relationship between user capacity and the degree of multipath dispersion . as becomes infinite , user capacity converges to the estimate given by ( [ nunif ] ) .+ + * estimated user capacity in a non - uniform channel : * the result in ( [ nu_approx ] ) can be used to approximate user capacity for _ any _ delay profile .consider a channel delay profile having some arbitrary variation of with over the paths .we can compute a _ diversity factor _ , , defined as the ratio of the square of the mean to the variance of in ( [ rho_def ] ) . for independent rayleigh paths , can be shown to be from this definition , we see that the diversity factor for a uniform channel is precisely .the proposed approximation makes use of this fact by first calculating the diversity factor for a given ( non - uniform ) channel .the approximate user capacity is then the value of corresponding to , i.e. , in ( [ nu_approx ] ) .
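as a concrete illustration of this mapping , the sketch below computes the diversity factor for an arbitrary set of relative path powers , using the standard result that , for independent rayleigh - fading paths with mean powers normalized to sum to one , the variance of the unit - mean sum equals the sum of the squared powers ; the example profiles are placeholders , not the standardized tu / ra / ht tables .

```python
import numpy as np

def diversity_factor(profile_db):
    """diversity factor f = (e[rho])^2 / var(rho) for independent rayleigh
    paths.  with mean path powers normalized to sum to one, e[rho] = 1 and
    var(rho) = sum(p_l^2), so f = 1 / sum(p_l^2); a uniform l_p-path channel
    therefore gives f = l_p."""
    p = 10.0 ** (np.asarray(profile_db, dtype=float) / 10.0)
    p = p / p.sum()
    return 1.0 / np.sum(p ** 2)

# illustrative profiles only (not the standardized tu / ra / ht tap tables)
for name, prof in {"uniform, 4 paths": [0.0, 0.0, 0.0, 0.0],
                   "two strong + two weak": [0.0, -3.0, -9.0, -12.0]}.items():
    f = diversity_factor(prof)
    print(f"{name}: f = {f:.2f} -> nearest uniform channel has l_p = {round(f)}")
```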
to approximate user capacity under both finite dispersion and finite terminal power , we incorporate the approximations of sections [ pmax_sec ] and [ fading_f_large ] .we begin by first studying the uniform delay profile and then develop an approximation for a general non - uniform environment .+ + * estimated user capacity in a uniform channel : * as in section [ pmax_sec ] , we seek the largest such that the probability of outage is or lower . in calculating outage , we use the expressions in ( [ totaloutage ] ) and ( [ outage_eq ] ) but replace and with and , respectively , so that $$p'_m = \mbox{pr}\big[\,\mbox{e}\{s'_m\}/\rho\,\tilde{t}'_m \leq f\,\big] , \qquad p'_\mu = \mbox{pr}\big[\,\mbox{e}\{s'_\mu\}/\rho\,\tilde{t}'_\mu \leq f\,\big] , \label{pmmu_new}$$ where , , and is a random variable with density given in ( [ rho_density ] ) .the steps used to compute and are identical to the steps outlined in section [ pmax_sec ] for the calculations of and .the difference here is that the densities of and depend on the densities of and , ( [ im_imu ] ) , which account for variable fading , whereas and were obtained using the cross - tier interference terms in infinitely - dispersive channels .the densities of and depend on the densities of the individual terms in their sums .the terms in each sum are i.i.d . random variates , denoted here by and .+ + for a uniform channel with paths , we note that , where , and ( ) has probability density given in ( [ rho_density ] ) .thus , the density of ( ) can be computed as where the probability density of is given in and we obtain from its cdf , which can be computed as $$\mbox{p}[\rho_1/\rho_2 \leq x] = \mbox{p}[\rho_1 \leq \rho_2 x] = 1-\frac{1}{(l_p-1)!}\sum_{i=0}^{l_p-1}\frac{x^i}{i!}\cdot\frac{(l_p-1+i)!}{(x+1)^{l_p+i}} .$$ with the density of and at hand , we use the analysis in to compute the densities of and for a given value of .we then use the densities of these cross - tier interferences to compute the densities of and , which in turn give us the mean values needed in ( [ pmmu_new ] ) . the outage probability for this capacity is then determined using ( [ totaloutage ] ) and ( [ outage_eq ] ) with the substitutions for and indicated in ( [ pmmu_new ] ) .finally , we can approximate the maximum number of users supported for a desired outage level ( ) for the uniform multipath channel .+ + * estimated user capacity in a non - uniform channel : * given a non - uniform channel , we propose an approximation to user capacity for a given value of that uses the above technique for the uniform channel . specifically , we use the results of section [ fading_f_large ] to claim that a system in a non - uniform channel with diversity factor supports roughly the same number of users as the same system in a uniform channel with paths .thus , for a given non - uniform delay profile , we first compute from ( [ df_definition ] ) .we then set as the integer closest to and use the analysis above to estimate the total user capacity with outage in a uniform channel with paths .this value approximates user capacity in the given non - uniform channel .
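as a quick sanity check on the closed - form cdf of the path - gain ratio given above , the sketch below draws each path - gain sum as a gamma variate with shape l_p and scale 1/l_p ( the distribution of a sum of l_p i.i.d . exponential path gains in the uniform channel ) and compares the empirical cdf of the ratio with the closed form ; the sample size and test points are arbitrary .

```python
import numpy as np
from math import factorial

def ratio_cdf(x, lp):
    """closed-form cdf of rho_1 / rho_2 for two independent lp-path uniform
    channels, as reconstructed above."""
    s = sum(x ** i / factorial(i) * factorial(lp - 1 + i) / (x + 1.0) ** (lp + i)
            for i in range(lp))
    return 1.0 - s / factorial(lp - 1)

rng = np.random.default_rng(3)
lp, n = 4, 500_000
rho1 = rng.gamma(shape=lp, scale=1.0 / lp, size=n)   # sum of lp i.i.d. exponential path gains
rho2 = rng.gamma(shape=lp, scale=1.0 / lp, size=n)
for x in (0.5, 1.0, 2.0):
    print(f"x = {x}: closed form {ratio_cdf(x, lp):.4f}, "
          f"monte carlo {np.mean(rho1 / rho2 <= x):.4f}")
```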
we use the two - cell system described in figure [ fxy ] and table 1 to obtain simulation results for finitely - dispersive channels . for a given configuration of users , i.e. , a fixed set of user locations and shadow fadings , we generate 1000 values of and for each user ( based on the given channel delay profile ) and record the instances of outage .we then calculate this outage measurement over 999 other random user locations and shadow fadings ( assuming a fixed value of ) .the total instances of outage over all these trials determine the probability of outage for .+ + using this simulation method , we first calculate the user capacity with 5% outage probability when there are no transmit power constraints ( ) . ( for large , 5% outage is the same as 5% infeasibility . ) this is done for the uniform channel and the ra , ht , and tu environments . for the uniform channel , we determine user capacity as a function of ; figure [ nvsl_fading ] shows results for both the simulation and approximation methods .the approximation curve follows the simulation curve closely , especially for smaller values of .figure [ nvsl_fading ] also shows the infinite- capacity for the ra , ht , and tu environments ( from simulation ) .we show each of these capacities at a value of equal to its diversity factor .the diversity factors for the ra , ht , and tu environments are 1.6 , 3.3 , and 4.0 , respectively .we see that , using the diversity factor , the capacity of a non - uniform channel can be mapped to the computed curve for the uniform channel , yielding a simple and reliable analytical approximation to user capacity . for channels with , the total user capacity appears to be within 5% of that for an infinitely dispersive channel .+ + we also obtained user capacity ( with 5% outage probability ) as a function of for the 2-path and 4-path uniform channels .this was done via both simulation and analytical approximation , and the results are presented in figure [ nvsf_fading ] .we observe , first , that the approximation matches the simulation results for the 2-path and 4-path channels in the region of interest , i.e. , for .overall , the approximations improve as increases .furthermore , the user capacities noticeably decrease for , where for all three channels studied here , a result which is consistent with the infinitely - dispersive case ( figure [ pwr_out1 ] ) .+ + in figure [ nvsf_fading_2 ] , we plot user capacity versus using the approximation method of section [ fading_f_small ] for and 4 and simulation results for the ra , ht , and tu environments .the ra results closely match those for ; the ht results closely match those for ; and the tu results closely match those for .thus , the diversity factor method of section [ fading_f_small ] allows us to reliably estimate user capacity versus for any realistic delay profile .+ + finally , we note that as in figure [ nvsf_fading ] , the critical value is around 1 for all cases in figure [ nvsf_fading_2 ] .thus , we find that , regardless of the delay profile , . for the remainder of the paper , we assume practical values of and such that . in computing the approximate user capacity in an infinitely dispersive channel , we use a result in which states that maximal user capacity results when each cell in a multicell system supports an equal number of users .let us denote as the ( equal ) number of users supported in each cell and as the total number of users supported in a system with microcells , macrocells , and an infinitely dispersive channel ..
] we assume initially that all microcells are embedded within one of the macrocells ( macrocell 1 ) and that the remaining macrocells surround this macrocell .we first write inequalities representing the minimum sinr requirements at the bases .the received sinr s are determined using average interference terms .for example , the sinr requirement at macrocell base station 1 ( which contains embedded microcells ) is where , is the required received power at macrocell , ( as per the optimal equal distribution of users ) , and the same - tier interference between macrocells is represented by its average value , i.e. , , which we assume to be the same for all macrocells . here is the interference into a macrocell caused by a user communicating with a neighboring macrocell , averaged over all possible user locations in and over shadow fading . in our calculations , we simplify the sinr requirement in ( [ multi_sinr_1 ] ) by assuming that , i.e. , interference due to two users in neighboring macrocells is roughly equal to the interference at a macrocell due to an embedded microcell user .although macrocells transmit at higher powers than microcells , they are further apart from each other than is a macrocell base from an embedded microcell user .this approximation assumes that the larger distance between macrocells balances the impact of higher transmit powers .+ + next , we determine the sinr requirement at macrocell ( ) .to do so , we assume ( as above ) that .we further assume that the interference from neighboring macrocells is much larger than the interference caused by transmissions in a few isolated microcells in a nearby macrocell , i.e. , , where is the interference at any macrocell base due to a microcell user embedded in a neighboring macrocell averaged over all possible user locations and shadow fadings . under these two assumptions ,the sinr requirement at macrocell ( ) is approximately finally , we calculate the sinr requirements at the microcell bases . here, we assume first that , where represents the interference into any microcell base due to a user in a neighboring macrocell averaged over user location and shadow fading . in other words, we assume the interference at a microcell from neighboring macrocells is comparable to interference caused by transmission to the umbrella macrocell . note that this is a somewhat pessimistic assumption as will typically be less than .this pessimistic approach is balanced by assuming further that the microcell - to - microcell interference is negligible . here is the average interference into any microcell base due to a user communicating with any other microcell .thus , the required sinr equation at microcell is these inequalities can be used to show that and for all and if and only if a benefit of this simplified capacity condition is that it allows for capacity calculation using and alone . as a result , it is no more complex to determine than is , the user capacity when and , section 2 .further , as the product is robust to variations in propagation parameters and user distributions , so is the approximation in ( [ eq : cap_multi ] ) . despite its simplified form , this approximation yields a fairly accurate estimate to the attainable user capacity .+ + a simple extension of the capacity expression in ( [ eq : cap_multi ] ) can now be used to find the user capacity in a _general _ multicell system , i.e. 
, a system in which the microcells can be embedded anywhere within the coverage areas of the macrocells .we first note that , in the general system , each macrocell contains _ on average _ microcells .next , we use ( [ eq : cap_multi ] ) to determine the user capacity for a multicell system in which there are macrocells but only the center macrocell contains microcells . in this case, each cell contains , users , where next , we assume that this value of hardly changes when some other macrocell contains ( on average ) embedded microcells .thus , we can finally approximate the total attainable capacity for the general multicell system as times the result in ( [ eq : cap_multi_2 ] ) , i.e. , the goal here is to develop a calculation for the multicell user capacity in _finitely dispersive channels _ which is as straightforward to calculate as the capacity expression in ( [ eq : cap_multi_3 ] ) .although there are several approaches to this problem , we propose an extremely simple and highly accurate approximation method .first , we compute the diversity factor for the given multipath channel .next , we make the assumption that systems with the same diversity factor support equivalent numbers of users and we seek to find , the user capacity in a system with microcells , macrocells , and a multipath channel with diversity factor .we then use the results of section [ varfading_sec ] to compute the total user capacity for a two - cell system , i.e , we also compute the total user capacity for a two - cell system with infinite dispersion , i.e. , we use these two capacities to determine the capacity percentage loss due to dispersion , , for the two - cell system .this loss is given as finally , we claim and demonstrate through extensive numerical results that the total attainable user capacity in a multicell system with microcells , macrocells , and finite dispersion is simply times the total attainable user capacity of a system with microcells , macrocells , and infinite dispersion , i.e. , we begin with a large square region having side and a macrocell base at its center . a fraction of the total system users are uniformly distributed over this region .this larger square is divided into squares with side , as shown in figure [ singlesquares ] for .the smaller squares represent potential high - density regions . in each simulation trial, we randomly select high density regions ( excluding the center square which contains the macrocell base ) and place a microcell base at the center .finally , we assume that the microcells have identical values for , , and .+ + our simulations assume , meaning 24 possible candidate high density regions and the parameters in table 1 . for a given value of , the simulation randomly selects of the 24 high density regions .a portion of the total user population is uniformly - distributed over the large square region and the remaining users are uniformly distributed over the selected high density regions .the average fraction of the total user population placed in each region is . 
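before turning to the simulation results , a minimal numerical sketch of the dispersion - loss scaling described earlier in this section : the fractional capacity loss due to finite dispersion is measured once on the simple two - cell system and then applied to the infinite - dispersion multicell estimate . the numbers below are illustrative placeholders , not values from the paper .

```python
def multicell_capacity_with_dispersion(n2_inf, n2_fading, n_multi_inf):
    """apply the two-cell dispersion loss l = 1 - n2_fading / n2_inf to the
    infinite-dispersion multicell capacity, as described in the text."""
    loss = 1.0 - n2_fading / n2_inf
    return (1.0 - loss) * n_multi_inf

# purely illustrative numbers: a two-cell system dropping from 60 to 48 users
# under finite dispersion (20% loss) scales a 400-user multicell estimate to 320.
print(multicell_capacity_with_dispersion(60.0, 48.0, 400.0))
```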
as before , this placement was done to ensure that maximal user capacity occurs with high probability .the simulation then determines the total number of users supported with 5% outage .this is done for 23 other random selections of high density regions , and the average over these selections is computed as a function of .+ + we performed the above simulations for uniform channels with and , and for the infinitely dispersive channel .the results are in figure [ ul_multicell_1a ] .in addition , we show the attainable user capacities predicted by ( [ eq : cap_multi_3 ] ) and ( [ multi_fading_capacity ] ) .the simulation points match the corresponding approximations very tightly up to about , corresponding to a microcell _ fill factor _ ( the fraction of macrocellular area served by microcells ) of about one half . in other words , our approximation technique works reliably in the domain of _ hotspot _ microcells .we simulated a multiple - macrocell / multiple - microcell system by extending the system studied above .specifically ( figure [ multimulti_fig ] ) , there are macrocells and candidate high density regions ( locations for microcell bases ) .we randomly select high density regions from this pool .we again accrue the average total capacity and its one - standard - deviation spread for the two extreme cases of dispersion , i.e. , infinite dispersion and .the results for , are in figure [ multi_multi_curve ] . also shown are the capacities predicted by ( [ eq : cap_multi_3 ] ) and ( [ multi_fading_capacity ] ) for and .the approximations closely match simulation points up to at least , which translates to roughly eight microcells per macrocell ( fill factor of about one third ) .thus , the approximation method is reliable in the domain of hotspot microcells .we first examined the effect of transmit power constraints and cell size on the uplink performance of a two - cell two - tier cdma system .we then investigated the effect of finite channel dispersion ( which causes variable fading of the output signal power ) on this capacity .analysis was presented to approximate the uplink capacity of _ any _ channel delay profile using the uniform multipath channel , both with and without terminal power constraints .finally , we presented an approximation to the uplink user capacity in larger multicell systems with finite dispersion and demonstrated its accuracy using extensive simulation .+ + while a particular set of system parameters was used in obtaining our numerical results , the analytical approximation methods , and their confirmation via simulations , are quite general .what we have shown here , first , is that two - tier uplink user capacity can be reliably estimated in a two - cell system for arbitrary pole capacity , channel delay profile , shadow fading variances , transmit power limit , and cell size .moreover , under realistic conditions on transmit power and cell size ( ) , the method can be extended to the case of multiple macrocells with multiple microcells .thus , we have developed simple , general and accurate approximation methods for treating a long - standing problem in two - tier cdma systems .we thank the editor , dr .halim yanikomeroglu , for his helpful suggestions on expanding the scope of our study . 1 j. shapira , `` microcell engineering in cdma cellular networks , '' , vol .4 , pp . 817825 ; nov . 1994 . j.s .wu , et al ., `` performance study for a microcell hot spot embedded in cdma macrocell systems , '' , vol .48 , no . 1 ,4759 ; jan . 1999 .j. j. gaytn and d.
mu noz rodrguez , `` analysis of capacity gain and ber performance for cdma systems with desensitized embedded microcells , '' in _ proc . of international conf . on universal personal commun .98 _ , 1998 ,vol . 2 , pp .887891 ; oct .s. kishore , et al ., `` uplink capacity in a cdma macrocell with a hotspot microcell : exact and approximate analyses , '' , vol .2 , pp . 364374 ; mar .s. kishore , et al ., `` uplink user capacity of a multi - cell cdma system with hotspot microcells , '' in _ proc . of vehic .s-02 _ , vol .2 , pp . 992996 ; may 2002 .s. kishore , et al ., `` uplink user capacity of multi - cell cdma system with hotspot microcells , '' submitted to _ieee trans . on wireless comm .2003 , http://www.eecs.lehigh.edu/126 skishore / research / inprogress.htm. s. kishore , et al ., `` uplink user capacity in a cdma macrocell with a hotspot microcell : effects of finite power constraints and channel dispersion , '' in _ proc .of ieee globecom _15581562 , dec . 2003 .s. kishore , et al ., `` downlink user capacity in a cdma macrocell with a hotspot microcell , '' in _ proc . of ieee globecom _ ,15731577 , dec . 2003 .k. s. gilhousen , et al ., `` on the capacity of a cellular cdma system , '' , vol .2 , pp . 303312 ; may 1991 .v. erceg , et al . , `` an empirically based path loss model for wireless channels in suburban environments , '' , vol .17 , no . 7 , pp . 12051211 ; july 1999 . t. s. rappaport , , prentice hall , 1996 ; chapter 3 .s. kishore , , ph.d .thesis , department of electrical engineering , princeton university ; jan .`` deployment aspects , '' 3gpp specifications , tr 25.943 v4.0.0 , ( 2001 - 2006 ) , 2001 .a. awoniyi , et al ., `` characterizing the orthogonality factor in wcdma downlink , '' , vol .4 , pp . 611615 ; july 2003 .shalinee kishore received the b.s . anddegrees in electrical engineering from rutgers university in 1996 and 1999 , respectively , and the m.a . and ph.d .degrees in electrical engineering from princeton university in 2000 and 2003 , respectively .kishore is an assistant professor in the department of electrical and computer engineering at lehigh university . from 1994 to 2002, she has held numerous internships and consulting positions at at&t , bell labs , and at&t labs - research .she is the recipient of the national science foundation career award , the p.c .rossin assistant professorship , and the at&t labs fellowship award .her research interests are in the areas of communication theory , networks , and signal processing , with emphasis on wireless systems . from 1958 to 1970 , he was with iit - research institute , chicago , il , working on radio - frequency interference and anti - clutter airborne radar .he joined bell laboratories , holmdel , nj , in 1970 . over a 32-year at&t career, he conducted research in digital satellites , point - to - point digital radio , lightwave transmission techniques , and wireless communications . for 21 years during that period ( 1979 - 2000 ) , he led a research department renowned for its contributions in these fields .his research interests in wireless communications have included measurement - based channel modeling , microcell system design and analysis , diversity and equalization techniques , and system performance analysis and optimization . since april 2002 he has been a research professor at rutgers winlab , piscataway , nj , working in the areas of ultra - wideband systems , sensor networks , relay networks and channel modeling . 
dr .greenstein is an at&t fellow and a member - at - large of the ieee communications society board of governors .he has been a guest editor , senior editor and editorial board member for numerous publications .h. vincent poor ( s72,m77,sm82,f97 ) received the ph.d .degree in eecs from princeton university in 1977 . from 1977 until 1990 , he was on the faculty of the university of illinois at urbana - champaign . since 1990 he has been on the faculty at princeton , where is the george van ness lothrop professor in engineering .poor s research interests are in the area of statistical signal processing and its applications in wireless networks and related fields . among his publications in these areasis the recent book _ wireless networks : multiuser detection in cross - layer design _ ( springer : new york , ny , 2005 ) .poor is a member of the national academy of engineering , and is a fellow of the institute of mathematical statistics , the optical society of america , and other organizations . in 1990 , he served as president of the ieee information theory society , and in 1991 - 92 he was a member of the ieee board of directors .he is currently serving as the editor - in - chief of the _ ieee transactions on information theory_. recent recognition of his work includes the joint paper award of the ieee communications and information theory societies ( 2001 ) , the nsf director s award for distinguished teaching scholars ( 2002 ) , a guggenheim fellowship ( 2002 - 2003 ) , and the ieee education medal ( 2005 ) .stuart schwartz received the b.s . anddegrees from m.i.t . in 1961 and the ph.d . from the university of michigan in 1966 . while at m.i.t .he was associated with the naval supersonic laboratory and the instrumentation laboratory ( now the draper laboratories ) . during the year 1961 - 62 he was at the jet propulsion laboratory in pasadena ,california , working on problems in orbit estimation and telemetry . during the academic year 1980 - 81 , he was a member of the technical staff at the radio research laboratory , bell telephone laboratories , crawford hill , nj , working in the area of mobile telephony .he is currently a professor of electrical engineering at princeton university .he was chair of the department during the period 1985 - 1994 , and served as associate dean for the school of engineering during the period july 1977-june 1980 . during the academic year 1972 - 73 , he was a john s. guggenheim fellow and visiting associate professor at the department of electrical engineering , technion , haifa , israel .he has also held visiting academic appointments at dartmouth , university of california , berkeley , and the image sciences laboratory , eth , zurich .his principal research interests are in statistical communication theory , signal and image processing
|
this paper examines the uplink user capacity in a two - tier code division multiple access ( cdma ) system with hotspot microcells when user terminal power is limited and the wireless channel is _ finitely - dispersive_. a finitely - dispersive channel causes variable fading of the signal power at the output of the rake receiver . first , a two - cell system composed of one macrocell and one embedded microcell is studied and analytical methods are developed to estimate the user capacity as a function of a dimensionless parameter that depends on the transmit power constraint and cell radius . next , novel analytical methods are developed to study the effect of variable fading , both with and without transmit power constraints . finally , the analytical methods are extended to estimate uplink user capacity for _ multicell _ cdma systems , composed of multiple macrocells and multiple embedded microcells . in all cases , the analysis - based estimates are compared with and confirmed by simulation results . cellular systems , code division multiple access , microcells .
|
a wban is a wireless network of low power and low cost sensors that may be embedded inside or attached to the human body . its sensors are used in various applications such as personal health monitoring , ubiquitous healthcare , sports and military . it mainly monitors and captures vital signs such as glucose percentage in blood , heart beats and respiration , and/or can record electrocardiography ( ecg ) . due to the dynamic and mobile nature of wbans , in a tdma access scheme , interference can happen because coordination among different coexisting wbans is infeasible ( inter - wban interference ) . in a csma / ca access scheme , it can happen when multiple nodes of a particular wban can not transmit concurrently without colliding ( intra - wban interference ) . on the one side , radio co - channel interference is of paramount importance : it increases energy consumption and it also decreases the efficiency of reliable communication and the throughput . on the other side , the stringent factor is energy , which requires keeping the power consumption as low as possible at all times . recently , the ieee 802.15.6 working group has defined new proposals for wbans and also adopted the cooperative two - hop scheme in the standard . thus , adopting a relay transmission scheme is a very promising solution for co - channel interference mitigation . firstly , the co - channel interference problem motivates the stringent requirements of interference mitigation and/or avoidance schemes and protocols for reliable and energy efficient operation of both isolated and coexisting wbans . secondly , due to the constrained nature of wbans ( in terms of energy , size and cost ) , advanced antenna techniques can not be used for interference mitigation , and the power control mechanisms used in cellular networks are not applicable to wbans .
however , in this work we focus on problems related to co - channel interference and power savings of an isolated wban .thus , novel methods and schemes are required for intra - wban interference mitigation and consequently for energy savings .the rest of the paper is organized as follows .section ii reviews the works that address problems related to interference mitigation in wbans .section iii explains the flexible tdma scheme .section iv presents the system model and introduces the proposed cftim scheme .section v shows the simulation and comparison results .the conclusions are drawn in section vi .the existing studies show that multihop schemes have a lower power consumption in comparison to the one - hop scheme ; using relays reduces the wban interference and consequently the power consumption .the authors of propose a single - relay cooperative scheme where the best relay is selected in a distributed fashion .also , the authors of propose a prediction - based dynamic relay transmission scheme through which the problems of `` when to relay '' and `` who to relay '' are decided in an optimal way .the interference problem among multiple co - located wbans is investigated in , where the authors show that cooperative two - relay communication with opportunistic relaying significantly mitigates wban interference .the authors of propose an analytical framework to optimize the size of the relay - zone around each source node .the authors of investigate the problem of coexistence of multiple non - coordinated wbans ; this study provides better co - channel interference mitigation .more recent works conducted in propose a scheme for joint two - hop relay - assisted cooperative communication integrated with transmit power control ; this scheme can reduce co - channel interference and extend the lifetime . on the other hand , most of the recent works show that the tdma scheme is an attractive solution to avoid interference inside a small - sized wban .furthermore , the authors of enable two or three coexisting wbans to agree on a common tdma schedule to reduce the interference .the work in adopts a tdma polling - based scheme for traffic coordination within a wban and a carrier sensing ( cs ) mechanism to deal with inter - wban interference .some efforts develop a model to analyze interference among non - overlapping nearby wbans by using a geometrical probability approach .other research focuses on the performance at the coordinator , which calculates the signal - to - interference - plus - noise ratio ( sinr ) periodically , enabling it to command its nodes to select an appropriate interference mitigation scheme .other studies analyze the performance of a reference wban ; they evaluate the performance in terms of bit error rate , throughput and lifetime , which have been improved by the adoption of an optimized time hopping code assignment strategy .the works in consider a wban where the coordinator periodically queries sensors to transmit data ; the network adopts csma / ca and the nodes adopt link adaptation to select the modulation scheme according to the experienced channel quality .
whereas , the research work of solves the problem of inter - wban scheduling and interference by the adoption of a qos based mac preemptive priority scheduling approach .whilst , the researchers of propose a distributed interference detection and mitigation scheme through using adaptive channel hopping .the research work of proposes a dynamic resource allocation scheme for interference avoidance among multiple coexisting wbans through using orthogonal sub - channels for high interfering nodes .most of the recent works address problems related to interference mitigation in an inter - wban environment ; they do not address problems related to interference and energy savings of a single isolated wban . in this paper , we propose a distributed relay - assisted algorithm for interference mitigation within a single wban .the proposed scheme enables non interfering sources ( transmitters ) to use csma / ca for transmitting to relays , whilst high interfering sources and best relays transmit to the coordinator directly through stable channels in their assigned time slots . in the traditional tdma scheme , a superframe is usually split into a number of equal intervals , each called a timeslot .each timeslot is assigned to a node to transmit its message to c. a properly designed tdma schedule ensures collision - free communication .however , in wbans , nodes sleep and wake up dynamically and hence the number of transmitting nodes is unpredictable .therefore , a dynamic and flexible way of scheduling different transmissions is required to efficiently avoid interference , save energy and decrease the time delay . unlike the traditional tdma scheme , a ftdma frame is divided into two parts .the first part is used by c for broadcasting beacons to the nodes and the second part is used by the nodes for transmitting their messages to c. this latter part is further split into a csma / ca part and a tdma part whose size changes dynamically depending on the interference level ( see fig . [ frame ] ) . in the c part of a ftdma frame , c broadcasts the information based on the messages received from its nodes during the node part of the previous frame . along with this information , it allocates a slot in the subsequent node part for each active node .a node is considered active if c has received at least one beacon from it during the previous three frames .if there are m active nodes currently connected to c , c allocates p ( where p > m ) slots in the subsequent node part .the first m slots are allocated to the currently active nodes and the remaining p - m slots are reserved for newly incoming nodes ( nodes that have sensed data but do not have assigned timeslots in the current superframe ) .c predicts p for the next frame based on the interference level history of the last k received frames , where k can be any integer .the node part is composed of two main parts , a csma / ca part denoted by cap and a tdma part , as shown in fig . [ frame ] . in the cap , non interfering sources transmit their messages to relays .whilst , in the tdma part , a node ( which can be a high interfering source or a best relay sensor ) first listens to the beacon from c. if it finds its i d in the slot allocation list of the beacon , it transmits its message in the corresponding timeslot ( one of the m slots ) .however , if it does not find its i d in the slot allocation list , it synchronizes its clock with that of c.
it then randomly selects one of the p - m empty slots and transmits its beacon message in that slot .since these slots are free and not yet assigned , there is a chance of collision with the transmission of any other node willing to transmit in the same slot .if the beacon is successfully sent to c , i.e. there is no collision , c will allocate a slot for the node in the next frame .otherwise , if a collision happens , c will not successfully receive the beacon and will not include the node s i d in the next frame . in such cases , a node keeps trying different empty slots randomly in every frame until a timeslot is assigned to it . in our proposed ftdma , c manages the incoming messages from the nodes according to the interference level by varying the number of empty slots only ( p - m ) .therefore , when the interference is low , few empty slots can handle the newly incoming nodes .if c finds that many empty slots remain unused , it then lowers the value of p. furthermore , it is guaranteed that there will be no collision among the messages sent by the nodes with assigned slots , since c carefully assigns one slot to one node only .but there may be collisions in the empty slots reserved for the newly incoming nodes : when the interference increases , more than one node may select the same timeslot , causing a collision . hence , whenever collisions occur , c increases the number of empty slots simply by increasing the value of p.
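a minimal sketch of this slot - allocation logic follows . the coordinator keeps one reserved slot per active node plus a pool of free slots and adapts p by one slot per frame ; the exact adaptation rule ( the text only states that p is predicted from the interference history of the last k frames ) and all numeric values are assumptions made for illustration .

```python
import random

class Coordinator:
    """sketch of the flexible-tdma beacon logic: one reserved slot per active
    node plus p - m free slots; p grows when free-slot collisions are seen and
    shrinks when free slots stay unused.  the +/- 1 adaptation step and the
    minimum pool size are assumptions."""

    def __init__(self, p=8, min_free=2):
        self.active = []          # node ids holding a reserved slot
        self.p = p
        self.min_free = min_free

    def beacon(self):
        free = max(self.p - len(self.active), self.min_free)
        return {"nodeslotlist": list(self.active), "n_free": free}

    def end_of_frame(self, new_nodes_heard, collisions_seen, unused_free):
        self.active.extend(new_nodes_heard)          # successful newcomers get a slot
        if collisions_seen:
            self.p += 1                              # more room for newcomers next frame
        elif unused_free > 1:
            self.p = max(self.p - 1, len(self.active) + self.min_free)

c = Coordinator()
b = c.beacon()
picks = [random.randrange(b["n_free"]) for _ in range(3)]     # newcomers pick free slots
heard = ["n1", "n2", "n3"] if len(set(picks)) == 3 else []    # heard only if no collision
c.end_of_frame(heard, len(set(picks)) < 3, b["n_free"] - len(set(picks)))
print(c.beacon())
```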
the main objective behind our proposal is to minimize interference , extend the wban energy lifetime and maximize the throughput in an intra - wban . we define the following :
* non interfering source : a csma / ca source that has sensed data and whose measured * sinr * , i.e. , the channel is clear to transmit .
* high interfering source : a csma / ca source that has sensed data and whose measured * sinr * , i.e. , the channel is unclear to transmit .
we adopt the following assumptions :
* fixed topology of wban nodes ( the number of interfering nodes varies from one frame to another depending on the interference level )
* topology with combined one- and two - hop communication schemes , respectively ( high interfering sources - > c ) and ( non interfering sources - > relays - > c )
* a combination of csma / ca ( non interfering sources - > relays ) and flexible tdma ( ( high interfering sources - > c ) and ( best relays - > c ) ) is adopted as the medium access scheme
* any node transmits one data packet in each superframe
* the number of available stable channels is always larger than the number of nodes demanding those channels
* one and only one best relay retransmits the source s message to c , i.e. a source can have only one best relay .
after the last beacon frame is received from c , all the sources of the wban start a csma / ca contention to access a shared base channel . during this contention , two sets of sources are composed : non interfering sources , denoted by set * ts * , and high interfering sources , denoted by set * is * . each source whose * sinr * is included in ts . this implies that any member of ts wins the contention to access the channel and transmits its message to its best relay during the current frame . then , when the flexible tdma schedule commences , the best relay transmits the message it has received from a member of ts to c. more clearly , a best relay checks the last beacon already received . if it finds its i d in the node slot list ( nodeslotlist of m slots ) of the beacon , it transmits its message in the corresponding timeslot to c. however , if it does not find its i d , it then randomly selects one slot of the p - m free slots of the free slot list ( freeslotlist of p - m slots ) and transmits its message to c in that slot .we denote the set of all is sources and all best relays by isbr .the pseudocode in algo . [ multihop ] presents the proposed cftim scheme and introduces the actions taken at the ts nodes , the is nodes and the best relays . on the other hand , high interfering sources whose * sinr * < are included in set * is * . these sources can not access the shared channel during the current frame .this is due to the interference coming from other sources accessing the channel at the same time .similarly , any member of is checks the last beacon received from c. if it finds its i d in the node slot list , it transmits its message in the corresponding timeslot directly to c through the most stable channel .however , if it does not find its i d , it then randomly selects one slot of the free slot list and transmits its message to c in that slot .if it does not succeed to transmit , it keeps trying different empty slots randomly until a timeslot is assigned to it .as a node becomes aware of its time slot in the current frame , it immediately starts scanning a fixed number of channels .based on the aforementioned assumptions , it finds a stable channel , switches to this channel and then transmits its message directly to c during its assigned time slot .after the flexible tdma schedule is over , c receives all the ids of the nodes that have successfully transmitted in the current frame ( say m ids ) . based on that knowledge , it forms a new beacon frame composed of p slots .c assigns a time slot in the next beacon frame for each node whose i d it has received .consequently , it assigns m slots for m nodes , thus forming the nodeslotlist ( fixed tdma ) part of the beacon .furthermore , it adds p - m unassigned free slots to the beacon , thus forming the freeslotlist ( flexible tdma ) part of the beacon .this addition of p - m slots is to let other unassigned nodes transmit in the next frame .the pseudocode in algo . [ cactions ] introduces c s actions and the transmission phase .
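a compact sketch of one cftim frame is given below : sources are split into ts and is according to their measured sinr on the shared channel , ts sources reach their best relay during the cap , and is sources together with the best relays transmit to c in tdma slots on a stable channel . the sinr threshold , the channel - quality scores and the slot bookkeeping are simplified assumptions , not the paper s algo . [ multihop ] / algo . [ cactions ] pseudocode .

```python
import random

SINR_THRESHOLD_DB = 10.0       # the paper keeps this threshold symbolic; value assumed

def classify(sinr_db):
    """split contending sources into ts (channel clear) and is (high interference)."""
    ts = [n for n, s in sinr_db.items() if s >= SINR_THRESHOLD_DB]
    is_ = [n for n, s in sinr_db.items() if s < SINR_THRESHOLD_DB]
    return ts, is_

def cftim_frame(sinr_db, best_relay, nodeslot, free_slots, channel_quality):
    """one cftim frame: ts sources reach their best relay in the cap, while
    is sources and the best relays use (flexible) tdma slots on a stable
    channel.  channel 'stability' is mocked as a quality score and the
    free-slot draw ignores possible newcomer collisions."""
    ts, is_ = classify(sinr_db)
    cap = [(src, best_relay[src]) for src in ts]                 # csma/ca phase
    tdma_nodes = is_ + sorted({best_relay[s] for s in ts})       # the isbr set
    schedule = []
    for node in tdma_nodes:
        slot = nodeslot[node] if node in nodeslot else random.choice(free_slots)
        chan = max(channel_quality, key=channel_quality.get)     # most stable channel
        schedule.append((node, slot, chan))
    return cap, schedule

sinr = {"s1": 15.0, "s2": 6.0, "s3": 12.0}
relays = {"s1": "r1", "s2": "r2", "s3": "r1"}
print(cftim_frame(sinr, relays, nodeslot={"s2": 0, "r1": 1}, free_slots=[2, 3, 4],
                  channel_quality={"ch1": 0.70, "ch2": 0.95, "ch3": 0.55}))
```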
the number of time slots specified in the frame is variable from one beacon frame to another , depending on the interference level .we say that a channel is stable with a given probability if its quality is not expected to deteriorate before completing the scheduled data transmission on that channel .more specifically , assume that we are currently in slot n and we want to select a channel to be used after k time slots .then , the stability condition for the channel is : with where is the duration of the data transmission , x is the total time spent in sensing before finding the channel , and i is the current state of the channel . in the above condition , the values of interest for i are 1 and 2 , because if the channel is in state 3 , then there is no need to check its stability . in fading channels , the received signal has no constant power ; the power depends on the channel and can be described by probability models .thus , the sinr will also become a random variable , and so the maximum capacity of the channel becomes a random variable .outage probability is a metric for the channel that states , according to the variable sinr at the receiving end , what is the probability that a rate is not supported due to the variable sinr . in other words , the outage probability at a given sinr threshold is defined as the probability of the sinr value being smaller than that threshold , i.e. , $p_{out} = \mbox{pr}[\,\mbox{sinr} < \mbox{sinr}_{th}\,]$ , where the sinr is computed as in ( [ one ] ) , $\mbox{sinr} = p / ( \sum_i i_i + n )$ , with p the desired power received at the receiver , $i_i$ the interference power received from interferer i at the receiver and $n$ the noise power . in our model , we denote the probability that the total interference at time instant i is larger than at the wban by .then , we calculate this probability by the following formula : we present a simple probabilistic approach which , as we prove analytically , lowers the outage probability . as mentioned above , any sensor whose received sinr at the is lower than a threshold is added to the set of interfering sources ( is ) .we denote by the received sinr at a sensor in the wban .if , an orthogonal channel is assigned to that sensor with a certain probability which equals .thus , at time instant i , we can calculate the average interference level using the proposed probabilistic approach as follows : based on the probabilistic approach , any sensor with probability is assigned an orthogonal channel .* lemma 1 : * we denote by and the outage probability of the probabilistic approach and the outage probability of the original scheme , respectively .then , < . * proof : * based on the outage probability definition , we have : where the last line of the derivation is based on the fact that the cdf is an increasing function of its argument .therefore , the lemma is proved .
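the outage argument above is easy to check numerically . the sketch below estimates pr [ sinr < threshold ] by monte carlo , with sinr = p / ( sum of interference powers + noise ) , and shows that moving each interferer to an orthogonal channel with some probability lowers the outage , in line with lemma 1 . all power levels , the threshold and the reassignment probability are assumptions chosen only for illustration .

```python
import numpy as np

def outage_prob(p_rx_dbm, interferers_dbm, noise_dbm, thr_db,
                reassign_prob=0.0, n_trials=200_000, seed=0):
    """monte carlo estimate of pr[sinr < threshold], with sinr = p / (sum_i i_i + n).
    with probability reassign_prob each interferer is moved to an orthogonal
    channel and dropped from the sum -- a toy version of the probabilistic
    channel-assignment argument above.  all power levels are assumptions."""
    rng = np.random.default_rng(seed)
    p = 10 ** (p_rx_dbm / 10.0) * rng.exponential(1.0, n_trials)        # rayleigh-faded desired power
    i_mean = 10 ** (np.asarray(interferers_dbm, dtype=float) / 10.0)
    fade = rng.exponential(1.0, size=(n_trials, i_mean.size))
    on_channel = rng.random((n_trials, i_mean.size)) >= reassign_prob   # interferers not reassigned
    total_i = (fade * on_channel) @ i_mean
    sinr_db = 10.0 * np.log10(p / (total_i + 10 ** (noise_dbm / 10.0)))
    return float(np.mean(sinr_db < thr_db))

interferers = [-68.0, -72.0, -75.0]                                     # dbm, illustrative
print("original scheme   :", outage_prob(-55.0, interferers, -90.0, 10.0))
print("with reassignment :", outage_prob(-55.0, interferers, -90.0, 10.0, reassign_prob=0.5))
```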
we assume that n=12 sources , r=4 relays and a coordinator c form a wban .we consider that a sensor node scans different channels and eventually uses one channel at a time .we also consider that all sensor nodes excluding the coordinator have similar transmit power . in the following subsections , we show that our proposed cftim scheme significantly mitigates intra - wban interference ; the simulation results also show that the wban energy savings and the throughput are enhanced .the following * table * [ tab : title ] summarizes the simulation parameters .the wban average sinr , defined as in * eq . [ avgsinr ] * , of the proposed cftim compared to that of the opportunistic relaying ( or ) scheme is shown in fig . [ sinrcftim ] . as can be clearly seen from the figure , both curves increase with time .also , the curve of the proposed cftim scheme is always higher than that of the or scheme , which implies that cftim mitigates wban interference better than the or scheme . starting from an average sinr of 10db , after 25 minutes , cftim and or reach 24db and 17db respectively ( after 45 minutes , 34db and 23db respectively ) .therefore , the improvement in sinr from the or scheme to the cftim scheme is around 11db , which is quite good in wbans . in the or scheme , interference can happen due to collisions at the relays or when some sources access the shared channel at the same time ; consequently , few sources transmit and the rest defer their transmissions ( interfering sources ) . with the cftim scheme , these interfering nodes are dynamically assigned time slots and stable channels to avoid interference with other nodes , and in this way the average sinr is improved . in this section , the wban energy consumption of the proposed cftim scheme is thoroughly analyzed and compared with two other schemes : the first comparison wban employs the csma / ca - based or ( opportunistic relaying ) communication scheme and the second employs the traditional single - link tdma scheme .the wban energy consumption versus time for the wbans employing the different schemes is shown in fig . [ totalenergy ] . as can be seen in this figure , the highest wban energy consumption is obtained in the case of the original tdma scheme .this is because the tdma scheme uses single - hop communication , which increases the energy consumption .however , a lower wban energy consumption is obtained with or than with the original tdma scheme .this improvement in energy consumption is due to the fact that using two - hop communication reduces the energy consumption of the whole wban .furthermore , it is clear from the figure that the proposed cftim scheme achieves the lowest energy consumption among the schemes .firstly , using the two - hop scheme saves the wban energy better than the tdma scheme .secondly , avoiding interference by using flexible tdma at the interfering sources and the relays reduces the collisions and retransmissions and thus further decreases the energy consumption .we define the throughput as the number of successful messages delivered per unit time at a node .the throughput versus time is shown in fig . [ fig : throughput ] .this figure compares the throughput results of our proposed cftim with those of the or scheme received at the coordinator . as can be clearly seen from fig . [ fig : throughput ] , the throughput achieved by our cftim scheme is always higher than that of the or scheme .this improvement can be explained by the fact that interference is significantly avoided in the cftim scheme through using flexible tdma combined with the stable channel mechanism .it is evident that the interference that some nodes would have experienced in the or scheme is avoided with the cftim scheme , which enhances the sinr and in turn improves the throughput . in this work , a distributed combined csma / ca with flexible tdma scheme , namely cftim , is proposed for interference mitigation in a relay - assisted intra - wban . in our proposed method , non interfering sources use csma / ca to communicate with the relays , whilst high interfering sources together with the best relays use a flexible tdma integrated with a stable channel mechanism to communicate with c. our approach aims to minimize intra - wban interference .our proposed scheme has been evaluated in comparison with other schemes .the simulation results show that the interference and the wban energy consumption are significantly minimized and the throughput is increased .additionally , a simple theoretical analysis of the outage probability validated our proposed cftim approach to interference mitigation . yang , wen - bin and sayrafian - pour , kamran , interference mitigation using adaptive schemes in body area networks , international journal of wireless information networks , pages 193 - 200 , 2012 latré , benoît and braem , bart and moerman , ingrid and blondia , chris and demeester , piet , a survey on wireless body area networks , journal wirel .pages 1 - 18 , 2011 chen , g. chen , w. and shen , s.
|
this work addresses problems related to interference mitigation in a single wireless body area network (wban). in this paper, we propose a distributed combined carrier sense multiple access with collision avoidance (csma/ca) with flexible time division multiple access (tdma) scheme for interference mitigation in a relay-assisted intra-wban, namely cftim. in the cftim scheme, non-interfering sources (transmitters) use csma/ca to communicate with the relays, whilst highly interfering sources and the best relays use flexible tdma to communicate with the coordinator (c) over stable channels. simulation results of the proposed scheme are compared with those of other schemes, and the cftim scheme outperforms them in all cases. these results show that the proposed scheme mitigates interference, extends the wban energy lifetime, and improves the throughput. to further reduce the interference level, we analytically show that the outage probability can be effectively reduced to a minimum.
|
computer simulation is widely regarded as complementary to theory and experiment .the standard approach is to start from one or more basic equations of physics and to employ a numerical algorithm to solve these equations .this approach has been highly successful for a wide variety of problems in science and engineering . however , there are a number of physics problems , very fundamental ones , for which this approach fails , simply because there are no basic equations to start from .indeed , as is well - known from the early days in the development of quantum theory , quantum theory has nothing to say about individual events . reconciling the mathematical formalism that does not describe individual events with the experimental fact that each observation yields a definite outcomeis referred to as the quantum measurement paradox and is the most fundamental problem in the foundation of quantum theory . in view of the quantum measurement paradox ,it is unlikely that we can find algorithms that simulate the experimental observation of individual events within the framework of quantum theory .of course , we could simply use pseudo - random numbers to generate events according to the probability distribution that is obtained by solving the time - independent schrdinger equation .however , the challenge is to find algorithms that simulate , event - by - event , the experimental observations of , for instance , interference without first solving the schrdinger equation .this paper is not about a new interpretation or an extension of quantum theory .the proof that there exist simulation algorithms that reproduce the results of quantum theory has no direct implications on the foundations of quantum theory : these algorithms describe the process of generating events on a level of detail about which quantum theory has nothing to say .the average properties of the data may be in perfect agreement with quantum theory but the algorithms that generate such data are outside of the scope of what quantum theory can describe . in a number of recent papers , we have demonstrated that locally - connected networks of processing units can simulate event - by - event , the single - photon beam splitter and mach - zehnder interferometer experiments , universal quantum computation , real einstein - podolsky - rosen - bohm ( eprb ) experiments , wheeler s delayed choice experiment and the double - slit experiment with photons .our work suggests that we may have discovered a procedure to simulate quantum phenomena using event - based , particle - only processes that satisfy einstein s criterion of local causality , without first solving a wave equation . in this paper , we limit the discussion to event - by - event simulations of real eprb experiments .in fig . [ fig1a ] , we show a schematic diagram of an eprb experiment with photons ( see also fig . 
2 in ) .the source emits pairs of photons .each photon of a pair propagates to an observation station in which it is manipulated and detected .the two stations are separated spatially and temporally .this arrangement prevents the observation at station 1 ( 2 ) to have a causal effect on the data registered at station ( 1 ) .as the photon arrives at station , it passes through an electro - optic modulator that rotates the polarization of the photon by an angle depending on the voltage applied to the modulator .these voltages are controlled by two independent binary random number generators .as the photon leaves the polarizer , it generates a signal in one of the two detectors .the station s clock assigns a time - tag to each generated signal .effectively , this procedure discretizes time in intervals of a width that is determined by the time - tag resolution . in the experiment ,the firing of a detector is regarded as an event .as we wish to demonstrate that it is possible to reproduce the results of quantum theory ( which implicitly assumes idealized conditions ) for the eprb gedanken experiment by an event - based simulation algorithm , it would be logically inconsistent to `` recover '' the results of the former by simulating nonideal experiments .therefore , we consider ideal experiments only , meaning that we assume that detectors operate with 100% efficiency , clocks remain synchronized forever , the `` fair sampling '' assumption is satisfied , and so on .we assume that the two stations are separated spatially and temporally such that the manipulation and observation at station 1 ( 2 ) can not have a causal effect on the data registered at station ( 1 ) . furthermore , to realize the eprb gedanken experiment on the computer , we assume that the orientation of each electro - optic modulator can be changed at will , at any time .although these conditions are very difficult to satisfy in real experiments , they are trivially realized in computer experiments . in the experiment ,the firing of a detector is regarded as an event . at the event ,the data recorded on a hard disk at station consists of , specifying which of the two detectors fired , the time tag indicating the time at which a detector fired , and the two - dimensional unit vector that represents the rotation of the polarization by the electro - optic modulator . hence , the set of data collected at station during a run of events may be written as in the ( computer ) experiment , the data may be analyzed long after the data has been collected .coincidences are identified by comparing the time differences with a time window . introducing the symbol to indicate that the sum has to be taken over all events that satisfy for , for each pair of directions and of the electro - optic modulators ,the number of coincidences between detectors ( ) at station 1 and detectors ( ) at station 2 is given by where is the heaviside step function .we emphasize that we count all events that , according to the same criterion as the one employed in experiment , correspond to the detection of pairs .the average single - particle counts and the two - particle average are defined by and respectively . in eqs .( [ ex ] ) and ( [ exy ] ) , the denominator is the sum of all coincidences . 
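a minimal sketch of this counting procedure is given below, assuming the two stations' data have already been matched event-by-event into arrays of outcomes, time tags, and setting indices (the variable names are illustrative); the chsh combination is written in one common sign convention.

```python
import numpy as np

def correlation(x1, x2, t1, t2, a_idx, b_idx, a, b, W):
    """two-particle average E(a,b): only pairs whose settings are (a,b) and
    whose time tags coincide within the window W are counted."""
    sel = (a_idx == a) & (b_idx == b) & (np.abs(t1 - t2) <= W)
    n = sel.sum()
    # (C++ + C-- - C+- - C-+) divided by the total number of coincidences
    return (x1[sel] * x2[sel]).sum() / n if n else 0.0

def chsh(x1, x2, t1, t2, a_idx, b_idx, W, a=0, ap=1, b=0, bp=1):
    """S(a, a', b, b'; W); its maximum over the settings is the quantity
    studied as a function of W in the analysis below."""
    E = lambda u, v: correlation(x1, x2, t1, t2, a_idx, b_idx, u, v, W)
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
```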
for later use , it is expedient to introduce the function and its maximum we illustrate the procedure of data analysis and the importance of the choice of the time window by analyzing a data set ( the archives alice.zip and bob.zip ) of an eprb experiment with photons that is publicly available . in the real experiment , the number of events detected at station 1 is unlikely to be the same as the number of events detected at station 2 .in fact , the data sets of ref . show that station 1 ( alice.zip ) recorded 388455 events while station 2 ( bob.zip ) recorded 302271 events .furthermore , in the real eprb experiment , there may be an unknown shift ( assumed to be constant during the experiment ) between the times gathered at station 1 and the times recorded at station 2 .therefore , there is some extra ambiguity in matching the data of station 1 to the data of station 2 . a simple data processing procedure that resolves this ambiguity consists of two steps .first , we make a histogram of the time differences with a small but reasonable resolution ( we used ns ) .then , we fix the value of the time - shift by searching for the time difference for which the histogram reaches its maximum , that is we maximize the number of coincidences by a suitable choice of . for the case at hand , we find ns . finally , we compute the coincidences , the two - particle average , and using the expressions given earlier .the average times between two detection events is ms and ms for alice and bob , respectively .the number of coincidences ( with double counts removed ) is 13975 and 2899 for ( ns , ns ) and ( , ns ) respectively . as a function of the time window , computed from the data sets contained in the archives alice.zip and bob.zip that can be downloaded from ref .bullets ( red ) : data obtained by using the relative time shift ns that maximizes the number of coincidences . crosses ( blue ) : raw data ( ) . dashed line at : if the system is described by quantum theory ( see section [ quantumtheory ] ) . dashed line at : if the system is described by the class of models introduced by bell . ,width=283 ] , computed from the data sets contained in the archives alice.zip and bob.zip , using the relative time shift ns that maximizes the number of coincidences . bullets ( red ) : and ; crosses ( blue ) : and . , width=283 ] in fig .[ exp1 ] we present the results for as a function of the time window .first , it is clear that decreases significantly as increases but it is also clear that as , is not very sensitive to the choice of .second , the procedure of maximizing the coincidence count by varying reduces the maximum value of from a value 2.89 that considerably exceeds the maximum for the quantum system ( , see section [ quantumtheory ] ) to a value 2.73 that violates the bell inequality ( , see ref . ) and is less than the maximum for the quantum system .finally , we use the experimental data to show that the time delays depend on the orientation of the polarizer . to this end ,we select all coincidences between and ( see fig .[ fig1a ] ) and make a histogram of the coincidence counts as a function of the time - tag difference , for fixed orientation and the two orientations ( other combinations give similar results ) . the results of this analysis are shown in fig .[ fort.7 ] .the maximum of the distribution shifts by approximately 1 ns as the polarizer at station 2 is rotated by , a demonstration that the time - tag data is sensitive to the orientation of the polarizer at station 2 . 
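the time-shift search described above can be sketched as follows; the event matching (counting a station-1 tag as coincident if any shifted station-2 tag lies within the window) is one simple choice among several, and the scan range and resolution are placeholders.

```python
import numpy as np

def coincidences(t1, t2, W, dT=0.0):
    """number of station-1 tags with at least one station-2 tag within the
    window W after shifting the station-2 tags by dT."""
    t2s = np.sort(t2 + dT)
    left = np.searchsorted(t2s, t1 - W, side="left")
    right = np.searchsorted(t2s, t1 + W, side="right")
    return int(np.count_nonzero(right > left))

def best_time_shift(t1, t2, W, scan, resolution=0.5e-9):
    """pick the shift dT on a grid of the given resolution that maximizes the
    number of coincidences, mimicking the histogram procedure described above."""
    shifts = np.arange(-scan, scan, resolution)
    counts = [coincidences(t1, t2, W, dT) for dT in shifts]
    return shifts[int(np.argmax(counts))]
```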
a similar distribution of time - delays ( of about the same width )was also observed in a much older experimental realization of the eprb experiment .the time delays that result from differences in the orientations of the polarizers is much larger than the average time between detection events , which for the data that we analyzed is about 30000 ns . in other words , the loss in correlation that we observe as a function of increasing ( see fig . [ exp1 ] )can not be explained by assuming that we calculate correlations using photons that belong to different pairs . strictly speaking , we can not derive the time delay from classical electrodynamics : the concept of a photon has no place in maxwell s theory .a more detailed understanding of the time delay mechanism requires dedicated , single - photon retardation measurements for these specific optical elements .the crucial point is that in any real epr - type experiment , it is necessary to have an operational procedure to decide if the two detection events correspond to the observation of one two - particle system or to the observation of two single - particle systems . in standard`` hidden variable '' treatments of the epr gedanken experiment , the operational definition of `` observation of a single two - particle system '' is missing . in eprb - type experiments, this decision is taken on the basis of coincidence in time .our analysis of the experimental data shows beyond doubt that a model which aims to describe real eprb experiments should include the time window and that the interesting regime is , not as is assumed in all textbook treatments of the eprb experiment .indeed , in quantum mechanics textbooks it is standard to assume that an eprb experiment measures the correlation which we obtain from eq .( [ cxy ] ) by taking the limit .although this limit defines a valid theoretical model , there is no reason why this model should have any bearing on the real experiments , in particular because experiments pay considerable attention to the choice of . in experimentsa lot of effort is made to reduce ( not increase ) .according to the axioms of quantum theory , repeated measurements on the two - spin system described by the density matrix yield statistical estimates for the single - spin expectation values and the two - spin expectation value where are the pauli spin-1/2 matrices describing the spin of particle , and and are unit vectors .we have introduced the tilde to distinguish the quantum theoretical results from the results obtained from the data sets .the quantum theoretical description of the eprb experiment assumes that the system is represented by the singlet state of two spin-1/2 particles , where and denote the horizontal and vertical polarization and the subscripts refer to photon 1 and 2 , respectively . for the singlet state , and concrete simulation model of the eprb experiment sketched in fig .[ fig1a ] requires a specification of the information carried by the particles , of the algorithm that simulates the source and the observation stations , and of the procedure to analyze the data . in the following ,we describe a slightly modified version of the algorithm proposed in ref . 
, tailored to the case of photon polarization .* source and particles : * the source emits particles that carry a vector , representing the polarization of the photons that travel to station and station , respectively .note that , indicating that the two particles have orthogonal polarizations .the `` polarization state '' of a particle is completely characterized by , which is distributed uniformly over the whole interval . for the purpose of mimicing the apparent unpredictability of the experimental data , we use uniform random numbers . however , from the description of the algorithm , it will be clear that the use of random numbers is not essential .simple counters that sample the intervals in a systematic , but uniform , manner might be employed as well .* observation station : * the electro - optic modulator in station rotates by an angle , that is .the number of different rotation angles is chosen prior to the data collection ( in the experiment of weihs _ et al ._ , ) .we use random numbers to fill the arrays and . during the measurement processwe use two uniform random numbers to select the rotation angles and .the electro - optic modulator then rotates by , yielding .the polarizer at station projects the rotated vector onto its -axis : , where denotes the unit vector along the -axis of the polarizer . for the polarizing beam splitter, we consider a simple model : if the particle causes to fire , otherwise fires .thus , the detection of the particles generates the data .* time - tag model : * to assign a time - tag to each event , we assume that as a particle passes through the detection system , it may experience a time delay . in our model , the time delay for a particleis assumed to be distributed uniformly over the interval ] .there are not many options to make a reasonable choice for . assuming that the particle `` knows '' its own direction and that of the polarizer only , we can construct one number that depends on the relative angle : .thus , depends on only .furthermore , consistency with classical electrodynamics requires that functions that depend on the polarization have period .thus , we must have .we already used to determine whether the particle generates a or signal . by trial and error , we found that yields useful results . here, is the maximum time delay and defines the unit of time , used in the simulation and is a free parameter of the model . in our numerical work , we set . * data analysis : * for fixed and , the algorithm generates the data sets just as experiment does . in order to count the coincidences ,we choose a time - tag resolution and a coincidence window .we set the correlation counts to zero for all and .we compute the discretized time tags for all events in both data sets . here denotes the smallest integer that is larger or equal to , that is . according to the procedure adopted in the experiment ,an entangled photon pair is observed if and only if .thus , if , we increment the count .the simulation proceeds in the same way as the experiment , that is we first collect the data sets , and then compute the coincidences eq .( [ cxy ] ) and the correlation eq .( [ exy ] ) .the simulation results for the coincidences depend on the time - tag resolution , the time window and the number of events , just as in real experiments .figure [ fig2 ] shows simulation data for as obtained for d=2 , and . 
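a compact event-by-event sketch of the model just described is given below. the malus-law branching rule and the functional form and exponent of the time delay are illustrative assumptions (chosen to respect the period-pi requirement mentioned above); the reference gives the exact rules and parameter values used there.

```python
import numpy as np

def simulate_eprb(N, angles1, angles2, T0=1.0, d=4, tau=1.0e-3, seed=1):
    """event-by-event sketch: source, two observation stations, time tags."""
    rng = np.random.default_rng(seed)
    angles1 = np.asarray(angles1, float)
    angles2 = np.asarray(angles2, float)
    xi1 = rng.uniform(0.0, 2.0 * np.pi, N)   # polarization carried by photon 1
    xi2 = xi1 + np.pi / 2.0                  # photon 2 is orthogonally polarized
    data = []
    for pol, angles in ((xi1, angles1), (xi2, angles2)):
        m = rng.integers(len(angles), size=N)    # random setting chosen per event
        rel = pol - angles[m]                    # polarization relative to the modulator
        x = np.where(rng.random(N) < np.cos(rel) ** 2, 1, -1)  # malus-law rule (assumed)
        # time delay uniform in [0, T0 * |sin 2(rel)|^d]  (assumed form, period pi)
        delay = rng.random(N) * T0 * np.abs(np.sin(2.0 * rel)) ** d
        t = np.arange(N) * tau + delay           # emission time plus local delay
        data.append((x, t, m))
    (x1, t1, m1), (x2, t2, m2) = data
    return x1, t1, m1, x2, t2, m2
```

feeding the returned arrays into the coincidence-counting sketch given earlier, with a small window relative to the maximum delay, is how the dependence of the correlations on the window could be examined in this kind of model.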
in the simulation , for each event , the random numbers select one pair out of , where the angles and are fixed before the data is recorded .the data shown has been obtained by allowing for different angles per station .hence , forty random numbers from the interval [ 0,360 [ were used to fill the arrays and . for each of the events ,two different random number generators were used to select the angles and . the statistical correlation between and measured to be less than . from fig .[ fig2 ] , it is clear that the simulation data for are in excellent agreement with quantum theory . within the statistical noise ,the simulation data ( not shown ) for the single - spin expectation values also reproduce the results of quantum theory .additional simulation results ( not shown ) demonstrate that the kind of models described earlier are capable of reproducing all the results of quantum theory for a system of two s=1/2 particles .furthermore , to first order in and in the limit that the number of events goes to infinity , one can prove rigorously that these simulation models give the same expressions for the single- and two - particle averages as those obtained from quantum theory .starting from the factual observation that experimental realizations of the eprb experiment produce the data ( see eq .( [ ups ] ) ) and that coincidence in time is a key ingredient for the data analysis , we have described a computer simulation model that satisfies einstein s criterion of local causality and , exactly reproduces the correlation that is characteristic for a quantum system in the singlet state . we have shown that whether or not these simulation models produce quantum correlations depends on the data analysis procedure that is performed ( long ) after the data has been collected : in order to observe the correlations of the singlet state , the resolution of the devices that generate the time - tags and the time window should be made as small as possible . disregarding the time - tag data ( or ) yields results that disagree with quantum theory but agree with the models considered by bell .our analysis of real experimental data and our simulation results show that increasing the time window changes the nature of the two - particle correlations . 
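for reference, the quantum predictions quoted above can be evaluated directly; the standard photon-polarization singlet result, the two-particle average equal to minus the cosine of twice the angle difference, is used here, and the chsh combination reaches its maximum magnitude of two times the square root of two at the usual settings.

```python
import numpy as np

def E_quantum(a, b):
    # singlet-state two-particle average for photon polarization
    return -np.cos(2.0 * (a - b))

# standard settings (radians) that saturate the quantum bound
a, ap, b, bp = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = abs(E_quantum(a, b) - E_quantum(a, bp) + E_quantum(ap, b) + E_quantum(ap, bp))
print(S, 2.0 * np.sqrt(2.0))   # both approximately 2.828
```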
according to the folklore about bell s theorem , a procedure such asthe one that we described should not exist .bell s theorem states that any local , hidden variable model will produce results that are in conflict with the quantum theory of a system of two particles .however , it is often overlooked that this statement can be proven for a ( very ) restricted class of probabilistic models only .in fact , bell s theorem does not necessarily apply to the systems that we are interested in as both simulation algorithms and actual data do not need to satisfy the ( hidden ) conditions under which bell s theorem hold .furthermore , the apparent conflict between the fact that there exist event - based simulation models that satisfy einstein s criterion of local causality and reproduce all the results of the quantum theory of a system of two particles and the folklore about bell s theorem , stating that such models are not supposed to exist dissolves immediately if one recognizes that bell s extension of einstein s concept of locality to the domain of probabilistic theories relies on the hidden , fundamental assumption that the absence of a causal influence implies logical independence .the simulation model that is described in this paper is an example of a purely ontological model that reproduces quantum phenomena without first solving the quantum problem .the salient features of our simulation models are that they 1 .generate , event - by - event , the same type of data as recorded in experiment , 2 .analyze data according to the procedure used in experiment , 3 .satisfy einstein s criterion of local causality , 4 .do not rely on any concept of quantum theory or probability theory , 5 .reproduce the averages that we compute from quantum theory .we thank k. de raedt and a. keimpema for many useful suggestions and extensive discussions .bellexp / data.html[http://www.quantum.at / research / photonentangle/ bellexp / data.html ] .the results presented here have been obtained by assuming that the data sets * _ v.dat contain ieee-8 byte ( instead of 8-bit ) double - precision numbers and that the least significant bit in * _ c.dat specifies the position of the switch instead of the detector that fired .g. weihs , ein experiment zum test der bellschen ungleichung unter einsteinscher lokalitt , ph.d .thesis , university of vienna , http://www.quantum.univie.ac.at/publications/ thesis / gwdiss.pdf[http://www.quantum.univie.ac.at / publications/ thesis / gwdiss.pdf ] ( 2000 ) .nieuwenhuizen , in _ foundations of probability and physics - 5 _ , vol .1101 , ed . by l.accardi , g. adenier , c. fuchs , g. jaeger , a. khrennikov , j.a .larsson , s. stenholm ( aip conference proceedings , melville and new york , 2009 ) , vol .1101 , p. 127
|
we discuss recent progress in the development of simulation algorithms that do not rely on any concept of quantum theory but are nevertheless capable of reproducing the averages computed from quantum theory through an event-by-event simulation. the simulation approach is illustrated by applications to einstein-podolsky-rosen-bohm experiments with photons. quantum theory, epr paradox, computational techniques 02.70.-c, 03.65.-w
|
understanding the structure of a population and how it evolves in time has been a critical issue in modern societies .having started as an economic problem , it soon extended to an environmental one , and states take censuses periodically in order to use the results to design demographic policies .malthus was the first who presented the mathematical claim , which has been accepted as the fundamental principle of population dynamics , that `` population , when unchecked , increases in a geometrical ratio '' . in modern terms , he meant the exponential growth tendency of a population of size with a constant net growth rate , i.e. , with the time derivative ; this is often called the malthusian growth model .forty years after malthus published his essay , verhulst added the idea of the maximal capacity allowed by the environment , , to the growth model , so that the growth rate can be negative when the population size exceeds ; this is referred to as the logistic model .in addition to the inherent variety of dynamics it may exhibit , there are also other models reflecting the complexity in population dynamics , one of which has been introduced and termed the -logistic model .information on the structure of human population is not only available in many countries , but also very reliable owing to modern census techniques .various classifications therein allow deeper insight into how subpopulations develop and interact with each other . in this work ,we classify people according to their family names ( or surnames ) , and study the family name distribution .these studies can also be important from the viewpoint of genetics in biology , since the inheritance of the family name is often paternal , exactly like the inheritance pattern of the chromosome .furthermore , if one can identify quantitatively the origin of differences in family name distributions across countries , it can provide an understanding of the social mechanism behind the naming behavior in human societies . & & china & & korea & 1.0 & argentina & & austria & & berlin & 2.0 & france & & germany & & isle of man & 1.5 italy & & japan & 1.75 & netherlands & & norway & 2.16 & sicily & 0.461.83 & spain & & switzerland & & taiwan & 1.9 & united states & 1.94 & venezuela & & vietnam & 1.43 & [ table : summary ] the pattern of family name distributions in many countries have already been investigated ( see table i ) : in japan , the family name distribution has been shown to have a power - law dependency on the size of families , i.e. , with the exponent .later , families in the united states and berlin have also been reported to display power - law behavior with the similar exponent .the same power - law distributions with exponents and have also been measured for taiwanese family names and for names in the isle of man , respectively .extensive research in various countries ranging from western europe to south america has again found exponents around ( see refs . and references therein ) . in sharp contrast , the korean family name distribution has been recently investigated , revealing the very interesting behavior of .the korean distribution is very different since the cumulative distribution ( the number of family names with more than members , divided by the total number of family names ) becomes logarithmic , which results in an exponential zipf plot ( sizes versus ranks of families ) . throughout the present paper ,the rank of a family is defined according to its size in descending order , i.e. 
, the biggest family is assigned rank 1 , and the second biggest family has rank 2 , and so on . more strikingly , the exponentially decaying zipf plot suggests that the number of family names found in a population of size increases logarithmically in korea ( we observe the same behavior for the chinese family names reported in ref . ) , in sharp contrast to the corresponding results for other countries , where grows algebraically with . in this work ,we investigate the possible mechanism for the differences of family name distributions across countries , by using a simple model of population dynamics .we suggest that the difference originates from the rate of appearance of new family names , which is checked by empirical observation made for the history of korean family names . in more detail ,if new names appear linearly in time irrespective of the total population size , is obtained , whereas if the number of new names generated per unit of time is proportional to the population size , is concluded .we also investigate the family books for several family names in korea containing genealogical trees , and extract the family name distribution to construct the zipf plot , revealing that the exponential zipf plot in korea has been prevalent for at least 500 years .family names in other countries such as china , vietnam , and norway are newly investigated , and comparisons with existing studies lead us to the conclusion that there are indeed two distinct groups of different family name distributions .the present paper is organized as follows : we present our master equation formulation , and obtain the formal solution for the distribution function in sec .[ sec : formulation ] . a detailed analysisis then made in sec .[ sec : constant ] for the case of constant name generation rate , and historical observations are also discussed .section [ sec : branching ] is devoted to the other case when branching out from old to new names is allowed , which is followed by a summary in sec .[ sec : summary ] .we first introduce the master equation in a general form to describe the time evolution of the family size , and then present the formal solution obtained by using the generating function technique .let us define the probability for a class ( family ) to have number at time given that it started with at time : ,\ ] ] which is required to satisfy the initial condition with the kronecker delta if ( ) .the time evolution of is governed by the following master equation : p_{j , k+1}(s , t ) \nonumber \\ & & - [ \lambda_k(t ) + \mu_k(t ) + \beta_k(t ) ] p_{j , k}(s , t ) , \label{eq : master}\end{aligned}\ ] ] where we have made the continuous - time approximation that . for convenience , we take one year as the time unit , and thus the rate variables , , and defined in terms of the annual change of population . the first term in the right - hand side of eq .( [ eq : master ] ) describes the process in which the class with members increases its members by 1 , which occurs at the birth rate .the second term is for the opposite process that members is decreased to members , which occurs when one member either dies at the death rate , or invents a new family name at the branching rate .we consider only the branching process in which a person invents a new name ; changing a name from one to an existing one is not allowed in our model .the last term is for the change from either to or to , which occurs when a person is born , dies , and changes name , at rates , and , respectively . 
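the birth, death, and branching dynamics just described can be illustrated with a small stochastic sketch; all rates, initial conditions, and run lengths below are illustrative choices, and the two parameter regimes correspond to the two cases analyzed in the following sections (a constant name-generation rate versus branching proportional to the population).

```python
import numpy as np
rng = np.random.default_rng(0)

def simulate_names(T, lam=0.03, mu=0.01, beta=0.0, nu0=1.0, n0=10):
    """discrete-time sketch of the family-name dynamics: lam, mu, beta are the
    per-capita annual birth, death and new-name (branching) probabilities;
    nu0 is the population-independent number of new names appearing per year.
    returns the family sizes at time T, ordered by rank (largest first)."""
    sizes = [1] * n0
    for _ in range(T):
        new = []
        for i, k in enumerate(sizes):
            births = rng.binomial(k, lam)
            deaths = rng.binomial(k, mu)
            branch = rng.binomial(k, beta)       # members founding new names
            sizes[i] = max(k + births - deaths - branch, 0)
            new.extend([1] * branch)
        new.extend([1] * rng.poisson(nu0))       # names appearing at a constant rate
        sizes = [k for k in sizes + new if k > 0]
    return np.array(sorted(sizes, reverse=True))

# constant generation rate (beta = 0): exponentially decaying zipf plot
korea_like = simulate_names(T=500, beta=0.0, nu0=0.5)
# branching proportional to the population (beta > 0): power-law-like distribution
power_like = simulate_names(T=500, beta=2e-4, nu0=0.0, n0=100)
```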
in this work , we allow birth and death rates to depend on time , and write them as the prefactor is easily understood since the family with members has a chance proportional to to be picked up .we henceforth also assume that to describe a population growing in time .the solution of the master equation [ [ eq : master ] ] is easily found by using the generating function written as ( see , e.g. , ref . ) with the initial condition [ see eq .( [ eq : pjkst ] ) ] .it is straightforward to get the following partial differential equation for by combining eqs .( [ eq : master ] ) and ( [ eq : psi ] ) : with , which is written in the simpler form by introducing new variables and as the solution should be written as , and the functional form of is determined from the initial condition ( we henceforth set , i.e. , the class started from only one member ) : ^{-1},\end{aligned}\ ] ] where . by expanding the generating function , we reach the desired probability distribution where with .so far we have focused on the size of a class first introduced at a certain time . to derive the overall population distributionobserved at time , one needs to know when each class was introduced .let represent the rate at which a class is introduced .if the history begins at , the resulting population distribution at time is given by where is the total number of family names at time .although one can think of a further generalization using different time - dependent functions , , , for the corresponding rate variables in eq .( [ eq : lambda ] ) , for simplicity we restrict ourselves only to the identical form . within this limitation ,it is noteworthy that our expression for the family name distribution in eq .( [ eq : pkt ] ) applies for a variety of different situations for arbitrary and .for example , the widely used simon model corresponds to the situation that const and with the total population .it is to be noted that the use of introduces an effective competition among individuals : in one unit of time , only some fixed number of individuals are allowed to be born or die , which yields a linear increase of population in time , different from what really happened in human history .accordingly , we focus below on the case to have exponential growth of the population ; however , we consider different choices for .the assumption of time - independent rates with [ see eq .( [ eq : lambda ] ) ] results in . without knowing the details of the generation mechanism of new family names , it is plausible to assume that new family names are introduced into the population at the rate which contains both the population - independent part ( ) and the population - proportional part ( ) .the second term can be easily motivated if we assume that each individual invents a new family name at a given probability .the population - independent part of the name generation rate should also be included to describe , e.g. , immigration from abroad .let us consider a family that started at time . the expected family size at time is computed to be [ see eq .( [ eq : psi ] ) ] which yields the self - consistent integral equation ds,\end{aligned}\ ] ] or , in the differential form , with the solution as is expected , the population - proportional part in due to the change of names ( from the existing one to a new one ) has nothing to do with the increase or decrease of the total population , and thus only the population - independent part in enters .when family names appear uniformly in time and no branching process occurs , i.e. 
, we obtain , via change of the integration variable from to [ see eqs .( [ eq : eta ] ) and ( [ eq : pkt ] ) ] , which yields it is also straightforward to get the number of family names which , combined with the total population , yields the expression .interestingly , the above results are in perfect agreement with what has been found for korean family name distribution .the cumulative family name distribution becomes logarithmic ( i.e. , ) , which gives an exponentially decaying zipf plot where the size is displayed as a function of the rank of the family .furthermore , one can show directly from that the number of family names found for the population size increases logarithmically , i.e. , .the assumption of the constant rate of new name generation is very plausible in korean culture : more or less , it is considered as taboo to invent a new family name , and in korean history very few names have been introduced ; there were only 288 family names in the year 2000 .furthermore , only 11 names newly appeared between 1985 and 2000 , which seems to imply that in korea is extremely close to zero .( if we assume that , corresponding to the upper limit estimation of , we get per year . ) korea has preserved its family name system for more than two millennia , and many korean families still keep their own genealogical trees , from which their origins can be rather well dated ., as a function of time in units of years .in korea , grows approximately linearly in time while the population growth is exponential.,title="fig:",width=288 ] , as a function of time in units of years .in korea , grows approximately linearly in time while the population growth is exponential.,title="fig:",width=288 ] in order to check the validity of the assumption of the constant generation rate of names , we collect information about the origins and sizes of family names from publicly accessible sources . collecting the sizes and times of appearance for 178 family names , around 60% of those existing , fig .[ fig : origin ] is obtained . in fig .[ fig : origin](a ) , we show the present size of each family versus the time when it first appeared and it seems to be in accord with the exponential growth in eq .( [ eq : kst ] ) . in fig .[ fig : origin](b ) , we plot the number of family names as a function of time . although we have included only 178 names , the plot is again in agreement with the linear increase of in eq .( [ eq : nf ] ) over a broad range of time .we emphasize here that the number of korean family names increases much more slowly than the total population .we also use several family books containing genealogical trees .although these books contain only the paternal part of the trees , the family names of women who were married to the members of the family were recorded ( in korea , women do not change their family names after their marriages ) .we use the information about family names of women at various periods of time to plot fig .[ fig : kiet ] .it is clearly seen that the size of the family versus the family rank decays exponentially for a broad range of periods , which confirms that the exponential zipf plot in korea has been prevalent for a long time and is not a recent trend . versus the rank of the family ( zipf plot ) extracted from the family names of married women in family books . 
the exponential decay has been valid for at least 500 years in korean history.,width=288 ] we have shown above that a family name distribution of the form is closely related to the constant generation rate of new names , i.e. , , which has also been validated from empirical historical observations . for another example , we present the result of our analysis for chinese family names in ref . , where chinese are sampled with family names found .although only the top chinese family names are available in ref . , the rank - size distribution ( zipf plot ) appears to have preserved an exponential tail for the almost a millennium , as shown in fig .[ fig : chinese](a ) .moreover , the number of family names increases logarithmically with the number of people , as depicted in fig .[ fig : chinese](b ) , supporting our argument .( the zipf plot ) . the exponential shape has been maintained from the time of the song dynasty ( 9601279 ) .( b ) number of people ( ) versus the number of family names found therein ( ) , collected in each province of china , showing clearly .,title="fig:",width=288 ] ( the zipf plot ) .the exponential shape has been maintained from the time of the song dynasty ( 9601279 ) .( b ) number of people ( ) versus the number of family names found therein ( ) , collected in each province of china , showing clearly .,title="fig:",width=288 ] we next pursue the answer to the question of how the distribution changes if new names are produced at a rate that is not fixed but grows with the population size .if family names are allowed to branch out , the exponent is altered . with being positive , is dominated by the exponential growth in the long run [ see eq .( [ eq : nt ] ) ] which we use to compute in eq . ([ eq : pkt ] ) as follows : consequently , the family name distribution in the case of has in agreement with ref .it is very reasonable to assume that is much smaller than , and we expect in most countries .indeed , the united states and berlin have , which , by using the relation discussed in ref . , leads to the conclusion that and are proportional to each other .of course , one can confirm this linear relation from the direct calculation of the number of family names : ,\ ] ] which confirms that as .the above result of should be used carefully when : if is strictly zero , one can not use the assumption , and we recover the result as previously shown . from the publicly accessible population information , we estimate that the swedish population increased at the rate per year during 20042006 . in the same period of time ,about 100 new family names were introduced per month , which gives us a rough estimate .accordingly , , which makes the assumption we made above very plausible .the number of family names in sweden is known to be somewhere between 140 000 and 400 000 depending on how we count them .together with the total population of about 9 millions , we confirm that , which is in accord with our expectation that .the empirical findings we have referred to are listed in table i , from which we suggest two main categories of family name systems : one with and a logarithmic increase of versus ( korea and china ) , and the other with and a power - law increase of ( other countries ) .the latter category has been prevalent in the literature , to which we also add norway with ( fig . [fig : norway ] ) and vietnam with ( fig .[ fig : vietnam ] ) . 
versus the family size of norwegian family names , based on the survey in 2007 .the power - law behavior is clearly seen.,width=288 ] found in certain numbers of people .both show power - law behaviors ., title="fig:",width=288 ] found in certain numbers of people .both show power - law behaviors ., title="fig:",width=288 ] we suggest above that the existence of two groups of family name distributions originates from the difference in new name generation rates , which reflects the existence of a very different social dynamics behind the naming behaviors across different cultures .we also point out that , due to the unavoidable simplifications made in our analytic model study , we are not able to clearly explain the spread of in the second group of family names . for example , in the history of vietnam , when a dynasty was ruined many vietnamese belonging to the fallen dynasty changed their family names into existing ones , which our framework can not take into account .another interesting case is the japanese system .again , a japanese family name rarely undergoes the branching process these days , but one finds the algebraic dependency of , which indicates the fact that many japanese people had to adopt their family names by governmental policy about a century ago .the diversity ensured at the creation appears to be maintained up to now characterizing the japanese family name system .consequently , the japanese name distribution can not be successfully explained by our model in which the limit is taken .another peculiar observation has been found in sicily : the surname distribution from one of its communes shows , possibly originating from the effects of isolation ; mathematical treatment of this population has not been carried out . within these limitations of our model studyin which various simplifications are made implicitly and explicitly , we strongly believe that such an idealization in general helps one to sensitively check the reality and identify the most important issue from all the ingredients , providing a deeper understanding and insight .in summary , we analytically investigated the generating mechanism of observed family name distributions .whereas the traditional approaches from the simon model are based on implicit assumptions about competition within the population , we instead started from the first principle of population dynamics , the malthusian growth model .the existence of branching processes in generating new family names was pointed out as the crucial factor in determining the power - law exponent : with and without the branching process , and , respectively , were obtained .genealogical trees collected for korean family names were analyzed to confirm that the total number of names increased linearly in time , justifying the assumption made in the analytic study .we additionally reported chinese , vietnamese , and norwegian data sets to examine our argument , which , combined with existing studies , lead us to the conclusion that there are two groups of family name distributions on the globe and that these differences can be successfully explained in terms of the differences in new name generation rates .we thank h. jeong for providing us data from korean family books , and p. holme for useful information on swedish family names .we also thank statistics norway for the norwegian data .this work was supported by the korea research foundation grant funded by the korean government ( moehrd ) , grants no .krf-2005 - 005-j11903 ( s.k.b . 
) , no .krf-2006 - 211-c00010 ( h.a.t.k . ) , and no .krf-2006 - 312-c00548 ( b.j.k . ) .
|
although cumulative family name distributions in many countries exhibit power-law forms, there also exist counterexamples. the origin of the different family name distributions across countries is discussed analytically in the framework of a population dynamics model. combined with the empirical observations made, it is suggested that those differences in distributions are closely related to the rate of appearance of new family names.
|
suppose we have a domain and a domain contained in an dimensional surface in ; here , denotes the extended source , and denotes the target domain , receiver , or screen to be illuminated .let and be the indices of refraction of two homogeneous and isotropic media i and ii , respectively .suppose from the extended source , surrounded by medium i , radiation emanates in the vertical direction with intensity for , and the target is surrounded by medium ii .that is , all emanating rays from are collimated .a _ parallel refractor _ is an optical surface , interface between media i and ii , such that all rays refracted by into medium ii are received at the surface with prescribed radiation intensity at each point . assuming no loss of energy in this process , we have the conservation of energy equation . when medium ii is denser than medium i ( i.e. ) , estimates are proved in , and the existence of refractors is proved in .the purpose of this paper is to consider the case when .this has interest in the applications to lens design since lenses are typically made of a material having refractive index larger than the surrounding medium .in fact , if the material around the source is cut out with a plane parallel to the source , then the lens sandwiched between that plane and the constructed refractor surface will perform the desired refracting job .when the geometry of the refractors is different than when ; in fact , the geometry is determined by hyperboloids instead of ellipsoids .in addition , in the case , total internal reflection can occur and one needs additional geometric conditions on the relative configuration between the source and the target so that the target is reachable by the refracted rays . to obtain existence and regularity of refractorswhen , the use of hyperboloids requires non - trivial changes in some of the arguments used in when .the main differences are in the set up of the problem , in the arguments to obtain global support from local support , section [ sec : localtoglobal ] , and in the proof of existence .our results are local ; that is , we only need to assume local conditions in a neighborhood of a point in the extended source and the target .the main result of the paper is theorem [ thm : holdercontinuityresult ] where estimates are proved .we remark that most results do not involve the energy distribution given in the source and target , and conservation of energy is only used to prove existence in theorem [ thm : existencediscretecase ] .for instance , the fact that local refractors are global , theorem [ localtoglobal ] , just follows from the geometric assumptions in section [ asstar ] ; see condition .in addition , theorem [ lowerboundtheorem ] only requires geometric assumptions .properties of the target measure are necessary only to obtain the hlder estimates , theorem [ thm : holdercontinuityresult ] .our results are structural , in the sense that they only depend on the geometric conditions assumed and do not depend on the smoothness of the measures given in the source and target. 
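the refraction step discussed above, and reviewed in detail in the next section, can be sketched numerically with the standard textbook vector form of snell's law; the formula and the total-internal-reflection test below are the generic ones, stated here as an assumption rather than as the paper's displayed equations, with kappa denoting the ratio of the incidence-side index to the transmission-side index.

```python
import numpy as np

def refract(d, nu, kappa):
    """vector form of snell's law (standard textbook sketch).
    d: unit direction of the incident ray, pointing toward the surface;
    nu: unit normal with d . nu < 0 (pointing back into the incidence medium);
    kappa: ratio n_incidence / n_transmission (> 1 in the case studied here).
    returns the refracted unit direction, or None if the ray undergoes
    total internal reflection."""
    d, nu = np.asarray(d, float), np.asarray(nu, float)
    cos_i = -float(d @ nu)
    sin2_t = kappa**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:          # beyond the critical angle: total internal reflection
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return kappa * d + (kappa * cos_i - cos_t) * nu

# example: a vertical ray hitting a slightly tilted interface, kappa = 1.5 (assumed)
t = refract(d=[0.0, 0.0, 1.0], nu=[np.sin(0.2), 0.0, -np.cos(0.2)], kappa=1.5)
```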
problems of refraction have generated interest recently for the applications to design free form lenses and also for the various mathematical tools developed to solve them .for example , the far field point source refractor problem is solved in using mass transport .the near field point source refractor problem is considered in and .more general models taking into account losses due to internal reflection are in .numerical methods have been developed in and for the actual calculation of reflectors , and recently in for the numerical design of far field point source refractors .a significant amount of work has also been done to obtain results on the regularity of reflectors and refractors .the organization of the paper is as follows .section [ sec : definitionsandpreliminaries ] contains results concerning estimates of hyperboloids of revolution .the precise definition of refractor is in section [ condi ] , and the structural assumptions on the target that avoid total reflection are in section [ subsec : structuralassumptionsontarget ] .the derivatives estimates needed for hyperboloids are in section [ subsec : derivativeestimates ] .section [ asstar ] contains assumptions on the target modeled on the conditions introduced by loeper in the seminal work ( * ? ? ?* proposition 5.1 ) . in section [ sec : localtoglobal ] , using the geometry of the hyperboloids , we prove that if a hyperboloid supports a parallel refractor locally , then it supports the refractor globally provided the target satisfies the local condition .this resembles the condition ( a3 ) of ma , trudinger and wang introduced in the context of optimal mass transport .the main results are in section [ main ] , in particular , section [ subsec : holderestimatesfromgrowthconditions ] contains the proof of the hlder estimates .finally in the appendix , section [ app ] , we discuss and establish the existence of refractors satisfying the energy conservation condition .we briefly review the process of refraction .points in will be denoted by .we consider parallel rays traveling in the unit direction .let be a hyperplane with outward pointing unit normal and .we assume that medium is located in the region below and media in the region above .in such a scenario , a ray of light emanated from in the direction strikes at and , by snell s law of refraction , gets refracted in the unit direction where since .the refracted ray is , for ; see figure [ fig : snell ] .in particular , if and the hyperplane is so that the unit upper normal , then and the refracted unit direction is with .with this notation we have .since medium is more dense than medium , total internal reflection can occur , ( * ? ? ?1.5.4 ) . to avoid this we assume , or equivalently , ; see ( * ? ? ?* lemma 2.1 ) where and are reversed . , width=192 ] fix . a two - sheeted hyperboloid in with upper focus at and lower focus at has equation the semi - axis with direction is , the semi - axis with direction is , and the center of symmetry is the point .moreover , the upper vertex is , and the lower vertex is .hence the distance between the foci is and the distance between the vertices is by definition , the eccentricity is , and so the eccentricity equals .the lower sheet ( facing downwards ) of the hyperboloid is given by which can be written as the graph of the function suppose the region above has refractive index , and the region below has refractive index , with . then we have from ( * ? ? 
?* section 2.2 ) and the reversibility of optical paths that each ray with direction striking from below the graph of at the point is refracted into a ray passing through the upper focus ; see figure [ fig : refractioninhyperboloid ] .therefore , lies along the ray with , with given by .conversely , a fact that will be used on multiple occasions throughout this paper ..02 in given , let us define if , then is the unique lower sheet of a two - sheeted hyperboloid with upper focus at passing through , and it is thus described by notice that . we are given a source domain surrounded by medium and a target , a compact hypersurface in , surrounded by medium , with .informally , a parallel refractor from to is the graph of a function defined on that refracts all vertical rays emanating from into .the hyperboloid is said to support at the point , if there exist and such that with equality at .we will show that the existence of supporting hyperboloids depends on the relative positions between and ; this will lead to a precise notion of refractor given in definition [ def : refractor ] . also from physical reasons ,the refracting surface given by must be above the source : has thus to be positive in .this means that the supporting hyperboloids must satisfy which immediately imposes a condition on .in fact , first notice that from we have if at some , then we have that is fix and satisfying . by calculationwe get that since we need all the s to be positive in , we want notice that implies that the quantity inside the last square root is positive .fixing and letting , is equivalent to which squaring imposes a condition on , i.e. the corresponding quadratic equation in has roots first observe that . because there is such that and since we obtain which is equivalent to .so to have the inclusion we must have from that but from it is easy to see that is impossible .so to have the inclusion we must have that is , we now choose a uniform bound for in .let where is the projection onto of the target .we require that for this to be well defined we need the right hand side to be positive , which means so we assume that the target satisfies the condition we can now define refractor as follows .[ def : refractor ] let be defined by and assume .the function is a refractor from to if for each there exists and such that for all with equality at . from here onwards, we will assume that is convex , and ] ; see figure [ fig : figurefarfield ] in the appendix .let us also suppose the following compatibility condition : in appendix [ app ] , we show under this configuration the existence of such a refractor .more precisely , we will prove that , for any , , , one can choose , sufficiently large , and , both depending only on and , such that holds and there exists a refractor in the sense of definition [ def : refractor ] ; see theorem [ thm : existencediscretecase ] and the comment afterwards .in addition , the refractor constructed there satisfies the energy condition . since , total reflection can occur , ( * ? ? ?* section 1.5.4 ) . to avoid this, we require that the target satisfies this means the following : if for each we consider the upward cones with vertex at and opening , then is equivalent to say that . if , then , and since the cones are vertical , we have .therefore if we assume , then holds choosing appropriately . for example , if and we look at the cones with , we see that the set is a cone with the same opening and vertex at the point , with . 
if we choose sufficiently small such that , then intersected with the slab ] , there is no total reflection , that is , condition holds and satisfies the previous structural assumptions .let us first observe that hyperboloids are uniformly lipschitz hypersurfaces .indeed , a direct calculation shows that therefore , hence , by the definition of refractor , we conclude that if we interchange the roles of and , we get a uniform lipschitz bound for the refractors .the above argument suggests that obtaining higher derivative estimates for will allow us to obtain higher derivative estimates for .we calculate below the relevant derivatives of that will be used .fix , and put .for the derivative in , we notice that and thus . from \frac{\partial c(x_0,y)}{\partial x^0_{n+1 } } , \end{aligned}\ ] ]and so we get \bigg|\frac{\partial c(x_0,y)}{\partial x^0_{n+1 } } \bigg| \nonumber \\ & \leq \left[\left(\frac{\kappa}{\kappa^2 - 1}\right ) + \frac{1}{\kappa^2 - 1 } \right ] ( \kappa + 1)= \frac{\kappa + 1}{\kappa - 1}. \label{derivlastcoord}\end{aligned}\ ] ] next we calculate the second derivatives and get , for , that this gives the mixed second derivatives in and are , for and , since , we have .therefore , it is evident from the above calculations that in order to bound the derivatives of in a uniform manner , we must obtain a positive lower bound for when and .for this , we will use the structural assumption .in fact , let and . since , we first have next , if we let , then and .thus , for all .it then follows that if is given , we have from , we get since , we then obtain that is a lower bound for .clearly , this bound yields uniform bounds in and , as well as for higher order derivatives .we explicitly remark that the first order derivative bound in is independent of the bounds for , and thus independent of the compatibility assumptions .it depends just on the fact that the relevant supporting objects in our problem are hyperboloids , and it gives automatically global lipschitz bounds for the refractor .this is in strong contrast with the case considered in .in fact , the supporting objects in are ellipsoids and to obtain global lipschitz bounds for them a condition between and is needed , see ( * ? ? ?* section 2.3 ) .the derivative bounds and the properties of hyperboloids also imply the following estimates , which will be used in section [ main ] .[ unposu ] let , , .let also , and assume for some positive .then we recall that .since we have .therefore , \partial_{x^0_{n+1}}c(x^0 , y ) > 0.\ ] ] it follows that .for the upper bound we have , for some ] .the points in {x_0} ] . [ regularityhypothesis ]fix .we say the target is regular from if there exists a neighborhood and depending on such that , for all and , we have for all with , ] mimics the notion of a -segment in the theory of optimal mass transport ( cf . ) , while the condition is akin to the ma - trudinger - wang condition ( a3 ) in the regularity theory of optimal transport maps ( cf . 
, ) .a watershed for the regularity theory of mass transport is the result of loeper , which shows the condition ( a3 ) is equivalent to a maximum principle for -support functions .this forms the motivation for the regularity hypothesis and the theorem above is the analog of this characterization for the case of the parallel refractor .loeper s maximum principle allowed him to obtain a result which kim and mccann refer to as the _ dasm _ ( double - mountain above sliding - mountain ) theorem in the context of optimal mass transport .this in turn enabled loeper to obtain a local - implies - global result for -support functions .this section is devoted to establishing the analog of this local - implies - global result in the setting of the parallel refractor .we refer the reader to the end of this section for further comments .we say that the target satisfies condition ( aw ) from if for all written as , and for all , we have where , and .equivalently , if , then the condition ( aw ) requires that for all , we have \xi_k \xi_{\ell } \geq 0.\ ] ] 0.6 cm let us put , where we recall that as defined in .we denote . by , for ,we have since , we obtain if , then , and so hence , now if , then we can write . recalling that , we therefore obtain the condition ( aw ) is then equivalent to . on the other hand , by setting , we see that .notice that if , then = 2 ( \kappa^2 - 1 ) \left\langle v_{\epsilon } , \eta \right\rangle \left\langle \xi , \eta \right\rangle = 0.\ ] ] it follows that \left\langle d^2_v g ( v , x ) \xi , \xi \right\rangle.\ ] ] by the derivative estimate for , we have ( actually it is strictly less than 1 ) .thus , for all and .therefore , the condition implies is a positive concave function for each .[ maxprinciple ] suppose that the condition holds from some .let be given by and .for , let , and define . denoting , then in particular ,for all , assume for simplicity .let us first make note of the following : we notice that the set is contained in a hyperplane .indeed , if , then and . applying in the case of equality with and , we obtain ; + . hence , .we can rewrite this as , where . in conclusion , implies .analogously , if , then where .recall that any is given by . denoting ,we may rewrite and as the -vectors .this is because the first components of are given by the vector , while the -st component is given by , since .hence , .since , it follows that .let us now prove a couple of claims . 1 .suppose .then as a matter of fact , since , we have , which implies by + .+ if , we then have + . + the last two inequalities give + . + in particular , this implies .we want to conclude that .this is possible thanks to the structural assumption .in fact , for any and , we have + . +this gives , which implies .thus , assuming , we have , and the first implication is proved .the proof of the second implication we claimed is completely analogous .let us note that so far , we have not used the condition , which we have shown to be equivalent to the concavity of .we will now use this fact in the proof of the following claim . 1 . if and , then .+ assume and .notice that where .hence , it is enough to show that .first assume and .we will show that with and for some . by comparing the first components of and , we find that the above equality holds if and only if therefore , we choose such that since , we have and .it follows that and have the same sign . from the last identity, we also obtain by the concavity of , we have .hence , and thus as well .+ now consider the case and . 
from the concavity of , we have . if we write , then , and so .+ finally , the last case to consider is .in such case and , and so both inequalities and can not hold simultaneously .the above claims complete the proof of the theorem .indeed , by the second claim , if , then either or .hence , it follows from the first claim that or . for a function ] .it can be proved following the arguments in ( * ? ? ?* lemma 5.1 ) by exploiting and the lower bound .the constant may be chosen dependent only on the bound for .the following lemma is crucial for the regularity of refractors , assuming regular from a point with respect to definition [ regularityhypothesis ] .we omit the proof since it can be completed proceeding as in ( * ? ? ?* lemma 5.2 ) .the main needed ingredients for the proof are : the concavity of , the estimates - , lemma [ unposu ] , lemma [ lemmafour ] , and lemma [ cuno ] . [ taylor ] suppose is a parallel refractor and the target is regular from .there exist positive constants depending on such that if and , , and , then there exists ] , ] such that {x_0^ * } : \lambda \in \left[\frac{1}{4 } , \frac{3}{4 } \right ] \right\ } \right ) \cap \sigma \subset f_u(b_{\eta}(x_0)),\ ] ] where , , and .let and suppose and satisfy with this choice of .define and .since , the lemma [ taylor ] applies .let be the point in that lemma and let .fix {x_0^ * } : \lambda \in \left[\frac{1}{4 } , \frac{3}{4 } \right ] \right\ } \right ) \cap \sigma ] with ] such that if , then we have the inclusion {x_0^ * } : \lambda \in \left[\frac{1}{4 } , \frac{3}{4 } \right ] \right\ } \right ) \cap \sigma \subset f_u(b_{\eta}(x_0)),\ ] ] where , and .it now follows from and that : \lambda \in \left[\frac{1}{4 } , \frac{3}{4 } \right ] \right\}\right ) \cap \sigma \right ] \\ & \leq \sigma[f_u(b_{\eta}(x_0 ) ) ] \leq c_0 \eta^{\frac{n}{q}},\end{aligned}\ ] ] which immediately implies .the value of shown in can be obtained using the definitions of and .moreover , it follows that is single - valued for all . take .we first show that is differentiable at and , where and . indeed ,for and sufficiently small , we obtain from and the definition of parallel refractor that for . for the reverse inequality ,let , and . once again , by the definition of parallel refractor , for some ] ( in the sense of definition [ def : refractor ] ) for which the compatibility conditions are satisfied , and such that the main step is to prove this in the discrete case , i.e. , when with and .once this is established , existence when is a general radon measure follows by an approximation argument , see , e.g. , .[ [ geometric - configuration - of - the - target - for - existence - of - refractors ] ] geometric configuration of the target for existence of refractors ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * step 1 . * we want to choose such that if for , then that is , we will choose such that we write we have for some .so on the other hand , so holds if we choose such that so this is equivalent to choose such that notice that , and so .now the function is strictly increasing in and , so to get some satisfying , we need that it is easy to see by calculation that the inverse function of is .since for , we have so holds .in addition , must satisfy that ; so we pick as large as possible in this interval and satisfying ( notice that from , always holds for all sufficiently small ) .that is , we define with given in .* step 2*. 
we prove that the vectors in the set are bounded below by a positive constant depending only on the constant in ( and ) , the constant concerning the location of the target , , and .that is , we prove that if , then , for all with to be calculated ; see .suppose that for some there is such that .we shall prove that this implies that which implies that contradicting the energy conservation condition .we will prove that which clearly implies .we have for some on the other hand , so to prove it is enough to show that if .since is strictly increasing , if we choose with then so we need to choose such that we have , so to find satisfying the last inequality , we need to have we have from if we choose sufficiently large ( satisfying also and ) and satisfying ( notice that is the width of the slab containing the target , and ) and choose with then follows .that is , * step 3*. we shall prove the following upper bound for each refractor when from step 1 , and : with where , for all is sufficiently large where is given in .first notice that from for ; and if we prove that then holds .in fact , from and since is increasing .now , so is equivalent to which holds true since and . notice that to obtain the bound we must take sufficiently large satisfying the inequalities , , and .we shall prove now that we can take even larger so that holds .we first show we can choose large so that this inequality is equivalent to writing , , and noticing that and , we see that hence holds for all sufficiently large , since . to show that we can choose large so that it is enough to show , since , that we can choose large such that the last inequality is equivalent to which as before holds true for all sufficiently large . in summary , we can choose large depending on , and such that and hold true for any refractor with , and .that is , the graph of the refractor is contained in the cylinder and for all ,}\\ & \text{theray intersects in at most one point,}\notag\end{aligned}\ ] ] there is a parallel refractor $ ] satisfying where with and the energy conservation condition .using the last theorem and proceeding as in the proof of the existence ( * ? ? ?* theorem 3.4 ) , we obtain by discretization that theorem [ thm : existencediscretecase ] holds true for an arbitrary radon measure on a general target satisfying and the energy conservation condition . c. e. gutirrez , _ the near field refractor _ , geometric methods in pde s , conference for the 65th birthday of e. lanconelli , lect .notes semin .interdiscip . mat ., vol . 7 , semin . interdiscip( s.i.m . ) , potenza , 2008 , pp .
|
we consider the parallel refractor problem when the planar radiating source lies in a medium having higher refractive index than the medium in which the target is located . we prove local estimates for parallel refractors under suitable geometric assumptions on the source and target , and under local regularity hypotheses on the target set . we also discuss existence of refractors under energy conservation assumptions .
|
kinetic transport for a gas or plasma involves particle interactions such as collisions , excitation / deexcitation and ionization / recombination .simulation of these interactions is most often performed using the direct simulation monte carlo ( dsmc ) method or one of its variants , in which the actual particle distribution is represented by a relatively small number of numerical particles , each of which is characterized by state variables , such as position and energy .interactions between the numerical particles are performed by random selection of the interacting particles and the interaction parameters , depending on the interaction rates . correctly sampling these interactionsinvolves several computational challenges : first the number of particles can be large ( e.g. , ) and the number of possible interaction events can be even larger ( e.g. , for or ) .second , the interaction probabilities vary throughout the simulation since interactions change the state of the interacting particles .these two difficulties are routinely overcome using acceptance - rejection sampling .third , the interaction rates can be nearly singular , for example in a recombination event between an ion and two electrons ( described in more detail in section [ section : example2 ] ) .this creates a wide range of interaction rates that makes acceptance - rejection computationally intractable .figure [ figure : challenge ] illustrates these challenges and how different methods can handle them .the sampling method presented here , which we call reduced rejection , was developed to overcome the challenges of a large number of interaction events with fluctuating and singular rates .[ ! htp ] simulation of kinetics requires sampling methods that generate independent samples .this rules out markov chain monte carlo schemes , such as metropolis hastings , gibbs sampling , and slice sampling .although these methods are very powerful and are used very often , this paper focuses on sampling methods that generate independent samples .there are several efficient algorithms for simulation of discrete random variables , notably marsaglia s table method and the alias method .however , these methods require pre - processing time and , therefore , are not efficient for sampling from a random variable whose probability function changes during the simulation . for continuous random variablesthere are several different algorithms ; nevertheless , each of these algorithms has its own constraints .for example , inverse transform sampling method requires knowledge of the cumulative distribution function and evaluation of its inverse , box - muller only applies to a normal distribution , and ziggurat algorithm can be used for random variables that have monotone decreasing ( or symmetric unimodal ) density function . the algorithm of choice for general ( both continuous and discrete ) random variables that generates independent samples and does not require preprocessing time is acceptance - rejection method ( see for example ) .let be a real - valued function on the sample space .let ] .we say function _ encloses _ function if for all in the sample space . 
the idea of acceptance - rejection method is to find a proposal function that encloses function .suppose we already have a mechanism to sample according to , then acceptance - rejection algorithm enables us to sample according to .in most cases the constant function is used as the proposal function .the main drawback of acceptance - rejection method is that it might reject many samples .indeed the ratio of the number of rejected samples to the number of accepted samples is approximately equal to the ratio of the area between curves and to the area under the curve . for many given distributions , finding a good proposal function that encloses it without leading to many rejected samplesis difficult .one extension to acceptance - rejection method is adaptive rejection sampling .the basic idea of adaptive rejection sampling is to construct proposal function that encloses the given distribution by concatenating segments of one or more exponential distributions .as the algorithm proceeds , it successively updates the proposal function to correspond more closely to the given distribution .another extension to acceptance - rejection method is economical method .this method is basically a generalization of alias method for continuous distributions . in this method, one needs to define a specific transformation that maps to .although this method produces no rejection , finding the required transformation is difficult in general .in the reduced rejection method we sample according to a given function based on a proposal function .in contrast to the acceptance - rejection method , reduced rejection sampling does not require to enclose ( i.e. it allows for some ) .on the other hand , reduced rejection sampling requires some extra knowledge about the functions and .the reduced rejection sampling method can be applied to a wide range of sampling problems ( for both continuous and discrete random variables ) and in many examples is more efficient than customary methods ( three examples are provided in sections [ section : example1 ] , [ section : example2 ] and [ section : example3 ] ) . in particular ,reduced rejection sampling requires no pre - processing time and consequently is suitable for simulations in which is changing constantly ( see section [ section : comparison ] for an elaboration on this point and sections [ section : example2 ] and [ section : example3 ] for examples of simulations with fluctuating ) .also in situations where has singularities or is highly peaked in certain regions , reduced rejection sampling can be very efficient .the next section describes the reduced rejection sampling and proves its validity .section [ section : comparison ] compares reduced rejection sampling to other methods ( including other generalizations of acceptance - rejection ) , highlights advantages of reduced rejection sampling in comparison to other methods , and points out some of the challenges in applying reduced rejection sampling . in section [ section : example1 ] , reduced rejection sampling is demonstrated on a simple example . in section [ section : example2 ] , reduced rejection sampling is applied to an example motivated from plasma physics , for which other sampling methods can not be used efficiently . in section [ section :example3 ] , we make some comments on how to apply reduced rejection in the context of stochastic chemical kinetics . 
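For reference, here is a minimal sketch of the acceptance-rejection baseline described above, using a constant proposal that encloses the target density on [0, 1]; the function names and the example density are illustrative and not taken from the text.

```python
import numpy as np

def acceptance_rejection(p, q_max, n_samples, rng=None):
    """Sample from a density proportional to p(x) on [0, 1] using the constant
    proposal q(x) = q_max, which must enclose p (q_max >= p(x) for all x)."""
    rng = np.random.default_rng(rng)
    samples = []
    while len(samples) < n_samples:
        x = rng.uniform(0.0, 1.0)            # draw from the proposal q / I[q]
        if rng.uniform(0.0, q_max) < p(x):   # accept with probability p(x) / q_max
            samples.append(x)
    return np.array(samples)

# example: triangular density p(x) = 2x enclosed by the constant q_max = 2;
# by the area-ratio argument above, about half of the proposals are rejected
draws = acceptance_rejection(lambda x: 2.0 * x, 2.0, 10_000)
```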
in the appendix, we provide flow charts for the reduced rejection algorithm .consider a sample space with lebesgue measure on , and two functions .denote =\int_\omega q(x)d\mu(x ) , \qquad i[p]=\int_\omega p(x)d\mu(s).\ ] ] by sampling from according to we mean sampling from using probability distribution function ] , ] and ] . perform the following steps : * with probability -i[q])/i[p] ] ) , sample from according to . 1 .if , accept .2 . if , accept with probability .* if was not accepted , then sample a new value of from according to and accept .* algorithm ii : * < i[q]} ] .if , then part ( ii ) must have been selected , must have been sampled in ( ii ) and it must have been accepted in case ( ii.b ) .therefore , the probability of returning is \\pr[z\hbox { sampled in ( ii ) } ] \\pr[z \hbox { accepted in ( ii.b ) } ] & = & \frac{i[q]}{i[p]}\times\frac{q(z)d\mu(z)}{i[q]}\times\frac{p(z)}{q(z)}\nonumber \\ & = & \frac{p(z)d\mu(z)}{i[p]}.\label{equation : ismall}\end{aligned}\ ] ] also note that for every , after is selected in ( ii.b ) with probability }d\mu(x) \vert \mathcal{l} ] .this completes the proof for algorithm i. * proof for algorithm ii : * for each , show that the algorithm in algorithm ii returns with probability ] , the probability that it is not accepted in ( iii ) is . thus the total probability of not returning an element of in ( iii ) , which is the same as the probability of reaching ( iv ) , is = \int_{\mathcal{s}}\frac{q(x)-p(x)}{q(x)}\frac{q(x)}{i[q]}d\mu(x ) = \int_{\mathcal{s}}\frac{q(x)-p(x)}{i[q]}d\mu(x ) .\label{equation : notreturn}\ ] ] next suppose that .the probability that is accepted in ( ii ) is =\frac{q(z)d\mu(z)}{i[q]}. \label{equation : ai}\ ] ] for to be returned from ( iv.a ) , the algorithm must reach ( iv ) , then go to ( iv.a ) and then select in ( iv.a ) .this has probability \notag \\ = & \pr[\hbox{reach ( iv ) } ] \ \pr[\hbox{go to ( iv.a ) } ] \\pr[\hbox{z sampled in ( iv.a ) } ] \notag\\ = & \left(\int_{\mathcal{s}}\frac{q(x)-p(x)}{i[q]}d\mu(x)\right)\times \frac{\int_{\mathcal{l}}(p(x)-q(x))d\mu(x)}{\int_{\mathcal{s}}(q(x)-p(x))d\mu(x ) } \times \frac{(p(z)-q(z))d\mu(z)}{\int_{\mathcal{l}}(p(x)-q(x))d\mu(x ) } \nonumber \\ = & \frac{(p(z)-q(z))d\mu(z)}{i[q]}. \label{equation : bi}\end{aligned}\ ] ] now using equations and , the probability of returning in a cycle is = & \pr[z \hbox { returned from ( ii)}]+\pr[z \hbox { returned from ( iv.a)}]\notag\\ = & \frac{q(z)d\mu(z)}{i[q]}+\frac{(p(z)-q(z))d\mu(z)}{i[q]}\notag\\ = & \frac{p(z)d\mu(z)}{i[q]}. \label{equation : iilarge}\end{aligned}\ ] ] equations and imply that , whether or , the probability that is returned in a cycle is ] .consequently , probability that no sample point is returned in a cycle is /i[q] ] .actually it can be significantly better because ] , then algorithms i and ii are the same .one of the important features of reduced rejection sampling is that it requires no preprocessing time .this is particularly useful for _ dynamic simulation _ ; i.e. , simulation in which the probability distribution function may change after each sample ( see section [ section : example2 ] for an example from plasma physics ) . 
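The displayed expressions in the algorithm statements above were lost in extraction, so the sketch below is one consistent reading of algorithm i rather than a verbatim transcription. It assumes the case where the integral of p dominates (I[p] >= I[q]), writes S = {x : p(x) <= q(x)} and L = {x : p(x) > q(x)}, and assumes one can sample from the whole space according to q and from L according to p - q; all names are illustrative.

```python
import numpy as np

def reduced_rejection_algorithm_i(p, q, I_p, I_q, sample_q, sample_L, rng=None):
    """One reading of algorithm i (case I[p] >= I[q]).

    p, q      : target and proposal functions (q need not enclose p)
    I_p, I_q  : their integrals over the sample space
    sample_q  : draws x from the whole space with density q(x) / I[q]
    sample_L  : draws x from L = {p > q} with density (p(x) - q(x)) / int_L (p - q)
    Returns one sample distributed according to p(x) / I[p]; no draw is wasted.
    """
    rng = np.random.default_rng(rng)
    # part (i): with probability (I[p] - I[q]) / I[p], draw directly from L
    if rng.uniform() < (I_p - I_q) / I_p:
        return sample_L(rng)
    # part (ii): otherwise draw from the proposal q
    x = sample_q(rng)
    if p(x) > q(x):                      # (ii.1) x lies in L: accept
        return x
    if rng.uniform() < p(x) / q(x):      # (ii.2) x lies in S: accept w.p. p/q
        return x
    # replacement step: the rejected draw is replaced by a draw from L
    return sample_L(rng)
```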
for dynamic simulation ,fast discrete sampling methods such as marsaglia s table method or the alias method , are not suitable as they require preprocessing time after each change in .although , the acceptance - rejection method requires no preprocessing time and can be used for dynamic simulation , it may require changes in if changes , which is usually not difficult , and it becomes very inefficient when the ratio of the area under function to the area under proposal function is small .moreover , adaptive rejection sampling is not efficient , because the process of adapting to starts over whenever changes .the reduced rejection sampling method can be thought as an extension of the acceptance - rejection method . in particular when the proposal function encloses ( i.e. , so that ) the reduced rejection sampling method reduces to acceptance - rejection method .the advantage of reduced rejection sampling over acceptance - rejection method is that the proposal function does not need to enclose function ; i.e. , it allows for some .this is very useful in dynamic simulation as it can accommodate changes in without requiring changes in .moreover , reduced rejection sampling may result in less unwanted samples than acceptance - rejection does , especially if has singularities or is highly peaked .there are several challenges in implementing the reduced rejection sampling method .the main challenge is the need to sample from set according to and from set according to , which can be performed by various sampling methods .another challenge in using reduced rejection sampling is the need to know the values of ] and ( but note that the last value is only for algorithm ii ) . in many situations , these valuesare readily available or can be calculated during the simulation .in this section , reduced rejection sampling method is applied to a simple problem .let and sample according to {1-x}}\label{equation : ex1}\ ] ] which has singularities at 0 and 1 . using inverse transform sampling, it is easy to sample according to or {1-x} ] are all known .in this section , we apply reduced rejection sampling to an idealized problem motivated by plasma physics . as discussed in subsection [ subsection : numerical result ] , the unique features of this problem makes other sampling methods inefficient to use . the example presented here is a simplified version of simulation for recombination by impact of two electrons with an ion , in which one of the electrons is absorbed into the atom and the other electron is scattered . for incidentelectron energies and , the recombination rate is proportional to , which can become singular if electrons of low energy are present .this is an obstacle to kinetic simulation of recombination by electron impact in a plasma .our goal is to simulate the evolution of the following system : consider particles labeled .to each particle we associate a number , called the state of particle ( and corresponding to electron energy in the recombination problem ) . occasionally , where it does not cause confusion , we use to refer to particle .we refer to the set as the configuration of the system . for every pair of states and , is a random variable with an exponential distribution with parameter , in which is a fixed constant between 0 and 1 . 
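Referring back to the toy example of section [ section : example1 ] above: the exact density there is only partly legible after extraction (a sum of two terms with integrable singularities at 0 and 1), but the inverse-transform step it relies on can be illustrated for a density proportional to 1/sqrt(x); the helper name below is illustrative.

```python
import numpy as np

def sample_inverse_sqrt(n_samples, rng=None):
    """Inverse-transform sampling for p(x) = 1 / (2 sqrt(x)) on (0, 1].

    The CDF is F(x) = sqrt(x), so F^{-1}(u) = u**2: squaring a uniform draw
    gives samples from a density with an integrable singularity at x = 0.
    A term proportional to a root of (1 - x) can be handled the same way.
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(0.0, 1.0, size=n_samples)
    return u ** 2
```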
is the time for interaction between particle and which randomly occurs with rate .after an interaction occurs , say for the pair , the values of states and are replaced by new values and ; consequently , the distribution of changes if either of and is equal to or .we will consider a simple updating mechanisms for the states after each interaction . in the simulations presented below ,the updated values of and are chosen independently and uniformly at random from , without dependence on and .this choice has been made for simplicity and because the stationary distribution can be calculated for this choice ( see section [ subsection : theoretical result ] ) , but we expect that reduced rejection sampling would work equally well for more complex interaction rules .indeed the algorithm [ algorithm : process ] described below and the more detailed algorithm presented in section [ subsection : use of reduced rejection ] do not depend on the interaction rules .first we make some notation and observations . set and .let denotes an exponential random variable with parameter ( with rate ) ; then for any scalar .we will use in the following algorithm , which is a variant of the kinetic monte carlo ( kmc ) algorithm ( also known as the residence - time algorithm or the n - fold way or the bortz - kalos - lebowitz ( bkl ) algorithm ) , that simulates the system described above .this algorithm chooses interactions , by choosing two particles separately out of the number of particles , rather than choosing a pair of particles out of the number of pairs .[ algorithm : process ] 1 .start from 2 .choose time by sampling from an exponential distribution with rate .3 . choose index with probability .4 . choose index with probability .5 . at time interaction between particles and occurs .update states and according to the updating mechanism and the update value of .set and start over from 2 .we use reduced rejection sampling in subsection [ subsection : numerical result ] to perform steps 3 and 4 in the above algorithm .we also explain why other methods of sampling would be inefficient in this circumstances . to verify that our simulation is working properly, we perform the following test .let be a real - valued function on configuration space , with expectation ] analytically , as shown in subsection [ subsection : theoretical result ] .consequently , the difference between the numerical and analytic results provides a measure of the accuracy of the simulation as discussed at the end of subsection [ subsection : numerical result ] .think of the system s evolution as a random walk over the configurations of the system .suppose the updating process is that if states and interact , then states and are chosen , independently , uniformly at random from . in this section ,we find the stationary distribution for this random walk and the value of ] when . 
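Here is a minimal sketch of one pass through algorithm [ algorithm : process ] above. The interaction-rate formula was lost in extraction, so the factorized form r(i, j) = (s_i s_j)**(-alpha) is assumed purely for illustration (it is consistent with choosing the two particles independently); the uniform update rule is the one described in the text.

```python
import numpy as np

def kmc_step(s, alpha, t, rng):
    """One pass of the residence-time loop of algorithm [algorithm:process],
    assuming the factorized pair rate r(i, j) = (s_i * s_j)**(-alpha)."""
    w = s ** (-alpha)                  # per-particle weights
    W = w.sum()
    R = W ** 2                         # total rate (self-pairs neglected)
    t += rng.exponential(1.0 / R)      # step 2: waiting time ~ Exp(total rate)
    i = rng.choice(len(s), p=w / W)    # step 3: particle i with prob. w_i / W
    j = rng.choice(len(s), p=w / W)    # step 4: particle j the same way
    # step 5: the uniform update rule described in the text (i == j is rare
    # for large n and could simply be redrawn)
    s[i], s[j] = rng.uniform(0.0, 1.0, size=2)
    return t

rng = np.random.default_rng(0)
states = rng.uniform(0.0, 1.0, size=1000)
t = 0.0
for _ in range(10):
    t = kmc_step(states, alpha=0.5, t=t, rng=rng)
```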
*=\frac{\alpha+1}{\alpha+3}(n-2)+\frac{2}{3} ] after each interaction and keep track of set by comparing the updated values of s to their corresponding values of s .moreover , the size of set changes by at most 2 after each interaction ( but it can also decrease after some interactions ) .we use marsaglia s table method to sample according to s .since we do not update s after each interaction , the preprocessing time in marsaglia s table method is only required for the first sampling and not for the subsequent samplings .to sample from set according to , we use acceptance - rejection with uniform distribution for the proposal distribution .as long as the size of set is not too big , the sampling from is not very time consuming . to prevent from getting too large, we reset the values of s to , which sets to be empty , whenever the size of exceeds a predetermined number .the size of is important for the performance of the algorithm. if is too small , then there are many updates of the s , each of which requires preprocessing time for marsaglia s table method . on the other hand , if is too big , then is large and costly to sample from by acceptance - rejection .our computational experience shows that setting equal to a multiple of is a good choice .it might be better for the reinitialization criterion to be based on the efficiency of the sampling from ( i.e. , the fraction of rejected samples when using acceptance rejection on ) , rather than the size of .we simulated the evolution of the system under the conditions outlined in subsection [ subsection : theoretical result ] with , and .we start with a random configuration at time .the simulation is based the reduced rejection sampling method , using marsaglia s table method and the acceptance - rejection method as described above .after each interaction , we evaluate function and take the average to get an estimate for ] from reduced rejection sampling with those from the acceptance - rejection method .the results of figure [ figure : linear10000result ] show excellent agreement between the values of ] using different number of interactions . here , , and .also for the reduced rejection sampling .the theoretical value of ] after interactions using reduced rejection sampling and acceptance - rejection methods were , respectively , 5994.59 and 5996.35 .the reported result is the average of 5 independent runs.,title="fig:",width=453 ] [ ! 
htp ] , and .also for the reduced rejection sampling .the reported processing time is the average of 5 independent runs.,title="fig:",width=453 ]in this section we describe how we can use reduced rejection in the context of stochastic chemical kinetics .stochastic simulation in chemical kinetics is a monte carlo procedure to numerically simulate the time evolution of a well - stirred chemically reacting system .the first stochastic simulation algorithm , called the direct method , was presented in .the direct method is computationally expensive and there have been many adaptations of this algorithm to achieve greater speed in simulation .the first - reaction method , also in , is an equivalent formulation of the direct method .the next - reaction method is an improvement over the first - reaction method , using a binary - tree structure to store the reaction times .the modified direct method and sorting direct method speed up the direct method by indexing the reactions in such a way that reactions with larger propensity function tend to have a lower index value .recently , some new stochastic simulation algorithms , called partial - propensity methods , were introduced that work only for elementary chemical reactions ( i.e. reactions with at most two different reactants ) ( see ) .nevertheless , note that it is possible to decompose any non - elementary reaction into combination of elementary reactions .there are also approximate stochastic simulation algorithms , such as tau - leaping and slow - scale , that provide better computational efficiency in exchange for sacrificing some of the exactness in the direct method ( see and the references therein for more details ) .next we give a brief review of stochastic simulation in chemical kinetics .an excellent reference with more detailed explanation is . using the same notation and terminology as in ,consider a well - stirred system of molecules of chemical species , which interact through chemical reactions .let denote the number of molecules of species in the system at time .the goal is to estimate the state vector given the system is initially in state .similar to section [ section : example2 ] , when the system is in state , the time for reaction to occur is given by an exponential distribution whose rate is the propensity function .when reaction occurs , the state of the system changes from to , where is the change in the number of molecules when one reaction occurs . estimating the propensity functions in general is not an easy task . as noted ,the value of the propensity functions depend on the state of the system .for example , if and are , respectively , the unimolecular reaction and bimolecular reaction , then and for some constants and .therefore , the propensity functions of the reactions are changing throughout the simulation .moreover , if for some chemical species the magnitude of their population differ drastically from others , we expect the value of propensity functions to be very non - uniform. for every state , define to simulate the chemical kinetics of the system the following algorithm is used , which resembles algorithm [ algorithm : process ] in section [ section : example2 ] . [ algorithm : chemicalprocess ] 1 .start from time and state .2 . choose time by sampling from an exponential distribution with rate .3 . choose index with probability .4 . at time reaction occurs .5 . update time , state and start over from 2 . 
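A minimal sketch of the direct method summarized in algorithm [ algorithm : chemicalprocess ] above; the toy reaction network, rate constants, and propensity forms are illustrative and not taken from the text. Step 3 is written here with a categorical draw, which is equivalent in distribution to the cumulative-sum inversion recalled in the next paragraph.

```python
import numpy as np

def direct_method(x0, stoich, propensities, t_max, rng=None):
    """Gillespie's direct method, following algorithm [algorithm:chemicalprocess].

    x0           : initial copy numbers, shape (n_species,)
    stoich       : state-change vectors nu_j, shape (n_reactions, n_species)
    propensities : function x -> array of propensities a_j(x)
    """
    rng = np.random.default_rng(rng)
    t, x = 0.0, np.array(x0, dtype=float)
    history = [(t, x.copy())]
    while t < t_max:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0.0:
            break                              # no reaction can fire any more
        t += rng.exponential(1.0 / a0)         # step 2: exponential waiting time
        j = rng.choice(len(a), p=a / a0)       # step 3: reaction j w.p. a_j / a0
        x += stoich[j]                         # step 4: apply state-change vector
        history.append((t, x.copy()))
    return history

# toy network: S1 -> S2 and S1 + S2 -> 2 S2 (rate constants are placeholders)
stoich = np.array([[-1, 1],
                   [-1, 1]])
propensities = lambda x: np.array([0.1 * x[0], 0.005 * x[0] * x[1]])
trace = direct_method([100, 5], stoich, propensities, t_max=50.0)
```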
in the original direct method step 3 in the above algorithm [ algorithm : chemicalprocess ] is performed by choosing number uniformly at random in the unit interval and setting however , when we have many reactions with a wide range of propensity function values presented in the system , a scenario that is very common in biological models , the above procedure of using partial sums becomes computationally expensive . as noted earlier , some methods , such as the modified direct method and sorting direct method , index the reactions in a smart way so that they can save on the average number of terms summed in equation and consequently achieve computational efficiency .we propose a different approach to performing step 3 in algorithm [ algorithm : chemicalprocess ] using the acceptance - rejection or reduced rejection method .the approach is very similar to what was done in section [ section : example2 ] . to be specific , we can use acceptance - rejection for step 3 in the following way : let until an index is accepted , select index uniformly at random from and accept it with probability ; otherwise , discard and repeat .when an index is accepted step 3 in algorithm [ algorithm : chemicalprocess ] is completed . typically for chemical reactions ,most of s are zero ; therefore , we can efficiently update the value of at each iteration of algorithm [ algorithm : chemicalprocess ] . however , as in section [ section : example2 ], if the values of s are very non - uniform ( for example , when the population of some chemical species differ drastically from that of other species in the system ) the acceptance - rejection method becomes inefficient due to rejection of many samples . in these circumstances ,the reduced rejection algorithm can be readily used in a very similar way as it was used in section [ section : example2 ] .we expect that the use of the reduced rejection algorithm in these circumstances would greatly improve the computational efficiency of the exact stochastic simulation algorithms .in this paper we introduce a new reduced rejection sampling method that can be used to generate independent samples for a discrete or continuous random variable .the strength of this algorithm is most evident for applications in which acceptance - rejection method is inefficient ; namely , the probability distribution of the random variable is highly peaked in certain regions or has singularities .it is also useful when the probabilities are fluctuating , so that discrete methods that requiring preprocessing are inefficient . in particular ,the reduced rejection sampling method is expected to perform well on kinetic simulation of electron - impact recombination in a plasma , which is difficult to simulate by other methods .the preliminary examples in this paper are meant to illustrate these advantages of the reduced rejection sampling method .they provide evidence of improvement in computation time using the reduced rejection sampling versus acceptance - rejection method .these examples also provide some insights on implementation of the method .one possible direction for future research is the nested use of reduced rejection sampling methods . for the most difficult step - sampling from according to - we propose to apply the reduced rejection sampling method again using a new proposal function .in essence , this would use one reduced rejection sampling method inside another reduced rejection sampling method .[ ! htp ] \geq i[q]$].,title="fig:",width=604 ] g. marsaglia and w. w. 

tsang , `` a fast , easily implemented method for sampling from decreasing or symmetric unimodal density functions '' , _ siam journ . scient . and statis . computing _ , * 5 * , 2 ( 1984 ) , 349 - 359 . j. m. mccollum , g. d. peterson , c. d. cox , m. l. simpson , n. f. samatova , `` the sorting direct method for stochastic simulation of biochemical systems with varying reaction execution behavior '' , _ comput . biol . chem . _ , * 30 * ( 2006 ) , 39 - 49 .
|
in this paper we present a method to generate independent samples for a general random variable , either continuous or discrete . the algorithm is an extension of the acceptance - rejection method , and it is particularly useful for kinetic simulation in which the rates are fluctuating in time and have singular limits , as occurs for example in simulation of recombination interactions in a plasma . although it depends on some additional requirements , the new method is easy to implement and rejects less samples than the acceptance - rejection method .
|
recent supernova ( sn ) cosmological measurements have greatly reduced both the statistical and systematic uncertainty in our knowledge of the accelerated expansion of the universe ( the latest such efforts are presented in * ? ? ?* ; * ? ? ?* ; * ? ? ?despite these improvements , the frameworks currently in use are not statistically optimal .as we build larger supernova samples , these frameworks will become increasingly inadequate .this paper offers an improved technique for deriving constraints on cosmological parameters from sn measurements ( peak magnitude , light - curve shape and color , host - galaxy mass ) .although this paper uses sn cosmology as an example , researchers from other fields may find this type of framework useful .our work is particularly relevant for researchers confronting partially known uncertainties , selection effects , correlated measurements , and outliers .the rest of this introduction summarizes the problems with existing methods and describes in general terms the basic requirements for greater accuracy .section [ sec : proposedframework ] describes the framework in detail , and section [ sec : simulateddata ] quantitatively examines the performance of the method on simulated data . in section [ sec : realdata ] , we demonstrate the performance on real sn observations , then conclude in section [ sec : conclusions ] with future directions .current cosmological constraints are derived from time series of photometric measurements in multiple bands ( light curves ) and spectroscopy . before obtaining sn distances , the light curves must be fit with a model such as salt2 .salt2 models each photometric observation in the observer frame with a combination of a mean sn spectral energy distribution ( sed ) ( scaled by a normalization parameter ) , the first component of the sed variation ( scaled by a shape parameter ) , and the mean color variation in magnitudes ( which is scaled by a color parameter ) .the template is also shifted in time to match the observations with a date - of - maximum parameter .( in this work , we restrict our attention to the salt2 empirical model , which is the best validated and most widely used such model . ) and trained the salt2 model in an initial ( separate ) step before the light - curve fitting , using a dataset with well - measured spectra and light curves that partially overlaps with the sn data used for cosmological distances .after measuring a light - curve shape parameter ( ) , a light - curve color parameter ( ) , and a light - curve normalization ( , the rest - frame -band flux ) , we construct a distance modulus estimate as and are the light - curve shape standardizationcoefficient and color standardizationcoefficient , respectively .they quantify the empirical relations that sne with broader rest - frame optical light curves or bluer rest - frame colors are intrinsically more luminous . captures residual luminosity correlations with host - galaxy mass , discussed more in section [ sec : hostgalaxy ] . is the absolute magnitude in the rest - frame -band ( for a given ) .all of these coefficients are nuisance parameters for cosmological purposes. we can then construct a to use for cosmological fitting ( for illustrative purposes only , we assume here that all of the sn observations are uncorrelated ) : captures the measurement uncertainties from the light - curve fit .salt2 handles -corrections implicitly , as it is a rest - frame spectral model that fits photometry in the observer frame . 
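The displayed forms of equations [ eq : mbcorr ] and [ eq : chi2fit ] did not survive extraction. The sketch below uses the standard salt2/tripp-style sign convention, mu = m_B - M_B + alpha*x1 - beta*c + delta*(host-mass term), which may differ in detail from the paper's equation; values, names, and array shapes are illustrative.

```python
import numpy as np

def distance_moduli(m_B, x1, c, high_mass, alpha, beta, delta, M_B):
    """Tripp-style standardized distance modulus (cf. eq. [eq:mbcorr]):
    light-curve shape and color corrections plus a host-mass step, with
    high_mass a 0/1 indicator for hosts above the mass split."""
    return m_B - M_B + alpha * x1 - beta * c + delta * high_mass

def chi2(mu, mu_cosmo, sigma_fit, sigma_sample, sigma_lens_pv):
    """Uncorrelated chi^2 of eq. [eq:chi2fit]: light-curve-fit, unexplained
    ("sample"/intrinsic) and lensing + peculiar-velocity dispersions add in
    quadrature; mu_cosmo is the model distance modulus at each redshift."""
    var = sigma_fit**2 + sigma_sample**2 + sigma_lens_pv**2
    return np.sum((mu - mu_cosmo)**2 / var)
```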
limitations of the salt2 model are important to add , such as its inability to simultaneously model intrinsic color variation and extinction with one parameter ( discussed more below ) . while salt2 does include a simple estimate of the dispersion around its model , including this dispersion does not give a per degree of freedom of 1 when performing cosmological fits . is a term that captures this sample - dependent unexplainedvariance in the sn distribution ( sometimes called `` intrinsic dispersion '' ) .finally , captures dispersion due to gravitational lensing and incoherent peculiar velocities . in current cosmological analyses ( those since* ) , an iteration is performed between estimating the standardizationcoefficients , estimating , and rejecting outlying sne . , and we address it now to contrast against the proposed framework for , e.g. , fitting nonlinear standardizations . finding the and values that minimize this is not the same as minimizing the dispersion of the hubble diagram . nordoes it eliminate the correlation between and hubble diagram residuals .the residuals at the best - fit and are expected to show a correlation with the observed , as uncertainties will preferentially scatter the observed values of away from the distribution means and it is only the values without this scatter that should be uncorrelated with hubble diagram residuals . ]there are several ways in which sn datasets are imperfect .outliers , selection effects , nonlinear correlations , partially known uncertainties , and heterogeneity each complicate analyses .the proposed framework will address each of these complications , but first we discuss the limitations of current work . obtaining either spectra or very high - qualitylight curves are the only ways to ensure a transient source is a sn ia . even at moderate redshifts ,both of these techniques are observationally expensive , and non - ia sne will inevitably contaminate the sample .a similar issue arises if sne are of type ia , but are peculiar , or the redshift is incorrect .the analysis should thus accommodate some amount of non - ia sne , which have dissimilar colors , decline rates , and absolute magnitudes .the iterative outlier fit described above converges well for the sorts of contaminating distributions we expect when the samples are relatively pure .when the samples are large ( several hundred ) , or impure ( )conditions the field is beginning to face the outliers can dominate the other sources of uncertainty in the fit . presented a powerful bayesian technique for simultaneously modeling the distributions of normal sne and outliers , but does not confront many complexities of the data , including the luminosity standardizationsand selection effects .selection effects , the tendency for surveys to select against the faintest sne , sculpt the observed distribution of sne ( this is frequently referred to as malmquist bias , * ? ? ?if not taken into account , this selection will bias both the cosmological parameter estimation and the standardizationcoefficients .there are sn - to - sn variations in the selection efficiency , even within the same survey and for the same redshift ( e.g. , due to seeing , host - galaxy contamination , or moonlight ) .these are deterministic but difficult - to - model sources of variation .noise also plays a role , as a sn right at the detection threshold may stochastically scatter above or below it .thus , the detection efficiency transitions to zero smoothly as a function of apparent brightness . 
in most sn cosmological analyses ( based on equation [ eq : chi2fit ] ) ,the treatment of selection effects is performed outside the statistical framework , in the form of survey simulations followed by an ad - hoc redshift - dependent adjustment of the to approximately cancel the estimated bias .bayesian analyses face a related challenge : selection effects also influence the distribution of light - curve width and color with redshift , which can amplify selection effects if not modeled . however , these frequentist and bayesian analyses both require knowledge of the true population distribution ( as a function of redshift ) to get accurate results .the and standardizationsin equation [ eq : mbcorr ] are linear .however , nonlinear decline - rate and color standardizationsare statistically justified . , but virtually all the sne used for cosmology are bluer than this . ]our union2 result was derived from subdividing the sample by the best - fit latent variables ( see section [ sec : latentvariables ] ) .the subdivisions showed very similar cosmological results , but subdivision tests are statistically weak ( statistical uncertainties on the difference are times larger than the uncertainty on the whole sample ) . including these nonlinear standardizationsin the fit would be preferable for evaluating their impact andwould remove any bias created by the assumption of linearity .unfortunately , neither the , nor the frameworks were able to incorporate these nonlinear standardizationsin a fully self - consistent way .type ia hubble diagrams show more dispersion in distance than can be explained with measurement uncertainty alone .as noted above , the framework must include a model of the unexplaineddispersion , and should not assume that all of this dispersion is in the independent variable ( magnitude ) .in other words , is really .sn variation ( beyond light - curve width and color ) , is presently only crudely modeled by current light - curve fitters and especially affects the measured color , resulting in color measurements that mix extinction and sn differences in only partially understood ways .when using equation [ eq : chi2fit ] , there is no way to fit for the unexplaineddispersion simultaneously with the other parameters .improvement is needed here , as the unexplaineddispersion interacts with the modeling of selection effects , the outlier rejection , and fitting and .currently , we are constrained to use ad - hoc methods , such as performing many fits with a randomized unexplained - dispersion matrix , and computing the distributions of cosmological parameters fit to fit .the impact of these variations is currently subdominant to the total uncertainty , but this may not be true in future samples and a precise way to evaluate the cosmological impact is desirable . took a step in the right direction by using a bayesian hierarchical model to simultaneously fit for cosmological parameters and unexplaineddispersion , but their model can not accommodate calibration uncertainties , outliers , nonlinear standardizations , or selection effects .future sn cosmology analyses will confront additional difficulties , such as heterogeneity .even very homogeneous sn imaging surveys ( such as the dark energy survey , * ? ? ?* ; * ? ? 
?* ) will gain heterogeneity as expensive observational resources , such as near - ir measurements or high - quality spectroscopy , will only be available for a subset of objects .a frequentist , `` object - by - object '' uncertainty analysis such as equation [ eq : chi2fit ] can not make efficient use of this information . in light of these problems, this paper offers an improved technique incorporating a more sophisticated , bayesian model of the data . properly making use of heterogeneous information ( like measurements that are only available for a subset of objects ) requires a model of the sn population in which the parameters of the distribution are treated as unknowns .in addition , the model of the unexplaineddispersion should allow for uncertainty in both size and functional form .we discuss the details of our procedure for parameterizing these possibilities in section [ sec : intdisp ] .only a bayesian framework can accommodate more exotic possibilities , such as very large numbers of nuisance parameters ( each possibly non - zero and having initially unknown size ) , as it can make use of a hierarchical prior around zero .one could imagine many possibilities for these parameters ( e.g. , color - standardizationcoefficient , , as a function of host - galaxy - inclination angle ) ; we do not pursue these in this work , but we do note that they could be built into the general framework. however , not all of the improvements we propose require bayesian statistics ; some are due to the improvements to the data likelihood ( and could thus fit into a frequentist framework ) .first , we introduce a modified approach to fitting for residual correlations with host - galaxy mass , described in section [ sec : hostgalaxy ] .second , like , we handle outliers with a mixture model , described in section [ sec : nonia ] , which offers improved robustness .third , to account for selection effects , our proposed framework uses a probabilistic model of selection as described in section [ sec : selection ] , modifying the classical likelihood to include selection directly .this cleanly estimates and marginalizes the hyperparameters of the true population distributions simultaneously with other parameters , propagating all uncertainties .fourth , in section [ sec : indpriors ] , we discuss our approach to fitting for the changing sn independent variable distribution with redshift ( population drift ) simultaneously with other parameters . finally , our new framework can accommodate nonlinear standardizations(section [ sec : brokenlinear ] ) .a frequentist framework can , in principle , include nonlinear standardizations , but it is computationally challenging .minimum for each latent variable and recording the best one . 
] both the old and new techniques can handle correlations due to systematic uncertainties ( such as correlated peculiar velocities and photometric calibration uncertainties ) , assuming the sizes of these uncertainties are known .the modifications to the old framework to include correlations are detailed in .we implement these correlations with nuisance parameters , as discussed in section [ sec : systerrs ] .the model must also allow for population drift ( whether due to selection effects , or changes in the sn population with redshift ) , or risk a significant bias on cosmological parameters .population drift can also be handled well with either framework , as equation [ eq : chi2fit ] handles each object independently ( if the measurements are uncorrelated object to object ) and our new framework solves for population drift simultaneously with other parameters ( section [ sec : indpriors ] ) .there are two disadvantages to our approach .first , the more sophisticated model requires more cpu time ( now measured in many hours , rather than minutes ) .second , it is not necessarily possible to generate a unique distance modulus for each sn as in equation [ eq : mbcorr ] .the true - independent - variable priors ( section [ sec : indpriors ] ) must be treated in bins smaller than the other parameters of interest ( e.g. , bins of in redshift for cosmological parameters ) .however , no one set of assumptions will exactly cover all use cases ( e.g. , tests for isotropy ) .approximate sets of sn distances are possible , but we leave this for future work .our proposed framework surpasses previous analysis efforts by bringing together components that simultaneously address each of the limitations discussed above .we call our framework unity ( unified nonlinear inference for type - ia cosmology ) .the required model needs to contain thousands of nonlinear fit parameters ( as motivated in the following sections ) , which poses a problem for many statistical techniques . for example, there is no practical way to analytically marginalize over every parameter except the parameter(s ) of interest .we thus must draw random samples from the posterior distribution ( monte carlo sampling ) , and use those samples to estimate the posteriors .a natural tool for this sampling is hamiltonian monte carlo ( hmc ) .hmc samples efficiently , with short correlation lengths sample to sample , even for large numbers of fit parameters . 
to this end , we use stan through the pystan interface .stan automatically chooses a mass matrix , and speeds up sampling efficiency even further by using a variant of hmc sampling called the `` no - u - turn '' sampler .stan also incorporates automatic , analytic differentiation for computing the gradient of the log - likelihood , making the implementation of the model simple and readable .we show a probabilistic graphical model of our framework in figure [ fig : dag ] , and show a table of parameters in table [ tab : paramlist ] .llr[h ] absolute magnitude for & & [ sec : currentapproach ] + cosmological model ( flat ) & & [ sec : currentapproach ] + latent variables & , & [ sec : latentvariables ] + host - mass standardizationcoefficients & , & [ sec : hostgalaxy ] + outlier distribution & , & [ sec : nonia ] + sample limiting magnitudes ( fixed ) & , , , & [ sec : selection ] + latent - variable hyperparameters & , , , , & [ sec : indpriors ] + light - curve color and shape standardizationcoefficients & , , , & [ sec : brokenlinear ] + ` sample' ( unexplained ) dispersion & , , , , & [ sec : intdisp ] + systematic uncertainties & & [ sec : systerrs ] [ tab : paramlist ] offers an excellent discussion of linear regression with error bars in both dependent and independent variables .we briefly summarize here . consider the case of fitting a straight line ( slope , intercept ) in two dimensions ( , the dependent variable vs , the independent variable ) with uncertainties in both and ( and , assumed uncorrelated and gaussian for simplicity ) .the generative model for an observed ( `` '' ) is and similarly for where the value for the measurement if no uncertainty is present , and . when there is no significant uncertainty in , equation [ eq : xtrueexample ] is unnecessary , and we have a simple least - squares regression .when uncertainty is present in both variables , we can substitute for , but . for a fit with two independent variables ,there are two latent variables representing the true position on the line ( now a plane ) .the same logic holds for more than one observation , in which case there are ( number of observations)(number of independent variables ) latent variables . for the standard and standardizations , this results in additional parameters. parameters .because of the potential of misassociation , this situation might be faced by surveys measuring many host - galaxy redshifts after the sne have faded , such as the dark energy survey . ] in a frequentist analysis , if measurement uncertainties are believed to be gaussian and the and standardizationsare linear , the likelihood can be analytically maximized for each of these parameters .this technique enables the proper handling of these nuisance parameters without explicitly including them in the fit .similarly , in a bayesian analysis , if the measurement uncertainties are gaussian ( or a gaussian mixture ) , the standardizationsare linear , and the priors on these parameters are flat or gaussian ( or a gaussian mixture ) , these nuisance parameters can be analytically marginalized , providing a similar computational efficiency boost ( as was done in * ? ? ?in this work , we violate these assumptions ; therefore , we must keep these nuisance parameters in the fit ( and ) . 
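Here is a minimal pystan sketch of the errors-in-variables regression of equations [ eq : xtrueexample ] and following, with the latent true values kept explicitly in the fit and the hyperparameters of their population fit simultaneously. This is a toy stand-in rather than the unity model itself; it assumes the pystan 2.x interface, and the synthetic data are purely illustrative.

```python
import numpy as np
import pystan  # pystan 2.x interface assumed

eiv_code = """
data {
  int<lower=1> N;
  vector[N] x_obs;            // observed independent variable
  vector[N] y_obs;            // observed dependent variable
  vector<lower=0>[N] x_err;   // known measurement uncertainties
  vector<lower=0>[N] y_err;
}
parameters {
  real slope;
  real intercept;
  real x_pop_mean;            // hyperparameters of the true-x population
  real<lower=0> x_pop_sd;
  vector[N] x_true;           // one latent true value per object
}
model {
  x_true ~ normal(x_pop_mean, x_pop_sd);               // population prior
  x_obs  ~ normal(x_true, x_err);                      // measurement model
  y_obs  ~ normal(slope * x_true + intercept, y_err);  // regression on x_true
}
"""

# synthetic data, purely for illustration
rng = np.random.default_rng(1)
N = 50
x_true = rng.normal(0.0, 1.0, N)
x_err = np.full(N, 0.3)
y_err = np.full(N, 0.2)
data = dict(N=N,
            x_obs=x_true + rng.normal(0.0, 1.0, N) * x_err,
            y_obs=1.5 * x_true - 0.3 + rng.normal(0.0, 1.0, N) * y_err,
            x_err=x_err, y_err=y_err)

model = pystan.StanModel(model_code=eiv_code)   # compiles the Stan model
fit = model.sampling(data=data, iter=2000, chains=4)
```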
first found evidence that hubble residuals in current light - curve fitters are correlated with host - galaxy environment .this finding later reached high statistical significance in larger samples .the current method for including this effect in cosmological analyses fits two separate absolute magnitudes for low - mass - hosted ( ) and high - mass - hosted galaxies . found evidence that most of this effect is due to the age of the progenitor system ( see also indirect evidence in * ? ? ? * ) .this effect , confirmed independently by , implies that sne hosted by high - stellar - mass galaxies may become more like low - stellar - mass - hosted sne at higher redshift ( when progenitors were young in all galaxies ) .however , newer versions of salt2 combined with different sample selections may not show as strong an age effect .we therefore do not assume a constant mass - standardization( ) ; instead we use a modified version of the model in ( similar to the model of * ? ? ?* ) \;. \label{eq : deltaofz}\end{aligned}\ ] ] those authors proposed host - mass - standardizationevolution predicts the mass - standardizationcoefficient will approach zero at high - redshift ; we instead assume it smoothly approaches a possibly non - zero quantity , .we take a flat prior on from 0 to 1 , allowing the mass standardizationto be constant or declining with redshift , and spanning all of the claims in the literature . for non - ia contamination, we use the mixture - model framework of .this framework models the observed distribution around the modeled mean as a sum of gaussians , where at least one gaussian ( the normal ia distribution ) is tightly clustered , and an outlying distribution is comparatively dispersed . although the assumption of a gaussian contaminating distribution is a strong one , it makes little difference in practice .as the outlying distribution is broader than the inlying distribution , any outlying point will be treated as an outlier . for the relatively pure spectroscopicallyconfirmed sn datasets in use today , modeling the outlying distribution accurately has little impact on the rest of the parameters in the model .because of this , and as a test of the framework , we perform our fits assuming an outlier distribution that is a unit multinormal ( ) in ( centered on the ia distribution for that redshift ) , which is different from the simulated data we generate to test the framework ( section [ sec : simulateddata ] ) .the relative normalizations of the core and outlying distributions can be chosen object by object ( from spectroscopic or other classification evidence ) , or set to be the same for every object .we fit for the fraction of outliers assuming it is the same for all objects ( ) , and place a broad log - normal prior on this quantity of ( an outlier fraction of plus or minus 50% ) .these assumptions work well with union2.1 , as discussed in section [ sec : modelwithoutl ] , but of course they can be adjusted for other datasets .if the luminosity distribution of sne ia turns out to be significantly non - gaussian ( for example , the bimodal model of * ? ? 
?* ) , additional gaussian components can be added ( with redshift - dependent normalizations ) to give smaller uncertainties and capture possible population drift .we leave this and more complex models of the non - ia distribution for future work , but these extensions fit easily into this framework .we present the details of our selection model in appendix [ sec : detailsselectioneffects ] , but outline the important points here .the standard method for incorporating a selection cut is to truncate the data likelihood at the cut and divide by the selection efficiency ( e.g. , , see also for a discussion of selection effects and non - detections in the context of linear regression ) . in sn cosmology ,the truncation is not sharp , but is instead probabilistic , as discussed in section [ sec : currentlimits ] .we assume that the observation likelihood is truncated by an error function .far from the selection limit , a sn is found or missed with probability one or zero ; for sne near the selection limit , the probability transitions smoothly .an error function reasonably matches the efficiency curves of e.g. , .surveys also do not select only on one measured variable .we assume our cut is a plane in three - dimensional space , spanning the dependent variable and both independent variables . in our example, this is magnitude ( salt2 ) , shape ( salt2 ) , and color ( salt2 ) ; sne with are less than 50% likely to be found ( and more than 50% likely to be found if ) .the width of the cut is ; sne observed at are considered 16%(+ ) or 84%( ) likely to be found . and are the primary variables responsible for selection effects in sn searches .for example , sne found in the rest - frame -band have a limit in ( ) , while sne selected in the rest - frame -band have a limit in or .we note that for selection in just , bluer ( more - negative ) and slower - declining sne ( larger ) will be selected , as these correlate with brighter .that is , the only effect that a magnitude - based selection ignores is that slower - declining sne are more likely to be found , irrespective of the maximum brightness , as they stay above the detection threshold longer .a simple simulation shows that this effect is very small compared to selection on , even for cadences as large as ten rest - frame days .another bias related to selection effects is the bias due to larger uncertainties on fainter sne ( e.g. , * ? ? ?simple simulations show that our bayesian framework has much less susceptibility to this bias , and the uncertainty bias is much smaller than the one due to missing faint sne altogether .as this is a bayesian framework , we must select priors on the true and latent parameters ( see for a discussion of gaussian priors , and for a gaussian - mixture prior ) .these priors must be chosen very carefully .if the prior mean is wrong , then every distance will also be incorrect in a correlated direction .the variance of the prior has an impact as well .if the prior variance is larger than the population variance , then the true latent parameters will be scattered about the mean more than they should , and the slope of the line will be biased towards zero .the converse will bias the slope of the lines away from zero .the mean and variance of the prior are the most important parameters to estimate accurately , thus gaussians are normally adequate . 
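the slope bias described in the last paragraph can be reproduced with a few lines of simulation . this is a toy demonstration with made-up numbers , not part of the analysis : treating noisy measurements of the independent variable as exact ( equivalent to an overly wide prior on the latent values ) attenuates the fitted slope , while shrinking each measurement by the correct ratio of prior to total variance recovers it .

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_slope = 100_000, 2.0
tau, sigma_x = 1.0, 0.5          # population (prior) width and measurement error on x

x_true = rng.normal(0.0, tau, n)
x_obs = x_true + rng.normal(0.0, sigma_x, n)
y_obs = true_slope * x_true + rng.normal(0.0, 0.1, n)

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

naive = ols_slope(x_obs, y_obs)                    # attenuated towards zero
shrunk = x_obs * tau**2 / (tau**2 + sigma_x**2)    # posterior mean of x_true given x_obs
corrected = ols_slope(shrunk, y_obs)               # close to the true slope
print(naive, corrected)
```

with the numbers above the naive slope comes out near 1.6 instead of the true 2.0 , while the corrected one comes back to roughly 2.0 , which is the bias direction described in the text .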
in sn cosmology, these priors must also be redshift - dependent as the sn population can drift with redshift .the optimal way to ensure the proper size and redshift - dependence of the priors is to fit for the prior parameters ( the `` hyperparameters '' ) simultaneously with every other parameter .we selected skew normal distributions for the priors ( allowing the distribution to be skewed ) , and gaussians for the priors .the prior must be able to vary in redshift more rapidly than the cosmological fit in order to not introduce a bias .what we propose here more than meets this mild requirement , but there is no harm in allowing the hyperparameters to mimic more closely the redshift dependence of the and distributions . for each sample, we fit for the mean of the distributions as a function of redshift with a linear spline .we use up to four spline nodes ( the and , for and , respectively ) , equally spaced in redshift over the range of a sample , with linear interpolation between these nodes .we take non - informative flat priors on the means of the distributions and on the log of the standard deviations ( the standard deviations for each sample are and ) . and .] for the shape parameter of the skew normals , we take a flat prior on , which is also allowed to vary in redshift in the same way as the distribution mean .this prior forces the skew normal to approach a gaussian for samples with few objects .( the superscript `` s '' here is used to distinguish these skew - normal variables from the sn standardizationcoefficients . ) for simplicity , we assume that the true and distributions are uncorrelated ( the observed distributions of and show little correlation ) . we do note that ignoring correlations will bias the fit if any significant ( ) correlations are present . with the true values of the independent variables explicit in the model , it becomes trivial to have nonlinear standardizations . for this work ,we suggest a broken - linear relationship , allowing red / blue and small-/large- sne to have different size standardizationcoefficients ( / and / , respectively ) .we take a flat prior on the angle of each line segment , but transform to the average slope and the difference in slopes for display purposes : , , , and .although the and values of the break could be fit parameters , we do not do this for our primary fits . for the moment, we split the sample at and of 0 .as the unexplaineddispersion is parameterized in the model , it can be marginalized .we do not know what functional form to assume , so we use a flexible parameterization . each sn sampleis allowed its own unexplaineddispersion , allowing poorer - quality samples to be naturally deweighted .we must also distribute the unexplaineddispersion over , , and , while accounting for possible correlations .first , we split the variance of the unexplaineddispersion into , , and ( the fraction of the unexplainedvariance in , , and , respectively ) , which are constrained to sum to one .then , we scale each of these by 1 , , and . the values and approximately scale out and , respectively , where the negative sign for corresponds to the sign convention in equation [ eq : mbcorr ] .( note that using and directly would cancel the or dependence when computing a marginalized distance uncertainty for each sn , so this would be inappropriate . 
)we also scale the variance by the unexplaineddispersion for each sample , .finally , we form a covariance matrix out of this \{ , , } unexplainedvariance , allowing the off - diagonals to be scaled by parameters , as follows we take a non - informative `` lkj '' prior on the correlation distribution , with , as well as a flat prior on . many effects result in correlated measurements in sn cosmology . the most notable such effect results from common calibration paths : sne from a given dataset share systematics such as telescope bandpass and photometric calibration uncertainties .other effects include correlated uncertainties in milky way extinction maps , and correlated peculiar velocities . in standard analyses ,these effects are propagated into a covariance matrix . in order to speed up the monte - carlo sampling ,we leave each correlating factor explicit as a parameter ( similar to * ? ? ?* ) , ( where ranges over all systematic uncertainties ) , while leaving the data uncorrelated sn to sn .these two approaches coincide exactly for a linear model with gaussian uncertainties .the parameters capture the deviations of a measured quantity , like a zeropoint or a filter bandpass , from the estimated value . for each quantity, we numerically compute the derivative of the light - curve fit with respect to that quantity , giving each , , and .this lets us marginalize out , with a gaussian prior around zero set by the estimated size of each systematic uncertainty .our analysis framework must pass through careful testing using simulated data before it can be applied to real data . to this end , we generate thirtysimulated datasets that incorporate many characteristics of real data . as our analysis takes the salt2 light - curve fits as inputs , this is the level at which we generate simulated data .* we generate four simulated datasets spanning the redshift ranges 0.020.05 , 0.050.4 , 0.21.0 , and 0.71.4 . *each simulated dataset has 250 sne , except the highest - redshift , which has 50 . *we generate the population from a unit normal distribution , centered on zero . *we draw the population values from the sum of a gaussian distribution of width 0.1 magnitudes , and an exponential with rate .we center the distribution on zero .* we assume that the unexplaineddispersion covariance matrix is correct in salt2 , and that only dispersion in ( gray dispersion ) remains .the statistical model does not have access to this information , and fits for the full unknown matrix , overestimating the uncertainties on and , and thus slightly biasing and away from zero ( see section [ sec : indpriors ] ) .( this is not a unique problem for our framework ; the old technique would have the same bias . ) * we assume that the uncertainties on , , and are 0.05 , 0.5 , and 0.05 , and are uncorrelated .in addition , we take 0.1 magnitudes of unexplaineddispersion , magnitudes of lensing dispersion , and 300 km / s peculiar velocity uncertainty . allare approximated as gaussian and independent sn to sn .* and are assumed to be constant , with values 0.13 and 3.0 , respectively . is set to and is set to 0.3 ( flat model ) .* for the host - mass relation , we always take to be 0.08 magnitudes , and select uniformly from the range 0 to 1 . *we assume 3% of the sne are outliers , and draw their observed distribution centered around the normal ia for that redshift , and around zero in and .the spread is 1 , 2 , and 0.5 in , , and respectively , and we assume these distributions are gaussian and uncorrelated . 
*each sample has zeropoint uncertainties of size 0.01 , 0.01 , 0.01 , and 0.02 ( highest - redshift sample ) magnitudes .the uncertainties are taken to be independent sample to sample . *the datasets have selection effects in , with width 0.2 magnitudes .the selection cuts are chosen for 50% completeness at redshifts 0.08 , 0.25 , 0.6 , and 1.45 .note that this selects from the population and distributions in a redshift - dependent way .( we randomly draw from the population distributions and pass them through the simulated selection effects until the required number of sne are generated . ) * we assume the redshift distribution of sne scales linearly with redshift ( starting from the minimum redshift of each sample ) .this is quantitatively incorrect ( the real scaling will depend on the cosmological volume , cosmological time dilation , and the sn rate ) , but does produce sn samples where many sne are removed by the magnitude cuts , allowing a good test of our selection effect modeling . the redshift distribution of the four simulated datasets are shown in figure [ fig : simdata ] before selection effects ( gray - tinted histograms ) and as observed .we show the observed color distributions in figure [ fig : simcolor ] .this figure shows outliers in gray ( knowledge that the statistical framework does not have access to ) , and the general trend towards bluer sne at higher redshift due to selection effects .figure [ fig : simhr ] shows the hubble diagram residuals from the input cosmology with the best - fit ignoring selection effects ( ) also shown . after generating thirtycomplete sets ( with all four sn samples ) , we supply the generated salt2 result files to the framework .table [ tab : simsummary ] summarizes the key results . as expected , , show bias away from zero , butany bias on is small .ccc & 0.130 & 0.143 0.004 + & 3.000 & 3.076 .016 + & & .117 0.003 + & 0.300 & 0.298 0.005[ tab : simsummary ] we also run some simulated datasets fitting for both and the dark energy equation of state parameter ( assumed to be constant in redshift ) . and ; these priors would better preserve the gaussian sn likelihood .the cosmological results are similar with flat priors on and , so we use flat priors on these parameters for simplicity .we constrain to be between 0 and 1 , and to be between and 0 .] we see no evidence of bias on , and mild evidence of bias on the mean : .in this penultimate section , we demonstrate our framework on real data , namely , the union2.1 compilation .this compilation is a useful dataset for demonstrating the impact of the more - sophisticated analysis , as union2.1 provides light - curve fits for outliers ( the newer joint light - curve analysis , * ? ? ?* did not publish these sne ) .our cosmological fits include and assume a flat universe .this fit is qualitatively similar to the assumption of a constant equation - of - state parameter including a cmb or bao constraint , in that both fits probe the deceleration parameter . as fitting only requires sn data , it is a cleaner analysis for our purposes here . 
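returning briefly to the simulated datasets listed in the bullets above , the recipe can be condensed into a short sketch . values that the text leaves unspecified here ( the absolute magnitude , the exponential rate of the colour tail , the toy distance relation , and the particular limiting magnitude ) are assumptions made only so the example runs .

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
alpha, beta, sigma_gray = 0.13, 3.0, 0.10        # values quoted in the bullets above
m_cut, sigma_cut = 24.0, 0.2                     # 50% completeness magnitude (assumed) and cut width

def draw_sample(n_wanted, mu_of_z, z_lo, z_hi):
    """draw SNe and apply the probabilistic selection until n_wanted survive."""
    kept = []
    while len(kept) < n_wanted:
        z = rng.uniform(z_lo, z_hi)                             # stand-in for the linear-in-z rate
        x1 = rng.normal(0.0, 1.0)                               # unit-normal x1 population
        c = rng.normal(0.0, 0.1) + rng.exponential(0.1) - 0.1   # gaussian core plus red tail, centred
        mB = mu_of_z(z) - 19.1 - alpha * x1 + beta * c + rng.normal(0.0, sigma_gray)
        mB_obs = mB + rng.normal(0.0, 0.05)
        x1_obs = x1 + rng.normal(0.0, 0.5)
        c_obs = c + rng.normal(0.0, 0.05)
        if rng.random() < norm.cdf((m_cut - mB_obs) / sigma_cut):  # error-function selection
            kept.append((z, mB_obs, x1_obs, c_obs))
    return np.array(kept)

# usage with a crude low-redshift distance modulus, purely illustrative
sample = draw_sample(250, lambda z: 5 * np.log10(z * 3.0e5 / 70.0) + 25.0, 0.2, 1.0)
print(sample.shape)
```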
in order to identify the effect of each feature of the analysis on the inferred results , we incrementally transition from the original union2.1 frequentist framework to the analysis proposed in this work .the results of each step are shown in figure [ fig : analysissteps ] .we conducted this part of the analysis blinded , using real data only after the code was validated on simulated data .the initial version of unity required the unexplaineddispersion in to be fixed , which the improved selection effect model now presented in appendix [ sec : detailsselectioneffects ] does not require . with the improvements in place ,after a second round of blinding - unblinding , we found only a small change in between the two versions , 0.009 ( ) .first , we show the results of a frequentist calculation , based on the same assumptions as and using its 580 sne ( top line in figure [ fig : analysissteps ] ) with salt2 light - curve fits .all systematic uncertainties from the covariance matrix are included . to reproduce the results ,we include only a redshift - independent host - mass standardization . and for computational efficiency , our new results are very slightly different : 0.001 in the confidence interval . ] as a cross - check, we also try a hybrid frequentist / bayesian model , in which the from equation [ eq : chi2fit ] is converted into a likelihood as and then , , , and are marginalized over ( this is similar to the method used in * ? ? ?we obtain essentially the same results as a purely frequentist fit . for the next step ,we keep all data the same , but transition to a bayesian model for the data ( the credible intervals are shown as the second line in figure [ fig : analysissteps ] ) .this model includes the sn population terms ( described in section [ sec : indpriors ] and necessary for a bayesian analysis ) and the union2.1 systematic uncertainties , but does not include our proposed treatment of selection effects , outliers , multi - dimensional unexplaineddispersion , or the redshift - dependent host - mass standardization .other than the type of inference and these differences , this fit is identical to the first fit .the error bars shrink by , but the central value changes very little .as the data are the same for this fit and the last one , the gain in statistical power comes from the ability of a bayesian hierarchical model to make better use of heterogeneous information .will have no constraining power , but a bayesian model will .the difference between the bayesian and frequentist union2.1 fits is thus a measure of how inhomogeneous the uncertainties are within a given redshift range for each sn sample . ] next , we return to the frequentist framework , but include the redshift - dependent mass standardizationas described in section [ sec : hostgalaxy ] ( third line in figure [ fig : analysissteps ] ) .we remove the covariance matrix term that corresponds to systematic uncertainty on the mass standardization .the best fit shifts to a higher ( brighter standardized magnitudes on average at high redshift ) because the high - host - mass half of the high - redshift sne has less standardizationto fainter magnitudes .this step can be conducted through frequentist or bayesian inference , and we show both analyses . 
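the hybrid frequentist / bayesian cross-check mentioned above ( converting the chi-square into a likelihood and integrating out the nuisance parameters ) reduces to a few lines once the chi-square is written down . the sketch below marginalizes only a single nuisance parameter , the absolute-magnitude offset , on a grid ; the distance-modulus function and all numbers are placeholders , not the union2.1 implementation .

```python
import numpy as np

def chi2(omega_m, M, z, mB, sigma, mu_model):
    """hubble-diagram chi-square; mu_model(z, omega_m) is assumed supplied by the user."""
    resid = mB - (mu_model(z, omega_m) + M)
    return np.sum((resid / sigma) ** 2)

def log_marginal_likelihood(omega_m, z, mB, sigma, mu_model,
                            M_grid=np.linspace(-19.6, -18.6, 201)):
    """log of exp(-chi2 / 2) integrated over a flat prior on M (trapezoidal rule)."""
    logL = np.array([-0.5 * chi2(omega_m, M, z, mB, sigma, mu_model) for M in M_grid])
    peak = logL.max()                                 # keep the exponentials finite
    return peak + np.log(np.trapz(np.exp(logL - peak), M_grid))
```

the shape and colour coefficients would be handled in the same way , either on a grid or analytically when the gaussian assumptions hold , as noted earlier in the text .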
in this next fit , we continue with the same data and model as the last subsection ( redshift - dependent host - mass standardization ) , but again transition to bayesian inference , including the population terms ( fourth line of figure [ fig : analysissteps ] ) .the results are similar to the frequentist results , but the bayesian fit is more agnostic about the value of ( essentially unconstrained ) , so the fit shifts less than the frequentist one to higher . remaining in the bayesian framework , our next addition is the multi - dimensional unexplaineddispersion .we first remove the existing union2.1 unexplaineddispersion ( which is only in ) . by effectively increasing the uncertainties on the color, we increase the color - standardizationcoefficient .thus the color standardizationnow moves bluer ( bluer due to selection effects ) high - redshift sne fainter , decreasing the fitted ( the fourth line from the bottom in figure [ fig : analysissteps ] ) .we now take advantage of our explicit and values to include nonlinear color standardizations parameterized by and .we remove the union2.1 covariance terms that describe color - standardizationsystematics ( between the multivariate unexplaineddispersion and the nonlinear standardizations , these systematic uncertainties are likely much lower ) .the color standardizationis now strongly nonlinear ( discussed further in section [ sec : otherparams ] ) , with redder sne requiring a larger coefficient than bluer sne .this moves the fitted ( third line from the bottom in figure [ fig : analysissteps ] ) back in the opposite direction from the previous step .next , we include in the fit all twelve outlier sne removed by the union sigma clipping in union2.1 ( a new total of 592 ) . instead of excluding these , we add our mixture model for handling outlier sne .we also remove the union2.1 systematic uncertainties on outlier rejection .the results are quite similar to the previous step , indicating that the union sigma - clipping worked well with the 2% contamination that was present ( second line from the bottom in figure [ fig : analysissteps ] ) .ccc[h ] nearby sne & 18.5 0.5 & 18.6 + caln / tololo & 19.0 0.5 & 19.3 + scp nearby & 19.0 0.5 & 19.3 + sdss & 22.1 0.5 ( ab ) & 22.6 + snls & 24.3 0.5 ( ab ) & 25 + other mid - redshift & 24 0.5 & 24.5 + high - redshift ground & 23.8 0.5 & 25.0 + hst acs & 25.0 0.5 ( vega ) & 26.1 [ tab : selection ] finally , we model selection effects . for simplicity ,we approximate all selection as occurring in rest - frame -band ( , ) , and leave a more detailed analysis to future work .for many of the most important samples in union2.1 , this is not a bad approximation .for example , the sloan digital sky survey sne were selected in , , and -band . at redshift ( the distant end of the survey ), -band corresponds to rest - frame -band .the supernova legacy survey sne were selected in -band , which matches rest - frame -band for the highest - redshift sne with small distance uncertainties ( ) .for some surveys , rest - frame -band selection is a poor approximation .many of the nearby sne were selected from unfiltered surveys ( approximately rest - frame -band ) , but these mostly galaxy - targeted surveys have generally weak selection effects in magnitude ( discussed below ) .some distant sn surveys had selections that were different ( bluer ) than rest - frame -band . 
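the arithmetic used to judge this rest-frame band approximation is simply that an observer-frame filter with central wavelength lam_obs probes rest-frame lam_obs / ( 1 + z ) . the filter wavelengths below are rough assumed values , so the matching is only indicative .

```python
# approximate central wavelengths in angstroms; these are assumptions for illustration
filters = {"g": 4770.0, "r": 6230.0, "i": 7630.0, "z": 9130.0}
rest_B = 4400.0                                   # rough centre of rest-frame B

def best_matching_filter(z):
    """return the assumed filter whose de-redshifted centre lies closest to rest-frame B."""
    return min(filters.items(), key=lambda kv: abs(kv[1] / (1.0 + z) - rest_B))

for z in (0.1, 0.4, 0.7, 1.0):
    name, lam = best_matching_filter(z)
    print(z, name, round(lam / (1.0 + z)))
```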
at least for the hst - discovered sne , selection effects are small formost of the redshift range ( also discussed more below ) .we estimate the selection effects for each sample as follows ( summarized in table [ tab : selection ] ) .nearby sne are generally limited by spectroscopic followup , for example the magnitude limit for cfa discussed in ( although this is unfiltered , we approximate it as -band ) .we take this limit as typical .the caln / tololo survey and the scp nearby search extend out to higher redshift ; the limiting magnitude in this case is .we take the magnitude limit for sdss from and the limit for snls from . together , these samples make up most of the mid - redshift weight . for the other mid - redshift samples , we take a limit of = 24 , judged from the approximate rolloff of the sn population in redshift . for the high - redshift ground - discovered samples, we assume the surveys were 50% complete at ; this gives a limit of 23.8 in -band .finally , for the hst - discovered sne , we take a limit from of -band ( vega ) . in all cases ,we take the width of the selection to be magnitudes ( i.e. , a sn 0.5 magnitudes brighter than the mean cut has an 84% chance of being selected ; a sn 0.5 magnitudes fainter has a 16% chance ) . as a cross - check ,we coherently shift each estimated magnitude limit fainter by 0.5 magnitudes , representing an extreme limit of how inaccurate our estimations are likely to be .the credible interval shifts by only 0.006 . we remove the union2.1 covariance matrix terms for malmquist bias before computing the fit shown in the bottom line of figure [ fig : analysissteps ] .the credible interval shifts to lower , as the distant sne with significant selection effects are standardized fainter .the central value of this final fit closely matches the original union2.1 result , but we see that our new , smaller ( by compared to the union2.1 analysis ) credible interval shows an increase in statistical power .it is worth reiterating that the list of improvements to make was established before the results were known ; we did not set out to simply achieve a similar result to union2.1 .the scatter of the intermediate results generally validates the size of the union2.1 systematics estimates for these effects that we can now properly include in the model .we now present the results for important nuisance parameters and their relation to in the form of 1d and 2d credible regions . in both cases ,our credible regions are derived by using kernel density estimation with a gaussian kernel on the mcmc samples,3,000 samples from sixteen chains .the statistics are , indicating good convergence . ] then solving for the contour level that encloses 68.3% ( inner shaded regions ) and 95.4% ( outer shaded regions ) of the posterior .figure [ fig : mass ] shows the significant degeneracy between both host - mass standardizationparameters and , illustrating that neither one should be neglected .the mean value of the estimated fraction of outliers is similar to the 12/592 found by the union sigma clipping .this parameter has no significant degeneracy with any parameter for the spectroscopically confirmed datasets in union2.1 , confirming that outlier rejection is not a significant concern at this high level of purity . as an additional cross - check , instead of assumingthe outlier distribution is centered on the sn ia distribution , we fit for it . 
including six parameters in the model for the mean and dispersion in \{ , , } ( taken to be uncorrelated , and assuming a constant distribution with redshift ) leaves the error bar unchanged but shifts the credible region by 0.009 in .if future versions of the unity framework are run on samples with larger contamination , a more flexible outlier parameterization can be matched with the increased number of outliers in the fit ( e.g. , * ? ? ?figure [ fig : intalphabeta ] shows the degeneracy between the fraction of the unexplainedvariance in and , and similarly / . including these unexplaineddispersion parametersincreases the uncertainty and the mean value for and .( here , and represent the mean standardizationcoefficient . )the model prefers most of the unexplaineddispersion in and , rather than .we also see statistical evidence of non - zero and ( recall that is the standardizationcoefficient for broad - light - curve sne minus the coefficient for narrow sne ; is similarly defined for red minus blue sne ) : is is .both have a correlation with , although the correlation with is larger .we note , at least for this compilation of sne and salt2 - 1 , that blue sne still have a non - zero : , and red sne have a of .this value is significantly less than 4.1 ( the value expected if all reddening were due to the mean extinction law of the milky way diffuse interstellar medium ) . in our standard analysis , letting the division be a fit parameter yields a division at ; the uncertainties on significantly increase . ]in this work , we propose unity , a unified bayesian model for handling outliers , selection effects , shape and color standardizations , unexplaineddispersion , and heterogeneous observations . we demonstrate the method with the union2.1 sn compilation , and show that our method has smaller uncertainties , but results that are consistent with the union2.1 analysis .the advantages of unity will likely be even larger in upcoming datasets , as scarce followup resources will introduce heterogeneity , and enlarged samples may reduce the cosmological - parameter statistical uncertainties below the size of our improvements .there are several future directions for this research. 
we could allow the unexplaineddispersion covariance matrix to vary with other parameters ( , , redshift , or rest - frame wavelength range ) .we could decompose the unexplaineddispersion into a sum of gaussians as in the model of .we could use gaussian process regression to handle the redshift - dependent priors on and .our model also lets us include more than one light - curve fit for each sn , with the covariance matrix between different light - curve fitters parameterized .we could even make use of the hierarchical model to constrain `` exotic possibilities , '' as sketched in section [ sec : desiredprops ] .all of these are straightforward modifications of what we present here , but our proposed model is already superior to current sn cosmological analysis frameworks .we believe these concepts will also be useful for other applications .we thank alex kim , marisa march , masao sako , rachel wolf , and the referee for their feedback on this manuscript .this work was supported in part by the director , office of science , office of high energy physics , of the u.s .department of energy under contract no .de - ac02 - 05ch11231 .this research used resources of the national energy research scientific computing center , a doe office of science user facility supported by the office of science of the u.s .department of energy under contract no .de - ac02 - 05ch11231 .64 natexlab#1#1 , r. , lidman , c. , rubin , d. , et al .2010 , , 716 , 712 , k. , aldering , g. , amanullah , r. , et al .2012 , , 745 , 32 , j. p. , kessler , r. , kuhlmann , s. , et al .2012 , , 753 , 152 betancourt , m. j. 2013 , arxiv , 1304 , m. , kessler , r. , guy , j. , et al .2014 , , 568 , a22 , c. r. , stritzinger , m. , phillips , m. m. , et al .2014 , , 789 , 32 , m. , aldering , g. , antilogus , p. , et al .2013 , , 770 , 108 , m. j. , wolf , c. , & zahid , h. j. 2014 , , 445 , 1898 , n. , gangler , e. , aldering , g. , et al .2011 , , 529 , l4 , a. , guy , j. , sullivan , m. , et al .2011 , , 192 , 1 , b. , kessler , r. , frieman , j. a. , et al .2008 , , 682 , 262 , b. 2005 , international journal of modern physics a , 20 , 3121 , r. j. 2012 , , 748 , 127 , r. j. , & kasen , d. 2011 , , 729 , 55 , r. j. , sanders , n. e. , & kirshner , r. p. 2011, , 742 , 89 gelman , a. , carlin , j. , stern , h. , et al . 2013 , bayesian data analysis , third edition , chapman & hall / crc texts in statistical science ( taylor & francis ) gelman , a. , & rubin , d. b. 1992 , statistical science , 7 , pp .457 , o. , rodney , s. a. , maoz , d. , et al .2014 , , 783 , 28 gull , s. 1989 , in fundamental theories of physics , vol . 36 , maximum entropy and bayesian methods , ed . j. skilling ( springer netherlands ) , 511518 , j. , astier , p. , baumont , s. , et al .2007 , , 466 , 11 , j. , sullivan , m. , conley , a. , et al .2010 , , 523 , a7 , m. , phillips , m. m. , suntzeff , n. b. , et al .1996 , , 112 , 2398 , m. , & pinto , p. a. 1999 , , 117 , 1185 , m. , phillips , m. m. , suntzeff , n. b. , et al .1996 , , 112 , 2408 , m. , challis , p. , jha , s. , et al .2009 , , 700 , 331 , r. , kunz , m. , bassett , b. , et al .2012 , , 752 , 79 hoffman , m. d. , & gelman , a. 2011 , arxiv , 1111 . 2014 , journal of machine learning research , 15 , 1593 , j. a. , marriner , j. , kessler , r. , et al .2008 , , 136 , 2306 , d. e. , & linder , e. v. 2005 , , 631 , 678 , d. o. , riess , a. g. , & scolnic , d. m. 2015 , arxiv e - prints , b. c. 2007 , , 665 , 1489 , p.l. , hicken , m. , burke , d. l. , mandel , k. s. , & kirshner , r. p. 
2010, , 715 , 743 , r. , becker , a. c. , cinabro , d. , et al .2009 , , 185 , 32 , r. , guy , j. , marriner , j. , et al .2013 , , 764 , 48 , r. a. , aldering , g. , amanullah , r. , et al .2003 , , 598 , 102 , m. , rubin , d. , aldering , g. , et al .2008 , , 686 , 749 , m. , bassett , b. a. , & hlozek , r. a. 2007 , , 75 , 103508 , h. , smith , m. , nichol , r. c. , et al .2010 , , 722 , 566 lewandowski , d. , kurowicka , d. , & joe , h. 2009 , journal of multivariate analysis , 100 , 1989 malmquist , k. 1922 , on some relations in stellar statistics , arkiv fr matematik , astronomi och fysik ( almqvist & wiksells ) , k. s. , narayan , g. , & kirshner , r. p. 2011, , 731 , 120 , m. c. , trotta , r. , berkes , p. , starkman , g. d. , & vaudrevange , p. m. 2011 , , 418 , 2308 , j. , bernstein , j. p. , kessler , r. , et al .2011 , , 740 , 72 , j. , guy , j. , kessler , r. , et al .2014 , , 793 , 16 , k. , sullivan , m. , conley , a. , et al .2012 , , 144 , 59 , m. m. 1993 , , 413 , l105 rasmussen , c. e. , & williams , c. 2006 , gaussian processes for machine learning ( mit press ) , a. , scolnic , d. , foley , r. j. , et al .2014 , , 795 , 44 , a. g. , strolger , l .- g . , casertano , s. , et al .2007 , , 659 , 98 , m. , copin , y. , aldering , g. , et al .2013 , , 560 , a66 , m. , aldering , g. , kowalski , m. , et al .2015 , , 802 , 20 , s. a. , riess , a. g. , strolger , l .- g ., et al . 2014 , , 148 , 13 , d. , knop , r. a. , rykoff , e. , et al . 2013 , , 763 , 35 , c. , aldering , g. , antilogus , p. , et al . 2015 , , 800 , 57 , d. , rest , a. , riess , a. , et al .2014 , , 795 , 45 , d. m. , riess , a. g. , foley , r. j. , et al .2014 , , 780 , 37 .2014 , pystan : the python interface to stan , version 2.5.0 , m. , conley , a. , howell , d. a. , et al .2010 , , 406 , 782 , n. , rubin , d. , lidman , c. , et al .2012 , , 746 , 85 , j. l. , schmidt , b. p. , barris , b. , et al .2003 , , 594 , 1 , r. 1998 , , 331 , 815 , x. , filippenko , a. v. , ganeshalingam , m. , et al .2009 , , 699 , l139 , w. m. , miknaitis , g. , stubbs , c. w. , et al .2007 , , 666 , 694our model is a two - component mixture model , where the outlier component is a gaussian in , , and , and the normal sne ia component is a gaussian in , , and , truncated by selection effects .the full likelihood for a single sn is given by the sum of the core distribution , \left [ \begin{array}{l } { m_{b i}^{\mathrm{true } } \xspace}+ \delta m_b\\ { x_{1 i}^{\mathrm{true } } \xspace}+ \delta x_1\\ { c_{i}^{\mathrm{true } } \xspace}+ \delta c \end{array } \right ] , \c^{\mathrm{ext}}(z_i ) + c^{\mathrm{obs}}_i + c^{{\mathrm{samp}}}_i \right ) { \:}{p(\mathrm{detect}|\{{m_{b i}^{\mathrm{obs } } \xspace } , { x_{1 i}^{\mathrm{obs } } \xspace } , { c_{i}^{\mathrm{obs } } \xspace}\})\xspace } } { ( \epsilon=0.01 ) + { p(\mathrm{detect})\xspace } } ( 1 - f^{\mathrm{outl}})\\\ ] ] with the outlier distribution , \left [ \begin{array}{l } { m_{b i}^{\mathrm{true } } \xspace}+ \delta m_b\\ { x_{1 i}^{\mathrm{true } } \xspace}+ \delta x_1\\ { c_{i}^{\mathrm{true } } \xspace}+ \delta c \end{array } \right ] , \ ( \mathbbm{1 } )\right ) f^{\mathrm{outl } } \;.\ ] ] and are described in equations [ eq : fracdetobs ] and [ eq : psel ] , respectively . is not a free parameter ; instead it is completely determined by other parameters and is given by \{ , , } are the contributions from the systematic uncertainty terms , e.g. 
, the broken - linear slopes are defined as and is given by equation [ eq : deltaofz ] .we take the following priors on all parameters : this section , we detail the approach we take for modeling selection effects in unity .we start with a general likelihood containing missing observations in the case when both the number of observed objects ( ) and the number of missed objects ( ) are exactly known : as described in section [ sec : selection ] , we assume that the efficiency smoothly varies with : where is the gaussian cdf . of course , we do not know any of the parameters for each of the missing observations . is thus marginalized over the entire distribution when it is not referencing a specific observation .we now make the counterintuitive approximation that the redshift of each missed sn is exactly equal to the redshift of a detected sn .this approximation is accurate because the sn samples have , on average , enough sne that the redshift distribution is resonably sampled .we now refactor equation [ eq : fracdetect ] to consider each detected sn : the combinatoric factor trivially becomes . to minimize the number of parameters in the cosmological fit, we now seek to marginalize over from 1 to .it is common ( e.g. , * ? ? ?* ; * ? ? ?* ) to take a flat prior on ; for reasons that will become clear in a moment , we force this prior to zero faster than this by multiplying this prior by a weak exponential decay where is a small positive number .thus , the marginalization becomes : which is equal to the geometric series which sums to this expression highlights the benefit of this prior .if a flat prior on is assumed ( ) , then the likelihood can be poorly behaved in regions of parameter space where the efficiency is poor ( even if care is taken in the log likelihood to correctly handle the asymptotes of the log of the efficiency ) . running unity on union2.1 with either 0 or 0.01makes virtually no difference , but stan has difficulty with many of the simulated datasets ( which we constructed with severe malmquist bias as a test , see section [ sec : simulateddata ] ) for , so we make our default choice .we now need the efficiency as a function of the parameters .we derived the following expression by analytically computing the variance , then expanding to first order in ( this approximation is good to better than one percent in the variance for our distributions ) . for simplicity, we use the same expression for , although the distribution is skew - normal instead of normal .( changing the assumed intrinsic color distribution to normal , rather than skew - normal , changes by only 0.001 , so this approximation is not a significant concern . )the first line contains the measurement and unexplaineddispersion in , the dispersion of the selection cut , and the dispersion due to the host - galaxy relation .the second and third lines give the dispersion due to the and relations , respectively .using the same approximations , we derived an expression for the mean of the ( unstandardized ) magnitude distribution in this case , our approximations are valid to a few hundredths of a magnitude .we then compute the selection efficiency assuming that the distribution of magnitudes is gaussian this approximation is not completely valid , as there is a tail to faint magnitudes of red sne , but for the central part of the distribution ( which is the part that is cut into by severe malmquist bias ) , it is a good approximation .
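putting the pieces of this appendix together , a single supernova contributes a likelihood of the schematic form below : a tight core multivariate normal in ( m_B , x1 , c ) weighted by the selection factor p ( detect | obs ) / ( eps + p ( detect ) ) , plus a broad unit-normal outlier component with mixing fraction f_out . the sketch takes selection to act on the magnitude alone and passes in the population mean and variance entering the denominator precomputed ; the names and call signature are illustrative , not the actual implementation .

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

EPS = 0.01   # the small constant fixed in the text

def per_sn_likelihood(obs, model_mean, cov_core, f_out,
                      m_cut, sigma_cut, pop_mag_mean, pop_mag_var):
    """obs and model_mean are length-3 arrays ordered (m_B, x1, c)."""
    # detection probability for this SN given its observed magnitude
    p_det_obs = norm.cdf((m_cut - obs[0]) / sigma_cut)
    # population-averaged detection probability, gaussian approximation as in the text
    p_det_pop = norm.cdf((m_cut - pop_mag_mean) / np.sqrt(pop_mag_var + sigma_cut ** 2))
    core = multivariate_normal.pdf(obs, mean=model_mean, cov=cov_core)
    outlier = multivariate_normal.pdf(obs, mean=model_mean, cov=np.eye(3))
    return (1.0 - f_out) * core * p_det_obs / (EPS + p_det_pop) + f_out * outlier
```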
|
while recent supernova cosmology research has benefited from improved measurements , current analysis approaches are not statistically optimal and will prove insufficient for future surveys . this paper discusses the limitations of current supernova cosmological analyses in treating outliers , selection effects , shape- and color - standardization relations , unexplained dispersion , and heterogeneous observations . we present a new bayesian framework , called unity ( unified nonlinear inference for type - ia cosmology ) , that incorporates significant improvements in our ability to confront these effects . we apply the framework to real supernova observations and demonstrate smaller statistical and systematic uncertainties . we verify earlier results that sne ia require nonlinear shape and color standardizations , but we now include these nonlinear relations in a statistically well - justified way . this analysis was primarily performed blinded , in that the basic framework was first validated on simulated data before transitioning to real data . we also discuss possible extensions of the method .
|
in an investigation of the dynamic properties of computing machines using a general lossless compression approach led to reasonable classifications of elementary cellular automata ( ca ) and other systems , classifications corresponding to wolfram s four classes of behaviour . in the spirit of other analytical concepts for scale predictability ( for example , lyapunov exponents ) , but employing different means , this compression - based method also led to the definition of a phase transition coefficient as a way of detecting a system s ( in)stability vis - - vis its initial conditions and of measuring its dynamic ability to carry information .a conjecture relating the magnitude of this coefficient and the capability and efficiency with which a system performs universal computation was introduced . in this paperthe conjecture is developed further , with some additional arguments . in ,a related conjecture concerning other kinds of simply defined programs was presented , establishing that all busy beaver turing machines may be capable of universal computation , as they seem to share some of the informational and complex properties of systems capable of universal computational behaviour .the conjecture will be regarded in light of algorithmic complexity , particularly of bennett s logical depth , and will be reconnected to the first conjecture via the dynamical properties of these machines through the compression - based phase transition coefficient . some definitions of concepts to be discussed either as foundations of these possible new connections , or as evidence for making such claims will be introduced first .the investigation is meant to be an exploration of empirical observations through quantitative measures which attempt to capture qualitative properties of the dynamic behaviour of systems capable of computational universality .proof - of - universality results for simple programs have traditionally relied on localized structures ( or `` particles '' ) , as distinguished from relatively uniform regions .this means that a measure of entropy of a system will tend to be below its theoretical maximum . at the same time , however , this `` particle - like '' behaviour is , and must in principle be unpredictable for the system to reach computational universality .stephen wolfram has classified all the one - dimensional nearest neighborhood ca into four classes : ( i ) class 1 : ordered behaviour ; ( ii ) class 2 : periodic behaviour ; ( iii ) class 3 : random or chaotic behaviour ; ( iv ) class 4 : complex behaviour .the first two are totally predictable .random ca are unpredictable . 
somewhere in between , inthe transition from periodic to chaotic , complex , interesting behaviour can occur .one of wolfram s open problems in cellular automata , for example , is the question of the computational universality of a system belonging to class 3 ( random - looking , such as rule 30 ) for which an entropy measure remains near its maximum at every time step , and which is unlikely to show any `` particle - like '' behaviour .the question is whether such a `` hot system '' can carry information and be programmed .the techniques to prove such a system universal may require methods different from those hitherto used for systems in which structures can be distinguished and which can therefore be made to carry information through them .the common belief is that these kinds of systems may be powerful enough but are just too complicated , perhaps even impossible to program .the encoding required to deal with the sophistication of a class iii rule cellular automaton would itself probably have to possess the sophistication of a computationally universal system .this brings us to wolfram s pce , which states that almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication ( , pp . 5 and 716 - 717 ) .in 1970 , conway invented an automaton , which was popularised by gardner and was known as the game of life .it was proved that life was capable of universal computation .the proof of universality uses what in the jargon of ca are known as gliders , glider guns , and eaters , that is , structures to carry and manipulate information through the system ( by combining such emergent propagating structures one can simulate logic gates and circuits ) .langton s ant is a two - dimensional turing machine with 2 symbols and 4 states following a set of very simple rules . in ,a very simple construction is presented which proves that langton s ant is also capable of universal computation .but an exhaustive exploration of one - dimensional elementary ca ( that by most standards would be considered the simplest possible ca ) unlike any previous system that has been constructed , was undertaken in .the rule with number 110 ( and equivalent rules : 124 , 137 and 193 ) in wolfram s numbering scheme , presenting the characteristic `` particle - like '' structures , turned out to be capable of universal computation .rule 110 can be set up with initial configurations that have signals transmitted in the form of collisions of `` particle - like '' dynamical structures , simulating a variant of a tag system , another rewriting system capable of universal computation .the proofs of universality of all these systems imply that their dynamics are unpredictable .the notion of universality implies the existence of undecidable problems related to most questions concerning these machines .questions related to these simple dynamical systems can not therefore be algorithmically answered .from which it follows that undecidability is a measure of the unpredictability of a system associated with its dynamical behaviour .definition 1 . 
where is the length of measured in bits .+ a measure of complexity is derived by combining the algorithmic complexity describing a system and the time it takes to produce a string .bennett s concept of logical depth is a complexity measure capturing the structure of a string defined by the time that a turing machine takes to reproduce the said string from its ( near ) shortest description .formally , + definition 2 .a string s logical depth d is given by = + according to this measure , the longer it takes , the more complex the string .complex objects are therefore those which can be seen as `` containing internal evidence of a nontrivial causal history . ''a measure based on the change of the asymptotic direction of the size of the compressed evolutions of a system for different initial configurations ( following a proposed gray - code enumeration of initial configurations ) was presented in .it gauged the resiliency or sensitivity of a system vis - - vis its initial conditions .this phase transition coefficient led to an interesting characterisation and classification of systems , which when applied to elementary ca , yielded exactly wolfram s four classes of systems behaviour , with no human intervention .the coefficient works by compressing the changes of the different evolutions through time , normalised by evolution space , and it is rooted in the concept of algorithmic complexity , being an upper bound of the algorithmic complexity of a string .the more compressed a string , the less algorithmically complex .let the characteristic exponent be defined as the mean of the absolute values of the differences of the compressed lengths of the outputs of the system running over the initial segment of initial conditions with following the numbering scheme devised in based on a gray - code optimal enumeration scheme , running for steps in intervals of .formally , + definition 3 .+ definition 4 .let denote the transition coefficient of a system defined as , the derivative of the line that fits the sequence by finding the least - squares as described in with for a fixed and .+ the value , based on the phase transition coefficient , is a stable indicator of the degree of the qualitative dynamical change of a system .the larger the derivative , the greater the change . according to , rule numbers such as 0 and 30 appear close to each other both because they remain the same despite the change of initial conditions , and because their evolution can not be perturbed .the measure indicates that rules like rule 0 or rule 30 are also incapable of or inefficient at transmitting any information , given that they do not react to changes in the input of the system .odd as it may seem , this is because there is no change in the qualitative behaviour of these ca when feeding them with different inputs , regardless of how different the inputs may be rule 0 remains entirely blank while 30 remains mostly random - looking , with no apparent emergent coherent propagating structures ( other than the regular and linear pattern on one of the sides ) . 
on the other hand , rules such as rule 122 and rule 89 appear next to each other as the most sensitive to initial conditions , because as the investigation proves , they are both highly sensitive to initial conditions and present phase transitions which dramatically change their qualitative behaviour when starting from one or another initial configuration . this means that rules 122 and 89 can be more successfully used to transmit information from the input to the output . evidently if a system is completely predictable and therefore dynamically trivial , it is decidable , and therefore not turing universal . rule 110 should therefore not be very predictable according to the phase transition measure , but at the same time we can expect it to be versatile enough to produce the variety needed to behave as a universal . rule 110 is one rule about which my own phase transition classification says that , despite showing some sensitivity , it also shows some stability , which means that one can say with some degree of certainty how it will look ( and behave ) for certain steps and certain initial configurations , unlike those at the top . this is acknowledged by wolfram himself when discussing rule 54 ( page 697 ) : ` it could be that if one went just a little further in looking at initial conditions one would see more complicated behaviour . and it could be that even the structures shown above can be combined to produce all the richness that is needed for universality . but it could also be that whatever one does rule 54 will always in the end just show purely repetitive or nested behaviour which can not on its own support universality .'
for every ca rule , there is a definite ( often undecidable ) answer to the question whether or not it is capable of universal computation ( or in reachability terms , whether a ca will evolve into a certain configuration ) . the question only makes sense if the evolution of a ca depends on its initial configuration . no rule can be universal that fixes the initial configuration once and for all ( there would be no way to input an instruction and carry out an arbitrary computation ) . an obvious feature of universal systems is that they need to be capable of carrying information by reflecting changes made to the input and transmitting them to the output . in attempting to determine whether a system is capable of reaching universal computation , one may ask whether a system is capable of some minimal versatility in the first place , and how efficiently it can transmit information . and this is what the phase transition coefficient measures : it indicates how well a system manages to respond to an input .
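an operational sketch of definitions 3 and 4 makes the preceding discussion concrete . the code below uses zlib compression as a stand-in for algorithmic complexity , a plain binary enumeration of initial conditions instead of the gray-code scheme of the cited work , and a simplified normalisation , so the numbers are only indicative of the qualitative ranking described above .

```python
import zlib
import numpy as np

def eca_step(row, rule):
    """one step of an elementary CA given its Wolfram rule number."""
    out = np.zeros_like(row)
    for i in range(len(row)):
        l, c, r = row[i - 1], row[i], row[(i + 1) % len(row)]
        out[i] = (rule >> (4 * l + 2 * c + r)) & 1
    return out

def compressed_len(rule, init, steps):
    """compressed size of the space-time evolution starting from init."""
    row, rows = init.copy(), [init.copy()]
    for _ in range(steps):
        row = eca_step(row, rule)
        rows.append(row.copy())
    return len(zlib.compress(np.array(rows, dtype=np.uint8).tobytes()))

def transition_coefficient(rule, width=32, n_inits=16, runtimes=range(20, 101, 20)):
    """slope of the mean absolute differences of compressed lengths versus runtime."""
    c_values = []
    for t in runtimes:
        lengths = []
        for k in range(n_inits):                      # first n_inits initial conditions, in binary
            init = np.array([(k >> b) & 1 for b in range(width)], dtype=np.uint8)
            lengths.append(compressed_len(rule, init, t))
        c_values.append(np.mean(np.abs(np.diff(lengths))) / (width * t))
    slope, _ = np.polyfit(list(runtimes), c_values, 1)
    return slope

for rule in (0, 30, 110):
    print(rule, transition_coefficient(rule))
```

in the spirit of the classification above , rules that ignore their input should give slopes near zero while rules that propagate structures should not , though the exact values depend on the compressor and the enumeration used .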
obviously , a system such as rule 0 or rule 255 , which does not change regardless of the input , is trivially decidable .but a universal system should be capable of reaction to external manipulation ( the input to the system ) in order to behave as a universal system , that is , to be capable of simulating and reaching the output of any other universal system .+ conjecture 1 : let be a machine capable of ( efficient ) universal behaviour .then .+ conjecture 1 is one - way only , meaning that it states that an efficient universal system should be equipped with these dynamical properties , but the converse does not necessarily hold , given that having a large transition coefficient by no means implies that the system will behave with the freedom required for turing universality ( a case in point is rule 22 , which , despite having the largest transition coefficient , seems restricted to a small number of possible evolutions ) .the conjecture is based on the following observations : the phase transition coefficient provides information on the ability of a system to react to external stimuli .universal systems are ( efficient ) information processors capable of carrying and transmitting information .trivial systems and random - looking systems are incapable of transmitting information .trivial systems have negative values , close to zero .rules such as 110 , proven to be universal , and rule 54 ( suspected to be universal , see page 697 ) turn out to be classified next to each other , with a positive transition coefficient .the capacity for universal behaviour implies that a system is capable of being programmed and is therefore reactive to external input .it is no surprise that universal systems should be capable of responding to their input and doing so succinctly , if the systems in question are efficient universal systems . if the system is incapable of reacting to any input or if the output is predictable ( decidable ) for any input , the system can not be universal .values for the subclass of ca referred to as elementary ( the simplest one - dimensional ) have been calculated and published in .we will refrain from evaluations of to avoid distracting the reader with numerical approximations that may detract from our larger goal .the aim is to propose some basics of a behavioural characterisation of computational universality .+ for example , some rules , such as rule 0 , do nt produce different configurations relative to variant initial configurations . no matter how one changes the initial condition, there is no way to make it produce other than what it computes for every other initial configuration .these trivial elementary ca rules are automatically ruled out , particularly the most simple among them that can not usually be ruled out as candidates for universal behaviour given that even if they look trivial for the simplest or for certain initial configurations , they could still be capable of the necessary versatility and eventually be programmed in light of the space of all possible inputs for which they may be sensitive .the foundations of conjecture 1 and the conjecture itself are consistent with all these observations , but it is most meaningful for systems that are believed to be of great complexity but are usually not believed to be malleable enough to be programmed as universal systems , such as is the case with rule 30 . 
if the conjecture is true , may not only rule out systems which intuition strongly suggests are unable to behave as universals , but it would also indicate that random - looking systems such as rule 30 are not capable of universal computation because they are incapable of carrying information . in this sense , the measure may also be a characterisation of the practical randomness of a system in terms of efficient information transmission .rule 110 , however , has a positive value , meaning it is efficient at carrying information from its input through the output , and that one can actually program it to perform computations . is compatible with the fact that it has been proven that rule 110 is capable of universal computation .+ a universal computer ( would therefore have a non - zero limit value . also captures some of the universal computational efficiency of the computer in that it has the advantage of capturing not only whether it is capable of reacting to the input and transferring information through its evolution , but also the rate at which it does so .so is an index of both capability in principle and ability in practice .a non - zero means that there is a way to codify a program to make the system behave ( efficiently ) in one fashion or another , i.e. to be programmable .something that is not programmable can not therefore be taken to be a computer . in , margolus asserts that reversible cellular automata ( rca ) can actually be used as computer models embodying discrete analogues of classical notions in physics such as space , time , locality and microscopic reversibility .he suggests that one way to show that a given rule can exhibit complicated behaviour ( and eventually universality ) is to show ( as has been done with the game of life and rule 110 ) that in the corresponding ` world ' it is possible to have computers " starting these automata with the appropriate initial states , with digits acting as signals moving about and interacting with each other to , for example , implement a logical gate for digital computation .conjecture 1 also seems to be in agreement with wolfram s beliefs concerning rule 30 , which according to his principle of computational equivalence ( pce ) may be computationally universal and still be impossible to control so as to be able to perform a computation ( something that wolfram has himself suggested ) .rca are interesting because they allow information to propagate , and in some sense they can be thought of as perfect computers indeed in the sense that matters to us .if one starts an rca from a non - uniformly random initial state , the rca evolves , but because it can not get simpler than its initial condition ( for the same reason given for the random state ) it can only get more complicated , producing a computational history that is reversible and can only lead to an increase in entropy .rado also studies the behaviour of a special kind of one - tape -state deterministic turing machine , one that starts with a blank tape , writes more non - blank symbols than any other -state turing machine , and halts .notation : we denote by the class ( or space ) of all -state 2-symbol turing machines ( with the halting state not included among the states ) .definition 5 . if is the number of 1s on the tape of a turing machine upon halting , then : . if is the number of steps that a machine takes upon halting , then . and as defined in 1 and 2 are noncomputable functions by reduction to the halting problem . 
yet values are known for with .the busy beaver problem lies at the heart of what may be seen as a paradox , for while a busy beaver machine of states can be thought of as having maximal sophistication vis - - vis all state turing machines as regards the number of steps and printed symbols , busy beaver machines can be extremely easily defined .the definition of busy beaver machines describes an infinite set of turing machines characterised by a particular behaviour the attribute of printing more non - blank symbols on the tape before halting , or having the longest runtime among all turing machines of the same size ( number of states ) .bennett s logical depth measure is relevant in characterising the complexity of an -state busy beaver machine both in terms of size ( fixed among all -state machines ) and in terms of the behaviour that characterises this type of machine , because it follows from rado s definitions and bennett s concept of logical depth that busy beavers are the deepest machines provided that they are the ones with the longest history producing a string . yeta busy beaver is required to halt .when running for the longest time or writing the largest number of non - blank symbols , has to be clever enough to make wise use of its resources and still save a rule to halt .these facts may suggest the following conjectures , also in connection with the dynamic behaviour of a set of simply described machines with universal behaviour .+ conjecture 2 : ( strong version ) : for all , is capable of universal computation .( sparse version ) : for some , is capable of universal computation .( weak version ) : for all , is capable of ( weak ) universal computation .( weakest version ) : for some , is capable of ( weak ) universal computation. it is known that among all 2-state 2-symbol turing machines none can be universal .remember , however , that as defined by rado , is a turing machine with states plus a special halting state .so is actually a 3-state 2-symbol machine in which one state is specially reserved for halting only . by letting a weak universal machine , one allows initial tape configurations other than those filled with just a single symbol ( usually called a blank tape , but blankness is a symbol in itself ) , but with initial configurations simple enough so that one can guarantee that the computation is not performed before it is given already computed in the input encoding .in other words , is allowed ( in the conjecture versions 2.3 and 2.4 ) to start either from a periodic tape configuration or an infinite sequence of the type accepted by a regular -automaton .if any version of the conjectures excepting conjecture 2.4 is true , the characterisation would define a countable infinite set of universal turing machines .their proof may provide an interesting framework and a possible path to take for proving a whole set of turing machines to be capable of universal computation on the basis of their common dynamical properties . 
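to make the objects in conjecture 2 concrete, the sketch below simulates an n-state 2-symbol machine in the rado convention (a distinguished halting state in addition to the n states) on an initially blank tape, reporting the number of steps taken and the number of 1s left on the tape; running it on the commonly cited 2-state champion reproduces the known values sigma(2) = 4 and s(2) = 6. the simulator and the dictionary encoding of the transition table are our own illustrative choices, not taken from the cited works.

```python
def run_tm(delta, max_steps=10_000_000):
    """simulate a 2-symbol turing machine from a blank tape.

    delta maps (state, read symbol) -> (write symbol, move, next state), with move
    in {-1, +1} and next state 'H' meaning halt.  returns (steps, ones on tape)."""
    tape, head, state = {}, 0, 'A'
    for step in range(1, max_steps + 1):
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == 'H':
            return step, sum(tape.values())
    raise RuntimeError("no halt within max_steps")

# the commonly cited 2-state 2-symbol busy beaver champion (plus the halting state)
bb2 = {('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
       ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H')}
print(run_tm(bb2))    # (6, 4): s(2) = 6 steps, sigma(2) = 4 ones
```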
because halting machines that always halt can not be capable of unbounded computation , and therefore of universal turing behaviour , among the analytical tools necessary to demonstrate the universality of ( any ) of these systems are proofs that busy beavers are capable of avoiding the halting state .if one proves that busy beavers always halt , that would amount to proving that they can not be universal .but to disprove conjectures 2.1 to 2.3 one can simply prove that at least one busy beaver is not capable of a halting configuration , and a study of this type is likely to be simplified for bb(3 ) or bb(4 ) , for which busy beaver functions are known and are turing machines small enough to be subjected to a thorough and potentially fruitful investigation in this regard .the investigation of the behaviour of busy beaver machines for other than blank tape initial configurations indicates that these machines are capable of non - trivial behaviour for other than the simplest initial configuration ( as intuition would suggest , given that if they behave in a sophisticated fashion for the simplest initial condition , they may be expected to continue doing so for more complicated ones ) . in a future paperwe will explore the specific behaviour of these machines .the truth of the conjectures may not seem intuitively evident to all researchers , given that it is possible that these machines are only concerned with producing the largest numbers by using all resources at hand , regardless of whether they do so intelligently .however , the requirement to halt is , from our point of view , a suggestion that the machine has to use its resources intelligently enough in order to keep doing its job while saving a special configuration for the halting state . despite the conclusion that conjecture 2.4 would imply , namely that the property of being a busy beaver machine is not a characterisation of the computational power of this easily describable set of countable infinite machines , among the intuitions suggestingthe truth of one of these conjectures is that it is easier to find a machine capable of halting and performing unbounded computations for a turing machine if the machine already halts after performing a sophisticated calculation than it is to find a machine showing sophisticated behaviour whose previous characteristic was simply to halt .this claim can actually be quantified , given that the number of turing machines that halt after for increasing values of decreases exponentially . in other words , if a machine capable of halting is chosen by chance , there is an exponentially increasing chance of finding that it will halt sooner rather than later , meaning that most of these machines will behave trivially because they wo nt have enough time to do anything interesting before halting .we have no positive proof of any version of these conjectures and much more work remains to be done on the dynamical behaviour of these systems . but conjectures 1 and 2 lead us to : + conjecture 3 : .the first conjecture relates computational universality to the capacity of a computational system to transfer information from the input to the output and reflect the changes in the evolution of the system when starting out from different initial configurations .we established that the property of having a large phase transition coefficient seems necessary . 
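the quantitative claim above — that among halting machines the probability of halting at step t falls off rapidly with t — can be probed empirically by sampling random machines and recording when (if ever) they halt within a step bound. the sampling scheme below is a rough illustration of that experiment, not the construction used in the cited work, and the state count, step bound and sample size are arbitrary placeholders.

```python
import random
from collections import Counter

def random_tm(n_states):
    """a random n-state 2-symbol machine; 'H' is the extra halting state of the rado convention."""
    targets = list(range(n_states)) + ['H']
    return {(s, b): (random.randint(0, 1), random.choice((-1, +1)), random.choice(targets))
            for s in range(n_states) for b in (0, 1)}

def halting_step(delta, max_steps=500):
    tape, head, state = {}, 0, 0
    for step in range(1, max_steps + 1):
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == 'H':
            return step
    return None                                   # censored: did not halt within the bound

halts = Counter(halting_step(random_tm(3)) for _ in range(20_000))
# halts[t] / 20000 estimates the fraction of sampled machines halting exactly at step t;
# halts[None] counts machines that did not halt within the bound
```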
on the other hand, a universal system seems to be capable of manifesting an abundance of possible evolutions and reacting to different initial configurations in order to ( efficiently ) behave universally .a second conjecture concerning the possible universality of a kind of well - defined infinite set of abstract busy beaver turing machines was introduced also in terms of a version of a measure of complexity related to algorithmic complexity and the dynamic behaviour of these machines having a particular common characterisation .the third conjecture relates conjectures 1 and 2 .these conjectures will be the subject of further study in a paper to follow this one .we would like to see the conjectures proved or disproved , but underlying the conjectures are many other interesting questions relating to the size , behaviour and complexity of computing machines .it would be interesting , for example , to find out whether there is a polynomial ( or exponential ) trade - off between program size and the concept of simulating a process .`` logical depth and physical complexity '' in r. herken ( ed ) , _ the universal turing machine a half - century survey _ ; oxford university press , p 227257 , 1988 .`` how to define complexity in physics and why , '' in _ complexity , entropy and the physics of information _ , w.h .zurek ( ed ) , addison - wesley , p 137148 , 1990 .berlekamp , j.h . conway and r.k .`` what is life ? '' ch .25 in _ winning ways for your mathematical plays , _ vol .2 : games in particular , 1982 . c.s .calude and m.a .`` most programs stop quickly or never halt , '' _ advances in applied mathematics . _ 40 295308 , 2005 .`` universality in elementary cellular automata , '' _ complex systems _ , 2004 ._ algorithmic information theory. _ cambridge university press , 1987 .calude , m.a .stay , most programs stop quickly or never halt , _ advances in applied mathematics _, 40 , p 295 - 308 , 2005 .a. gajardo , a. moreira , e. goles .`` complexity of langton s ant '' , _discrete applied mathematics _, 117 , 4150 , 2002 . m. gardner , `` mathematical games : the fantastic combinations of john conway s new solitaire game `` life '' '' ._ scientific american _ 223 : 120123 , 1970 . a. n. kolmogorov .`` three approaches to the quantitative definition of information . '' _ problems of information and transmission _ , 1(1 ) , p 17 , 1965 . c. langton .`` studying artificial life with cellular automata , '' physica d 22 , 120149 , 1986 .universal search problems , _ problems of information transmission _ 9 ( 3 ) , p 265 - 266 , 1973 .s. lin & t. rado .`` computer studies of turing machine problems , '' _ j. acm _, 12 , p 196212 , 1965 .margolus , n. physics - like models of computation " , physica , vol .10d , pp . 8195 , 1984 .wolfram s 2 , 3 turing machine research prize , http://www.wolframscience.com/prizes/tm23/ ; accessed on june , 24 , 2010 .`` on non - computable functions '' , _ bell system technical _ j. 41 , 877884 , 1962 . s. wolfram . `` twenty problems in the theory of cellular automata , '' _ physica scripta _ , t9 170183 , 1985 . s. wolfram ._ a new kind of science _ , wolfram media , 2002. t. wolfgang .`` automata on infinite objects '' , in van j. leeuwen , _ handbook of theoretical computer science , _ vol .b , mit press , pp . 133191 , 1990 . h. 
zenil .`` compression - based investigation of the dynamical properties of cellular automata and other systems , '' journal of complex systems , 19(1 ) , 2010 .`` faqs , '' _ the shortest universal machine implementation contest _, 2008 http://www.mathrix.org/experimentalait/turingmachine.html .
|
we explore the possible connections between the dynamic behaviour of a system and turing universality in terms of the system s ability to ( effectively ) transmit and manipulate information . some arguments will be provided using a defined compression - based transition coefficient which quantifies the sensitivity of a system to being programmed . in the same spirit , a list of conjectures concerning the ability of busy beaver turing machines to perform universal computation will be formulated . the main working hypothesis is that universality is deeply connected to the qualitative behaviour of a system , particularly to its ability to react to external stimulus as it needs to be programmed and to its capacity for transmitting this information . classification : 89.75.-k , 05.10.-a , 89.20.-a , 89.20.ff + keywords : dynamic behaviour , elementary cellular automata , small turing machines , algorithmic complexity , computational ( turing ) universality , busy beaver machines , sensitivity , phase transitions , theory of information .
|
energy consumption is increasing all over the world. fossil fuels, being a major source of energy, are getting depleted much faster than they can be replenished. as a result, renewable energy sources are being researched for applications that require different power ranges. low power range devices such as remote location sensors and bio-sensors have attracted much interest in recent years. scaling down high power range devices for such applications is difficult due to various design issues. microbial fuel cells, which are electrochemical devices that use electrons produced during the respiration of microbes to generate current, have gained much attention for such applications. mathematical modeling of microbial fuel cells has also been attempted. micro photosynthetic cells are a sustainable option for low power applications. such a cell uses oxygenic photosynthetic organisms such as algae to generate current in the presence of light and functions as a microbial fuel cell in the absence of light. a major advantage of this device is its ability to generate current by harnessing electrons from the electron transport chains in the photosynthetic organelles of photoautotrophs using sunlight. in the absence of light, the cell generates current from electrons harnessed from the metabolic pathways of the respiration process in photosynthetic organisms. prototypes are available in the literature. to date, the focus has been mainly on experimental aspects, with several recent works reporting device results, and there has been very little attempt at developing mathematical models for such cells. in general, the aim of any modeling exercise is to understand the underlying physical phenomena of a device and explore methods for improving device performance. modeling can help in understanding the performance limiting step(s) in the series of processes that occur during the operation of the device. performance enhancement of the device can be achieved by focusing on the rate limiting steps. modeling also helps in determining the optimal design and operational parameters that can maximize the device performance. modeling a system like this is complex since the device performance depends on the interactions of microorganisms with operational parameters such as light intensity, quantum yield and so on. further, design parameters such as the electrode structure and the electrochemical interactions at the surface of the electrodes have an effect on the device performance. a mathematical model which incorporates all the phenomena that occur during the operation of the cell will be complicated. in this work, a simple model based on first principles is developed. the aim of this modeling exercise is to predict the performance of the cell, given device specifications and a set of operational parameters.
in the present work ,a lumped parameter model approach is used in the model development .the number of parameters chosen to describe the various processes will be largely determined by the richness of data in terms of the variables that are measured .the lumped parameter used in the current study is the characteristic rate constant .this paper is organized as follows .the section on operation principles describes the details of working .the model equations are described in the model formulation section .this is followed by a description of the methodology adopted for solving the model equations and parameter estimation from the published data .subsequently , model validation and sensitivity analysis studies that are performed are described .finally , the utility of the model and interesting insights that can be derived from such a model analysis are outlined .a schematic of the photosynthetic cell is shown in figure [ schematic ] . a device consists of two chambers ( anode and cathode ) separated by a proton exchange membrane ( nafion ) .anode and cathode chambers have a capacity to hold of anolyte and catholyte respectively .porous gold electrode patterns of thick developed on both the sides of the nafion membrane using lithography techniques act as both electrodes and current collectors .green algae ( _ chlamydomonas reinhardtii _ ) suspended in a growth medium ( sueoka s high salt medium , hsm ) and a mediator ( methylene blue ) are the major components of the anolyte .potassium ferricyanide ( ) solution is used as catholyte .cell growth ( using both nutrients and glucose as substrate ) and cell decay occur inside the anode chamber .photosynthesis takes place in the presence of light , producing glucose from carbon dioxide and water .respiration occurs in both dark and light conditions .the reactions are as follows . electrons and protons are released during both respiration and photosynthesis .the mediator methylene blue ( mb ) diffuses into the microorganism and siphons these electrons from the electron carriers nadph ( during photosynthesis ) / nadh ( during respiration ) during which it gets reduced ( see eq .eq.re ) the reduced methylene blue ( ) then diffuses out of the microorganism and releases the electrons at the anode surface along with the protons thereby converting back to its original oxidized form ( see eq . eq.res ) . the electrons from the anode travel through the external circuit to the cathode chamber producing electricity .protons diffuse through the nafion membrane to the cathode side . at the cathode surface , potassium ferricyanide( ) gets reduced to potassium ferrocyanide ( ) using the electrons from the external circuit , ( see eq . thus formed supplies electron to and protons which combine to form water and in the cathode chamber ( see eq . in this paper , data from the experimental results reported in the prior work of the some of the authors of this paper are used for parameter estimation and model validation .the mathematical model developed in this work is intended to predict the current - voltage behavior of a in the presence of light . the inputs to the model are the initial concentrations of the species in the anode , light intensity , the external loads and the design parameters of the device .+ _ assumptions : _ 1 . 
is operated under isothermal conditions at and atmospheric pressure .both anode and cathode chambers are assumed to be well mixed batch reactors .cell growth in anode chamber is governed by monod kinetics .( monod kinetics is most generally used mathematical model to describe the growth of the suspended microorganisms in the aqueous medium as a function of concentration of nutrient medium ) 4 .electrode kinetics follows butler - volmer equation .diffusion of carbon dioxide , oxygen and sugar through nafion membrane are assumed to be negligible .photosynthesis is considered to be the dominant process in the presence of light .concentration of species on the electrode surface is assumed to be equal to their corresponding bulk concentrations .no diffusion effects are considered .water is assumed to be in excess in the anode chamber .oxygen is assumed to be available in excess in the cathode chamber .activation losses near the cathode are assumed to be negligible .the following phenomena occur in the bulk of the anode chamber .cells grow by consuming the nutrient medium and they decay at a specified rate or when the nutrient medium is exhausted . with the assumptions stated , the temporal variation of concentrations of the species ( cells ) , ( nutrients ) , in the anode chamber can be described by the following differential equations .the change in cell concentration by growth and decay of cells can be described by eq .cell with , the death rate of cells , , the growth rate of cells , characterized by monod kinetics . where is maximum specific growth rate , the nutrient medium used for growth , and is the half saturation constant w.r.t . the nutrient concentration change can be represented by eq .nutrient where is yield coefficient of cells w.r.t nutrients .the next step is to understand the source of electrons and the mechanism by which they reach the anode surface .there are several reactions that occur during photosynthesis and a complete description of the process with detailed mechanisms can be very complicated .therefore , we propose a simple one - step mechanism that results in a tractable and useful model . a good description of processes that happen during photosynthesis can be found in . the first step in photosynthesis is the water splitting reaction .this reaction happens inside the thylakoid membrane of chloroplast . the electrons are received by the electron acceptor and forms at the end of the electron transport chain .each can take and one . mb added in the anolyte diffuses in to the cell and siphons the electrons and the protons from to form . at the anode surface is oxidized back to . the ideal way to model this phenomena is to consider the concentration variations of all the species in anode chamber , the rate of reactions occurring in bulk and the effect of diffusion on the concentration of species at the electrode surface for use in bulter - volmer equation . 
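a minimal numerical sketch of the bulk anode-chamber balances described above (growth minus decay for the cell concentration, nutrient consumption scaled by the yield coefficient, with monod kinetics for the growth rate) is given below. the original work integrates these equations in matlab with ode15s; here scipy is used instead, and all parameter values are illustrative placeholders rather than the values quoted in table [param].

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K_s, k_d, Y = 0.05, 0.10, 0.005, 0.5   # illustrative: max growth rate, half-saturation
                                               # constant, decay rate, yield of cells w.r.t. nutrients

def anode_bulk(t, y):
    X, S = y                                   # cell and nutrient concentrations
    mu = mu_max * S / (K_s + S)                # monod growth rate
    dX = (mu - k_d) * X                        # growth minus decay (eq. cell)
    dS = -mu * X / Y                           # nutrient consumption (eq. nutrient)
    return [dX, dS]

sol = solve_ivp(anode_bulk, (0.0, 200.0), y0=[0.01, 1.0])   # initial cell and nutrient levels
print(sol.y[:, -1])                            # concentrations at the end of the run
```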
to simplify the model , we conceptualize the electron carrier , and as intermediates and adding eq .water split , eq .nadph formation , eq .mb_red formation and eq .mb_red to mb , we obtain assuming eq .anode final occurs at the anode surface , the complexity will be reduced to a great extent by considering the cell concentration and light intensity as reactants .this is because , at a macro level , the rate at which the water splitting occurs in a microorganism is a function of both cell concentration and light intensity .an important point to note at this juncture is that the water splitting reaction is assumed as a representative of all the phenomena that occur in the anode chamber .this can be summarized in eq .anode complete reaction here , has much more significance than a mere rate constant of the water splitting reaction since this rate constant represents the reaction rate and also the many transport phenomena that are involved in the movement of all the active species .the following processes occur in the cathode chamber .the electrons received at the cathode surface are used to reduce potassium ferricyanide( ) to potaassium ferrocyanode( ) . protons diffuse from the anode side to cathode chamber through nafion , and take part in the reaction , where is oxidized to by donating electrons to oxygen to form water . detailed modeling of the cathode chamber should ideally track the concentrations of all the species in cathode chamber and the influence of diffusion effects on the species concentrations at the electrode surface where the electrochemical reactions occur .this complexity can be handled if we assume that the overall reaction that occurs on the cathode surface as the oxygen reduction reaction ( orr ) .the rationale for this assumption is the same as the one used in the modeling of the anode chamber .adding eq .pfred and eq .pf we obtain similar to the anode chamber , the rate constant has to be interpreted as not just being the rate constant for the oxygen reduction reaction .the voltage , and current , produced when an external resistance , is connected to a is given by where is non - standard thermodynamic voltage ; and are ohmic , concentration and activation losses across the cell respectively .ohmic losses occur due to the transfer of current through the internal resistance of the and can be represented by ohm s law . using eq .ohmic , and rewriting the activation losses at two electrodes individually , eq .voltage takes the form concentration over - potentials are due to mass transport losses at higher current densities .since the diffusion effects are not explicitly modeled , the concentration losses are incorporated into the characteristic rate constant and are not represented in terms of voltage .activation losses near cathode are neglected following and rewriting eq .v1 for , the anodic activation loss is related to current density of the anodic reaction . with , eq .volt2 can be written as the current density at the anode surface is given by the butler - volmer equation . \end{gathered}\ ] ] where is current density ; is exchange current density and is given by eq .currdens and is the electrode surface area . 
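for a given external load, the operating point follows from solving the voltage balance together with the butler-volmer relation at the anode. a compact way to do this, sketched below, is to treat the activation overpotential as the single unknown: the residual of the voltage balance is monotonic in it, so a bracketed root finder suffices. all numerical values are illustrative assumptions rather than the paper's estimated parameters, and the original solution procedure used matlab's nonlinear equation solver.

```python
import numpy as np
from scipy.optimize import brentq

F, Rg, T = 96485.0, 8.314, 298.15        # faraday constant, gas constant, temperature (K)
E_cell, R_int, A = 0.6, 50.0, 1e-4       # illustrative: open-circuit voltage (V), internal
                                         # resistance (ohm), electrode surface area (m^2)
i0, alpha = 1e-3, 0.1                    # illustrative exchange current density (A/m^2),
                                         # charge transfer coefficient

def butler_volmer(eta):
    """anode current density as a function of the activation overpotential eta."""
    return i0 * (np.exp(alpha * F * eta / (Rg * T)) - np.exp(-(1.0 - alpha) * F * eta / (Rg * T)))

def voltage_residual(eta, R_ext):
    """voltage balance v = e_cell - i*r_int - eta with v = i*r_ext, rearranged to zero."""
    I = butler_volmer(eta) * A
    return I * (R_ext + R_int) + eta - E_cell

R_ext = 1000.0                           # external load (ohm)
eta = brentq(voltage_residual, 0.0, E_cell, args=(R_ext,))
I = butler_volmer(eta) * A
print(I, I * R_ext)                      # operating current (A) and cell voltage (V)
```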
with , the characteristic rate constant , , intensity of incident light ( ) , , conversion factor ( lumen to photons per sec ) , , quantum yield,( ) , , rate constant of anode reaction , , rate constant of cathode reaction , , product of reactant species concentration in anode reaction , , product of reactant species concentration in cathode reaction and , exposure surface area . is taken as unity based on assumption that oxygen is in excess in cathode chamber .solving eq .volt3 and eq .bv simultaneously , the current density of can be obtained .voltage from the can be calculated from eq .the characteristic rate constant , , has information about both the rate constants and of the proposed consolidated equations for anode and cathode . , and are related by the equation .the following reactions occur on the electrode surfaces .the standard reduction potentials of the bio - reactions at the anode are adapted from .anode surface : oxidation of reduced methylene blue to methylene blue . cathode surface : reduction of potassium ferricyanide to potassium ferrocyanide . following , , the standard cell potential can be obtained at with species concentrations at m .generally , nernst equation is used to relate the standard cell potential and the potential that can be obtained with the cell operating conditions . in the present study ,the model developed being lumped , the effect of species concentrations at the operating conditions can not be incorporated through the nernst equation .hence , the standard cell potential is used in the simulations assuming that the other terms are absorbed in the characteristic rate constant .the model equations presented contains odes ( eq . cell and eq .nutrient ) and algebraic equations ( eq . volt3 and eq .the four unknown variables are : the cell concentration , nutrient concentration , the current density , and the activation over potential .first , the two odes are integrated using the matlab inbuilt integrator ( ode15s ) and the final concentration at each iteration of is used as an initial guess for the next and also to calculate .the current density of , , is obtained by solving the two algebraic equations using the non - linear equation solver of matlab .the current and the voltage from the model are calculated by using eq .bv and eq .voltage respectively .various parameters present in the model and their approximate values taken from the literature are listed in table [ param ] . [ cols="<,<,<,<,<",options="header " , ] an optimization problem is solved to find the optimal values of for a chosen set of points in the data .the available data ( points ) is divided in to two sets , first set( points ) is used for parameter estimation and the second set( points ) , test data , is used to validate the model with the chosen parameters .the objective of the optimization problem is to reduce the sum square error ( sse ) between the experimental and the predicted values from model .the optimization problem formulated is shown in eq . optimum values of characteristic rate constant are estimated for the chosen points from the data . 
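the parameter estimation step can be expressed as a one-dimensional minimisation of the sum square error for each experimental load point, as sketched below. the forward model is passed in as a callable (for instance the coupled ode / butler-volmer solve outlined above); the bounds on k and the function names are our own illustrative assumptions.

```python
from scipy.optimize import minimize_scalar

def fit_k(forward_model, R_ext, I_exp, V_exp, k_bounds=(1e-12, 1e-3)):
    """estimate the characteristic rate constant k for one experimental point.

    forward_model(k, R_ext) must return the model-predicted (current, voltage) for the
    given external load; the squared error against the measured pair is minimised over k."""
    def sse(k):
        I_mod, V_mod = forward_model(k, R_ext)
        return (I_mod - I_exp) ** 2 + (V_mod - V_exp) ** 2
    return minimize_scalar(sse, bounds=k_bounds, method="bounded").x

# usage sketch: one k per training point; values for the test loads are then interpolated
# k_values = [fit_k(model, R, I, V) for R, I, V in training_points]
```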
since the model developed is a lumped parameter model , the effects of phenomena that are not modeledhave been incorporated through the variation of the characteristic rate constant for each external load .figure [ vi_chara ] shows the comparison between the experimental data points of data used for parameter estimation and the values obtained from the model with the estimated parameters .the rmse of the fit is 0.0025 .this indicates that the parameters estimated are consistent and accurate .power densities are calculated based on the data from the model and are plotted against the experimental power densities . figure [ powerdens ] compares power density calculated from model and experimental values .the fit emphasizes the capability of the model to produce consistent output with the trained data and the optimized parameters .figure [ loglogka ] shows the log - log plot of the estimated and .it is interesting to observe that there are two power law regions in the plot .the implications of this behavior of as a function of provide some insights about the operating regimes of the cell .a discussion on these insights are presented later .vs ]model validation is a crucial step in any modeling exercise . in the presentwork the model is validated by using the steady - state experimental data , which is not used in parameter estimation . for data validation ,the model responses are predicted for the test data points .figure [ vi_test ] shows comparison between the experimental and the predicted characteristics .the rmse is 0.0024 for the fit .the fit suggests that the model is able to quite accurately predict the voltage and current for the test data .the k values for the test data are estimated as a non - parametric interpolation of the k data identified from the training data .sensitivity analysis is performed to understand the opportunities for performance assessment .the proposed model is used for predicting the performance of the device for changes in : electrode surface area , incident light intensity , the concentration of cells in anode chamber , light exposure surface area .[ applications ] the model predicted characteristics for different electrode surface areas are shown in the figure [ ae_vi ] .the model predicts increase in current with increase in electrode surface area which is consistent .a similar trend was also obtained for power density .decrease in electrode surface decreases the area available for reactions to occur directly affecting the performance .this shows that even with the same cell concentrations much better performance might be possible by increasing the electrode surface area .the characteristics for the response of the system to different light intensities is shown in .the model predicts decrease in current with decrease in light intensity and vice versa .one would expect that the current produced should be a very strong function of , but the results show that the current is a weak function of the former . 
in other words , for this , between increasing the electrode area and light intensity ( or illumination surface area as shown in figure [ as_vi ] ) , the former is preferable .shows the characteristics for various initial cell concentrations .the decrease in the current and voltage is observed with decreasing cell concentrations as shown in .it is observed that the model predicts the current as a weak function of cell concentration .this is consistent with the other results as the strong dependence on the electrode area shows that the current cell population itself is underutilized and increasing the cell count is not likely to increase current because of the paucity of the active surface area for the electrochemical reactions .it is important to note that these conclusions are not easy to reach by looking at just the data without this modeling exercise .this emphasizes the power of this simple model in identifying key performance limiting factors .when is plotted vs on a log - log graph ( see figure [ goodka ] . ) it can be observed that the experimental data collected for the cell can be divided into two regions . in the first region , the order of is almost constant , i.e , the cell is being operated in an ohmic region , where the performance is not mass transfer limited .the reactants are supplied at electrodes at the same rate at which the reactants are used up in reaction .the second regime where decreases as a power law , corresponds to the reaction rate limiting region ( activation regime ) of the operation of the cell .the sudden drop of voltage from in v - i profile , which corresponds to the activation loss dominant regime , is captured by large changes in parameter k. the key parameter in the model is k , which represents the several transport phenomena and the rates of reactions . the k is related to through . also has an interpretation of a charge transfer coefficient in butler - volmer equation , which is typically assumed to be .however , in our optimization approach whenever the range of was fixed as , there were discontinuities in the estimated values of as shown in the log - log plot of vs in figure [ worstk ] . 
however , for a small value as used in our model , we could observe the natural two power law region curves as shown in figure [ vi_test ] .hence , a value of was chosen , which is still in the acceptable range of .it is also worthwhile to keep in mind that the complicated multi - step photosynthesis reaction mechanism has been simplified into a single water splitting reaction step and hence the charge transfer coefficient can only be thought of as another parameter in the model .it is also interesting to observe that when is used to fit the v - i data , k is almost of the same order of magnitude in the lower current regime and then varies significantly in the high current regime .that is , fits the data into the regimes where ohmic losses and mass transfer losses are dominant .however , for , data was fit to regimes where ohmic losses and activation losses are dominant as mentioned before .the reason for a smooth fit for compared to could be because the experimental data that was used for parameter estimation lies in the region of dominant ohmic and activation losses .a mathematical model to predict the performance of a was developed .the model was thoroughly validated with steady - state experimental data .it was shown that this model could be used to predict the behavior of the that was considered .several insights provided by the model regarding the operation of the were described . for this particular design ,the model was able to unequivocally identify that the performance limitation as largely due to lack of enough active sites for reaction and not cell concentration or light intensity .as future work , if the model is extended to include the geometry of the electrode patterns , diffusion phenomena , and the multi - step reaction processes that occur , then it can be used to comprehensively optimize the various design and operational parameters of a .the authors would like to thank the peer group , dr .simona badilescu and dr .jayan ozhi kandthil from concordia university , montreal , canada for their support. the authors also thank mr .laya das from iit gandhinagar , and mr .srinivasan raman and dr .parham mobed from texas tech university for their help in discussions on solutions to model equations . c. picioreanu , k. p. katuri , m. c. van loosdrecht , i. m. head , and k. scott , `` modelling microbial fuel cells with suspended cells and added electron transfer mediator , '' _ journal of applied electrochemistry _40 , no . 1 ,pp . 151162 , 2010 .r. p. pinto , b. srinivasan , and b. tartakovsky , `` a unified model for electricity and hydrogen production in microbial electrochemical cells , '' in _ preprints of the 18th intenational federation of automatic control ( ifac ) world congress milano ( italy ) august _ , 2011 .k. b. lam , e. johnson , and l. lin , `` a bio - solar cell powered by sub - cellular plant photosystems , '' in _ micro electro mechanical systems , 2004 .17th ieee international conference on.(mems)_.1em plus 0.5em minus 0.4emieee , 2004 , pp . 220223 .m. rosenbaum , u. schrder , and f. scholz , `` utilizing the green alga chlamydomonas reinhardtii for microbial electricity generation : a living solar cell , '' _ applied microbiology and biotechnology _ , vol .68 , no . 6 , pp . 753756 , 2005 .k. b. lam , e. a. johnson , m. chiao , and l. lin , `` a mems photosynthetic electrochemical cell powered by subcellular plant photosystems , '' _ microelectromechanical systems , journal of _ , vol .15 , no . 5 , pp . 12431250 , 2006 .a. ramanan , m. packirisamy , and s. 
williamson , `` advanced fabrication , modeling , and testing of a micro - photosynthetic electrochemical cell for energy harvesting applications , '' _ power electronics , ieee transactions on _ , vol .pp , no .99 , pp . 11 , 2014 .a. v. ramanan , m. pakirisamy , and s. s. williamson , `` advanced fabrication , modeling , and testing of a microphotosynthetic electrochemical cell for energy harvesting applications , '' _ power electronics , ieee transactions on _ , vol . 30 , no . 3 , pp .12751285 , 2015 .d. noren and m. hoffman , `` clarifying the butler volmer equation and related approximations for calculating activation losses in solid oxide fuel cell models , '' _ journal of power sources _ , vol .152 , pp . 175181 , 2005 .m. vtov , k. biov , m. hlavov , s. kawano , v. zachleder , and m. kov , `` chlamydomonas reinhardtii : duration of its cell cycle and phases at growth rates affected by temperature , '' _ planta _ , vol .234 , no . 3 , pp .599608 , 2011 .
|
a simple first-principles mathematical model is developed to predict the performance of a micro photosynthetic power cell, an electrochemical device which generates electricity by harnessing electrons from photosynthesis in the presence of light. a lumped parameter approach is used to develop a model in which the electrochemical kinetic rate constants and diffusion effects are lumped into a single characteristic rate constant. a non-parametric estimation of the characteristic rate constant is performed by minimizing the sum square errors (sse) between the experimental and model predicted currents and voltages. the developed model is validated by comparing the model predicted characteristics with experimental data not used in the parameter estimation. sensitivity analysis of the design parameters and the operational parameters reveals interesting insights for performance enhancement. analysis of the model also suggests that two different operating regimes are observed in this device. this modeling approach can be used in other designs of such cells for performance enhancement studies. micro photosynthetic power cell, first principles model, parameter estimation, optimization, sensitivity analysis
|
a numerically fast method of obtaining aerodynamic characteristics of an aircraft wing airfoil , e. g. the shape of the wing as seen in cross - section , is of great interest , especially since obtaining an exact solution based on full navier - stokes equations is usually a prohibitively complex task . over years, a range of approaches has been developed to predict the lift and drag forces acting on a given shape .the analytical solutions are possible with the conformal mapping , where a solved problem of a flow around a simple shape such as a cylinder is used along coordinate transform to obtain the lift and drag of an airfoil shape . nowadays, the analysis of a two - dimensional viscous and inviscid flow over an airfoil can be performed with a great accuracy on a personal computer , and the so - called inverse design can be used to generate the shape out of a prescribed pressure distribution .alternatively , one can design the airfoil by trial and error , using some model equation to generate the geometry and refine it either manually or with optimization procedure .therefore , it is of interest to provide a model having enough flexibility to describe a wide range of airfoils while keeping the number of free parameters to the minimum .several commonly used approaches for shape generation and genetic optimization are b - splines and other parametric descriptions such as 11 parameter parsec airfoil family .the airfoil is defined as a pair of parametric equations for and for , where + + b - base shape coefficient . for b=2 ,the base shape of the airfoil is ellipse . in the limit of b approaching 1 , the shape becomes a rectangle .this parameter affects mostly the leading edge .+ + t - thickness as a fraction of the chord .+ + p - taper exponent . for p=1 ,when going to the trailing edge , the thickness approaches 0 in a linear manner . for higher value , the airfoil tapers more suddenly near the trailing edge .+ + c - camber , as a fraction of chord . + + e - camber exponent , defining the location of the maximum camber point , where describes camber point , and smaller value shifts it towards the leading edge . + + r - reflex parameter .positive value generates reflexed trailing edge , while negative one emulates flaps .+ + it should be stressed that the presented description allows for intuitive manipulation of airfoil shape .the parameters have a clear meaning , and their change impacts the profile in a predictable manner . moreover , the generated shape is continuous and relatively smooth .the example airfoils generated with the above equation are shown on the fig . [ fig_foils ] .one can make several general observations .all the parameters affect the airfoil globally .one of the crucial characteristics of the airfoil is the leading edge , which is strongly dependent on the value of , with a typical , rounded shape obtained for .the taper parameter provides control over the maximum thickness point ; a characteristic shape of low drag laminar flow airfoils can be obtained for high values of . 
the thickness distribution and the camber line both follow from the upper and lower surface curves given by eq. [main_eq]; these two quantities give an insight into the design performance characteristics and allow for easy comparison with other airfoils. (caption of fig. [fig_foils]: example airfoils generated from a baseline set of parameters, where one of the parameters is changed in each panel according to the description near the profile.) the performance optimization is performed in several steps. a known airfoil is chosen as a reference. first, the shape parameters are adjusted to match the selected profile. due to the limited precision of the curve fitting and the constraints resulting from the relatively low number of free parameters of eq. [main_eq], small differences between the generated shape and the reference one are expected. this is shown on fig. [fig_clark], bottom right panel, where a commonly used clark y profile has been selected as the base shape. as a next step, the analysis tool xfoil by mark drela is used to evaluate the lift and drag polars of the resulting shape and compare it with the chosen airfoil at a given operating point. the results for the chosen reynolds number are shown on fig. [fig_clark]. due to the highly nonlinear nature of the problem, relatively small deviations from the reference shape result in significant differences in the performance figures. the lift-drag polar is shown on the upper left panel. the obtained airfoil maintains the general characteristics of clark y - the minimum coefficient of drag is obtained at lift coefficient and starts to increase significantly for . interestingly, the generated profile is characterized by notably smaller drag at , which contributes to a higher maximum lift to drag ratio (upper right panel; the peak is reached for angle of attack ). moreover, the new airfoil has the advantage of a lower pitching moment (bottom left panel). as a final step, the optimization of parameters is performed, with the aim of maximizing the lift to drag ratio over a range of angles of attack. the evolutionary algorithm is used to alter the shape, using the obtained profile as a seed upon which new airfoils are generated. the results of optimization are shown on fig. [fig_clark2]. the peak is significantly increased and high values of lift to drag ratio are obtained over a bigger range of angles of attack. one can see that the evolutionary optimization allowed for drag reduction, while keeping the maximum lift coefficient and pitching moment relatively constant. in the optimization, the genetic algorithm ran for 10 generations, each consisting of 9 airfoils. notably, due to the smooth nature of the generated shapes, there were no cases of very bad performance, and the variation of the peak among the population was not very significant. one can conclude that the procedure is capable of improving upon an existing airfoil while keeping its key characteristics, provided that the initial shape can be closely matched by eq. [main_eq].
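the refinement loop itself can be very simple. the sketch below mutates the six shape parameters around the current best candidate and keeps any candidate that scores better; the paper does not spell out the exact genetic operators it uses, so the (1+lambda)-style scheme, the mutation scale and the generation/population sizes here are our own assumptions, and the fitness function (e.g. the peak lift-to-drag ratio returned by an xfoil run on the generated geometry) must be supplied by the user.

```python
import random

def mutate(params, scale=0.05):
    """return a slightly perturbed copy of the six shape parameters (b, t, p, c, e, r)."""
    return {k: v * (1.0 + random.uniform(-scale, scale)) for k, v in params.items()}

def refine(seed_params, fitness, generations=10, population=9):
    """keep the best of `population` mutated candidates per generation, starting from a seed airfoil.

    fitness(params) must return a score to maximise, e.g. the peak lift-to-drag ratio
    obtained from an external xfoil run on the geometry generated from params."""
    best, best_score = dict(seed_params), fitness(seed_params)
    for _ in range(generations):
        for cand in (mutate(best) for _ in range(population)):
            score = fitness(cand)
            if score > best_score:
                best, best_score = cand, score
    return best, best_score

# usage sketch: refine({"b": ..., "t": ..., "p": ..., "c": ..., "e": ..., "r": ...}, my_xfoil_fitness)
```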
as another example, where a clear performance improvement has been obtained, we have used a naca 4-digit profile, the naca5412. the optimization results are shown on fig. [fig_naca1]. a significant increase of the lift to drag ratio over the whole angle of attack range is gained with a relatively small alteration to the initial shape, while having almost no impact on the pitching moment curve. it should be stressed that the calculations were stopped before a local optimum had been found and further refinement is still possible. however, longer evolution would divert more from the starting shape and would possibly need additional constraints on the geometry of the generated airfoil to keep the assumed design criteria. an example of overspecialisation due to the lack of sufficient constraints is shown in fig. [fig_drela]. the evolution of the low reynolds number airfoil ag24 by mark drela results in a design compromise which sacrifices low drag at small lift coefficient for a small gain in the peak lift to drag ratio. finally, fig. [fig_wing] shows the results for a brand new airfoil created for a flying wing. in this example, the baseline shape has been generated by manual adjustment of the parameters according to the design goals of thickness and near zero pitching moment at . the optimization has been used to perform only fine adjustments, further refining the design. we have created a simple mathematical model for the description of the airfoil shape with only six parameters, which allows the designer to easily numerically describe the curve by using simple terms like thickness, camber and reflex. the description is flexible enough to approximate a wide range of existing profiles, while its simplicity is well suited for optimization with a genetic algorithm. for a wide range of parameters, the obtained shape is smooth and lacks discontinuities, making it attractive for easy numerical analysis. we show the procedure that can be used to improve upon existing, well known airfoils at specific operating points by creating a close approximation with the provided model and then refining it. we also show an example of another hybrid approach, where a new design tailored specifically for a flying wing type aircraft is designed manually and further improved by a genetic algorithm. t. benson, _ interactive educational tool for classical airfoil theory _, cleveland, ohio: nasa lewis research center (1996), http://www.grc.nasa.gov/www/k-12/airplane/foiltheory.pdf. m. drela, _ xfoil: an analysis and design system for low reynolds number airfoils _, low reynolds number aerodynamics, springer berlin heidelberg (1989). d. goldberg, _ genetic algorithms in search, optimization and machine learning _, addison-wesley, reading, massachusetts (1989). d. w. fanjoy and w. a. crossley, _ aerodynamic shape design for rotor airfoils via genetic algorithm _, journal of the american helicopter society, * 43 *, 3, 263-270 (1998). a. viccini and d. quagliarella, _ inverse and direct airfoil design using a multiobjective genetic algorithm _, aiaa journal * 25 *, 9, 1499-1505 (1997). h. sobieczky, _ parametric airfoils and wings _, notes on numerical fluid mechanics * 68 *, 71-88 (1998). m. s. selig, m. d. maughmert, and d. m. somers, _ natural-laminar-flow airfoil for general-aviation applications _, journal of aircraft * 32 *, 4 (1995). a.
piccirillo, _ the clark y airfoil - a historical retrospective _, sae/aiaa paper 2000-01-5517, the world aviation congress & exposition, san diego, california (10.10.2000). e. n. jacobs, k. e. ward, and r. m. pinkerton, naca report no. 460, _ the characteristics of 78 related airfoil sections from tests in the variable-density wind tunnel _, naca (1933). g. a. williamson, b. d. mcgranahan, b. a. broughton, r. w. deters, j. b. brandt, m. s. selig, _ summary of low-speed airfoil data _ * 5 *, department of aerospace engineering, university of illinois at urbana-champaign (2012).
|
we show a simple , analytic equation describing a class of two - dimensional shapes well suited for representation of aircraft airfoil profiles . our goal was to create a description characterized by a small number of parameters with easily understandable meaning , providing a tool to alter the shape with optimization procedures as well as manual tweaks by the designer . the generated shapes are well suited for numerical analysis with 2d flow solving software such as xfoil .
|
recently , detailed analysis on the high - frequency financial market data has shown that there exist some universal statistical characteristics for price or index fluctuations , in particular the fat tail distribution and rapid decay of correlation for price changes , and the persistence of long - range volatility correlation .for one of these fundamental features , the probability distribution , the power - law asymptotic behavior with an exponent about has been found from the daily and high - frequency intra - daily stock market data .many efforts have been made to simulate the market behaviors and dynamics , and then to reproduce these stylized observations of real markets .much work focuses on the microscopic discrete models , with different mechanisms based on the intrinsic structure of financial markets , including the herding and imitation behaviors as well as the mutual interactions among market participants .the other way to model the dynamics of financial markets is using the approach of continuous stochastic process and then , e.g. , determining the effective stochastic equation for price evolution .based on the analysis of hang seng index ( hsi ) in hong kong and the method of conditional averages proposed for generic stationary random time series and previously applied in fluid turbulence , a langevin equation reproducing well both the observed probability distribution of index moves with fat tails and the fast decay of moves correlation has been derived .the existence of a viscous market restoring force and a move - enhanced noise is shown in the equation . moreover ,an analytic form for the whole range of probability distribution has been obtained , and interestingly , the corresponding asymptotic tail behavior is an exponential - type decay : where the index move with time interval ( e.g. , 1 min ) , faster than the power law behavior with exponent about found in recent studies .the parameters can be directly determined from the market data ( in which the first 20 minutes in the opening of each day are skipped ) , and the tail behavior ( [ exp ] ) has also been observed in the simulations of our self - organized microscopic model with social percolation process , which is proposed to describe the information spread for different trading ways across a social system . instead of describing the details of our modelings for financial market behaviors which have been or will be published elsewhere , here we present our work on the analysis of hang seng index ( hsi ) , showing that the properties of probability distribution and volatility correlations for index fluctuations depend on the opening effect of each trading day ( i.e. , the overnight effect ) , which can also explain the above difference between the exponential - type fat tail behavior derived in our langevin approach and the recent empirical findings of power law distribution .the hsi data we used contains minute - by - minute records of every trading day from january 1994 to december 1997 , and the break between the morning and afternoon sessions as well as the difference between trading days are considered in our analysis .first , we skip the data in the first 20 minutes of each morning session , i.e. , skip the opening of each trading day , and the deviation from power law in the tail region of the distribution for 1 min interval index moves is found ( fig .[ fig - pdf ] , circles ) . 
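the two quantities at the centre of this analysis — the empirical distribution of 1-min index moves with or without the opening minutes, and the autocorrelation of the moves and of their absolute values (the volatility) — can be computed along the following lines. the data layout is schematic (one array of minute-by-minute index values per trading day, with the lunch break already handled), and the function names and binning choices are ours.

```python
import numpy as np

def one_minute_moves(days, skip_opening_minutes=0):
    """collect 1-min index moves, optionally dropping the first minutes of each trading day."""
    moves = [np.diff(np.asarray(day, dtype=float)[skip_opening_minutes:]) for day in days]
    return np.concatenate(moves)

def empirical_pdf(moves, n_bins=60):
    """probability density of normalised absolute moves, suitable for a log-log tail plot."""
    z = np.abs(moves) / np.std(moves)
    pdf, edges = np.histogram(z, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, pdf

def autocorrelation(x, max_lag):
    """sample autocorrelation at lags 1..max_lag; applied to |moves| it measures volatility clustering."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-lag], x[lag:]) / ((len(x) - lag) * var)
                     for lag in range(1, max_lag + 1)])

# moves_all  = one_minute_moves(days)                           # opening included
# moves_skip = one_minute_moves(days, skip_opening_minutes=20)  # first 20 minutes of each day dropped
```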
in this casethe power law seems to be a crossover effect within a limited range , and for large index moves the log - log plot exhibits curvature , corresponding to the exponential - type eq .( [ exp ] ) derived from the langevin approach .next , we analyze the data without any skip in daily opening , and it is interesting to find that the power law scaling is recovered for 1 min interval , as shown in fig .[ fig - pdf ] ( triangles ) , which is in agreement with recent observations from german share price index dax and s&p500 index .this phenomenon shows the importance of the daily opening or overnight effect for the properties of stock market .it is well known that price fluctuations in the opening of trading day are highly influenced by exogenous factors , and the studies on trading volume have exhibited the larger and less elastic transactions demand at opening and close times compared with that at other times of the trading day .very recently , it has been observed from the german dax data that due to the peculiarity in the calculation of the opening index with the mixture of overnight and high - frequency price changes , the first observations of each day are governed by the process different from that of the other times .however , a power law scaling with exponent between 4 and 5 is found for dax data in when the first 15 minutes of every day are dropped , instead of the exponential - type behavior here .for hsi data it is found that the values of index moves and the volatility at the daily opening times are much larger than those of other times .[ fig - volat ] shows the mean of the absolute value of index moves and the volatility for different times of morning session ( open at 10:00 ) , where the averages are over different trading days from 1994 to 1997 at the same minute .both of the values are obviously larger for the first 20 minutes , and then remains almost unchanged at late times , similar to the phenomena of german dax data .thus , when skipping the opening data , much less extreme values of index move are calculated in the probability distribution , and consequently , the far tail of distribution may decay faster , as seen in fig .[ fig - pdf ] .here we find that the different behavior of distribution shown in fig .[ fig - pdf ] is relevant to the different properties of volatility clustering .[ fig - corr ] shows the autocorrelations of index moves and volatility for 1 min interval , with and without the skip of first 20 minutes , where the correlation for the index move rapidly decays to zero in about 10 minutes , and the persistence of long - range volatility correlation , ( averaged over the whole index time series ) is found , in accordance with the previous studies .the correlations of moves present little difference with or without the skip , however , the volatility correlation with no skip ( fig .[ fig - corr ] , stars ) is obviously smaller .this decrease is due to the fact that the volatility correlations of the first few minutes in the daily opening are much smaller than those of other times , as given in fig .[ fig - open ] .note that hong kong stock market opens at 10:00 in the morning , and fig .[ fig - open ] shows the volatility correlations of different times , defined as which is similar to eq .( [ volat ] ) , but averaged only over different days ( at the same time ) in the period of 1994 - 1997 . in the opening time region , the value of correlation increases with the increasing of time , and after the opening ( about 20 minutes , i.e. 
, 10:20 ) , the correlation keeps relatively unchanged ( with the values around the pluses of fig .[ fig - corr ] ) .the absolute value of index move is used to calculated the volatility correlations in the above study , as shown in eqs .( [ volat ] ) and ( [ volat - open ] ) .if using the square of move instead , the values of correlation are found to be smaller , but the above results will not change , as shown in figs .[ fig - corr2 ] and [ fig - open2 ] .= 9.5 cm = 9.5 cm it is known that the hong kong stock market behaved abnormally during the second half of 1997 , due to the much more significant impact of external conditions .when we discard the data of 1997 and only study the market from 1994 to 1996 , the results are the same as above .in this work we have presented that the index fluctuations for the first few minutes of daily opening behave very differently from those of the other times , and the lower degree of volatility clustering at the opening can affect the behaviors of fat tail distribution : power law behavior if including the daily opening data , or the exponential - type if not . to further understand these properties of hsi market data ,more work is needed to study the details of the opening procedure of stock market .the author thanks the workshop organizers of `` economic dynamics from the physics point of view '' for such a very enjoyable seminar , and dietrich stauffer , lei - han tang , and thomas lux for very helpful discussions and comments .i also thank lam kin and lei - han tang for providing the hsi data .this work was supported by sfb 341 .
|
based on the minute - by - minute data of the hang seng index in hong kong and the analysis of probability distribution and autocorrelations , we find that the index fluctuations for the first few minutes of daily opening show behaviors very different from those of the other times . in particular , the properties of tail distribution , which will show the power law scaling with exponent about or an exponential - type decay , the volatility , and its correlations depend on the opening effect of each trading day . probability distribution ; volatility ; autocorrelation ; exponential ; power law .
|
multiuser multiple - input multiple - output ( mu - mimo ) antenna system has been recognized as an effective means to increase capacity in the downlink . however , mu - mimo may not be as effective if edge - of - cell users are concerned due to the severe inter - cell interference that is hard to suppress . in recent years, it has emerged that letting base stations ( bs ) cooperate can greatly improve the link quality of the edge - of - cell users by turning unwanted interference into useful signal energy , e.g. , ( and the references therein ) .ideally , by sharing all the required information via high - speed backhaul links , all bss in a downlink cellular network can become a super bs with distributed sets of antennas .this architecture will then allow the use of well - known optimal or suboptimal transmission strategies such as capacity - achieving dirty - paper coding ( dpc ) techniques and zero - forcing beamforming ( zfbf ) , respectively .although dpc is capacity - achieving , it is very complex and massive interest has been to employ zfbf with a simple scheduler to approach near - capacity performance . for example , several testbeds for implementing bs cooperation have adopted zfbf techniques , e.g. , .regularized zfbf ( rzfbf ) is a generalization of zfbf by introducing the regularization parameter .it has been revealed that several beamformers can have a rzfbf structure by selecting the regularization parameter properly . even though information - theoretic studies have provided overwhelming support to rzfbf ,the real question is how could rzfbf be implemented in a very large - scale cellular network ? a straightforward way to implement rzfbf would be to require that there is a central processing unit which possesses all the necessary channel state information ( csi ) and performs the entire optimization .however , as a network expands with more bss cooperating , it becomes inviable to perform joint processing over all bss because of the limiting backhaul capacity and the excessive computational complexity .it is therefore of greater interest to consider an architecture where bss only communicate with neighboring bss and the overall computation cost is decomposed into many smaller computational tasks , amortized by groups of smaller number of cooperating bss .motivated by this , in this paper , we propose two message passing algorithms to realize rzfbf in a distributed manner .the proposed approaches are particularly well suited to cooperation of large clusters of simple and loosely connected bss .most importantly , in our designs , each bs is only required to know the data symbols of users within its reception range rather the entire cellular network , greatly reducing the backhaul requirements .the use of distributed methods in beamforming computations has been studied recently in .our approach is similar to in that both aim at achieving rzfbf and use belief propagation ( bp ) .nonetheless , the two approaches differ considerably .our main contributions are summarized as follows : * first , we generalize the earlier results in to incorporate_ multiple antennas _ at both bss and user equipments ( ues ) and our results can be applied to a wide range of scenarios with _ complex - valued systems_. 
further , we adopt the approximate message passing ( amp ) method in to significantly reduce the number of exchange messages .the proposed amp - rzfbf exhibits the advantage that every communication of bs with its neighbors only takes place in a broadcast fashion as opposed to the unicast manner in .the used amp method has recently received considerable interest in the field of compressed sensing .our form of the message passing algorithm is closely related to the amp methods in which are a special case of the generalized amp . * in amp - rzfbf , bss must compute several matrix inversions for every channel realization and then exchange these auxiliary parameters among themselves , requiring very high computational capability and rapid information exchange between the bss . to tackle this ,we approximate some of the auxiliary parameters by exploiting the spatial channel covariance information ( ccoi ) . the ccoi - aided amp - rzfbf results in significantly simpler implementations in terms of computation and communication .with the ccoi - aided amp - rzfbf , the bss compute and exchange the auxiliary parameters at the time scale merely at which the ccoi changes but not the instantaneous csi .simulation results show that ccoi - aided amp - rzfbf achieves promising results , which are different from earlier results based on the ccoi , e.g. , , where a performance degeneration is usually expected . * implementing rzfbf in a distributed manner can be achieved by an optimization technique called the alternating direction method of multipliers ( admm ) approach in .applications of admm to the concerned beamforming problem can be found in ( or ( * ? ? ?* section 8.3 ) ) .however , it is known that admm can be very slow to converge .simulation results will demonstrate that our proposed message passing algorithms exhibit a much faster convergence rate when compared to admm . _notations_throughout this paper , the complex number field is denoted by . for any matrix , denotes the entry , while , and return the transpose and the conjugate transpose of , respectively .for a square matrix , , , , and denote the principal square root , inverse , trace , and determinant of , respectively .in addition , is an identity matrix , denotes either an zero matrix or a zero vector depending on the context , and denotes the column vector with the element being and elsewhere .finally , represents the euclidean norm of an input vector , and returns the expectation of an input random entity .as shown in figure [ fig : mimonetwork ] , we consider a large - scale mimo broadcast system where interconnected multi - antenna bss , labeled as , simultaneously send information to users , labeled as . in the system , is equipped with antennas while is equipped with antennas .let and .the received signals at all the ues can be expressed in a vector form as ^t \in { { \mathbb c}}^n ] consists of the random components of the channel in which the elements are i.i.d .complex gaussian random variables with zero mean and unit variance . to get a proper definition on the channel gain of each link pair ,we consider the power of the channel if we assume that and are normalized such that and , then can be used as an indicator for the link gain between and . or . 
] in the broadcast system ( [ eq : defsystem ] ) , linear precoding , referred to as rzfbf , is used to project the data symbols onto a subspace using the transmit antennas .let ^t ] and {i , j } = \rho_{{\sf t}_{k , l}}^{|i - j|} ] and has been defined in notations .the results provided are for a particular realization of the channel .it is natural that when the number of iterations increases , the average throughput increases and saturates eventually . here, rzfbf in ( [ eq : wprecoder ] ) serves as a benchmark for the optimal beamformer . from figure[ fig : fig1_l100k100m8n4 ] , it can be observed that the proposed message passing algorithms converge significantly faster than the admm approach .the convergence rates of all the proposed message passing algorithms are very similar . recall that amp - rzfbf follows from bp - rzfbf but using the approximations that and are nearly independent of .this approximation is expected to be good if and are extremely large .furthermore , the ccoi - aided amp - rzfbf uses the large system approximation by assuming .although the setting in figure [ fig : fig1_l100k100m8n4 ] corresponds to a practical system dimension , it is intriguing to see their performances under a relatively small network ; e.g. , , , , and . under the small network consideration, figure [ fig : fig2_l10k10m4n2 ] illustrates the convergence of the algorithms .similar characteristics as in figure [ fig : fig1_l100k100m8n4 ] before are observed .additionally , comparing to bp - rzfbf , amp - rzfbf and ccoi - aided amp - rzfbf only slightly degrades the convergence rate .this result is quite different from several earlier designs based on ccoi , e.g. , .usually , when some calculations are approximated by the ccoi , an obvious degradation in performance would be observed but this is not the case in our scheme .using bayesian inference , this paper proposed several message passing algorithms for realizing rzfbf in cooperative - bs networks , namely , bp - rzfbf , amp - rzfbf and ccoi - aided amp - rzfbf .results showed that the proposed algorithms converge very fast to the exact rzfbf . comparing to bp - rzfbf , both amp - rzfbf and ccoi - aided amp - rzfbf perform well with only very slight degradation in the convergence rate , but greatly reducing the burden for information exchange between the bss .to derive amp - rzfbf , we use a heuristic approximation which keeps all the terms that are linear in the matrix while neglecting the higher - order terms .the similar methodology was used in in the case of compressed sensing although some modifications are required to reflect the concerned case .we start by noticing that is the sum of terms each of order because scales as .therefore , it is natural to approximate by which only depends on the index and not on . similarly , it is natural to anticipate a similar approximation for .however , we must be careful to keep all correction terms of order . to that end, we instead set recall from ( [ eq : bpupdate ] ) that .then we get we will approximate the above two terms by dropping their negligible components . 
before proceeding, we deal with the approximation of .let us define .then we have where the approximation follows from the fact that is of order and can be safely neglected .similarly , we note that is nearly independent of .this leads to then we get now , we return to the approximation of .first , we deal with the second terms of ( [ eq : xx1 ] ) and get \\ & \approx { { \bf x}}_l^{(t ) } - \left({{\boldsymbol \sigma}}_{l}^{(t ) } + { { \bf i}}_{n_l } \right)^{-1 } { { \bf h}}_{k , l}^h \left ( { { \boldsymbol \omega}}_k^{(t ) } + \beta { { \bf i}}_{m_k } \right)^{-1 } { { \boldsymbol \nu}}_k^{(t)},\end{aligned}\ ] ] where the first approximation is directly from ( [ eq : xx1 ] ) by substituting the definition of and we have defined . substituting the above approximation of in , we get where the second equality follows from ( [ eq : omega_fin ] ) .now , it remains to complete the calculation of .we start from the definition ^{-1 } \left({\overline{\bf e}}_{l}^{(t ) } \right)^{-1 } { \overline{\bf f}}_{l}^{(t ) } = \left [ \left({\overline{\bf e}}_{l}^{(t ) } \right)^{-1 } + { { \bf i}}_{n_l } \right]^{-1 } { { \boldsymbol \mu}}_l^{(t)},\ ] ] where we have defined following the similar approximations as above , we get ^{-1 } \nonumber \\ & \times \left [ \sum_{k } { { \bf h}}_{k , l}^h \left ( { { \boldsymbol \omega}}_k^{(t ) } - { { \bf h}}_{k , l } { { \bf v}}_{l \rightarrow k}^{(t-1 ) } { { \bf h}}_{k , l}^h + \beta { { \bf i}}_{m_k } \right)^{-1 } \left ( { { \boldsymbol \nu}}_k^{(t ) } + { { \bf h}}_{k , l } { { \bf x}}_{l \rightarrow k}^{(t-1 ) } \right ) \right ] \nonumber \\\approx & ~{{\bf x}}_{l}^{(t-1 ) } + \left({{\boldsymbol \sigma}}_{l}^{(t)}\right)^{-1 } \left [ \sum_{k } { { \bf h}}_{k , l}^h \left ( { { \boldsymbol \omega}}_k^{(t ) } + \beta { { \bf i}}_{m_k } \right)^{-1 } { { \boldsymbol \nu}}_k^{(t ) } \right ] \label{eq : mu_final_app}\end{aligned}\ ] ] and then putting the above relations ( [ eq : sigma_final_ap ] ) , ( [ eq : omega_fin ] ) , ( [ eq : nu_final_ap ] ) , ( [ eq : mu_final_app ] ) and ( [ eq : x_final_app ] ) together , we get amp - rzfbf .for convenience , we provide some mathematical tools needed in this paper . [ lemma 2 ] a random matrix said to have a matrix variate complex gaussian distribution with mean and covariance matrix , if it can be written by , where and are both positive definite and the elements of are i.i.d .complex gaussian random variables with zero mean and unit variance. then we have , `` on the sum - rate of uplink mimo cellular systems with amplify - and - forward relaying and collaborative base stations , '' _ ieee j. sel .areas commun . special issue cooperative commun .mimo cellular net ._ , vol . 28 , no . 9 , pp .14091424 , dec . 2010 ., `` a vector - perturbation technique for near - capacity multiantenna multiuser communication parti : channel inversion and regularization , '' _ ieee trans ._ , vol .53 , no . 1 ,195202 , jan . 2005 . , `` cooperative multicell precoding : rate region characterization and distributed strategies with instantaneous and statistical csi , '' _ ieee trans_ , vol .58 , no . 8 , pp .42984310 , aug . 2010 .s. boyd , n. parikh , e. chu , b. peleato , and j. eckstein , _ distributed optimization and statistical learning via the alternating direction method of multipliers_.1em plus 0.5em minus 0.4emfoundations and trends in machine learning , 2011 .
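As a centralized reference point for the distributed algorithms developed above, the regularized zero-forcing beamformer can be computed directly when one node holds the full aggregate channel. The sketch below assumes the common form in which the transmit vector is obtained from the regularized channel Gram matrix and is then normalized to a total power constraint; the dimensions, the regularization value and the normalization are illustrative assumptions rather than the paper's exact benchmark.

```python
import numpy as np

def rzfbf_transmit(H, s, beta, power=1.0):
    """Centralized regularized zero-forcing beamforming (sketch).

    H    : N x M aggregate downlink channel (N receive dims, M transmit antennas)
    s    : length-N vector of data symbols
    beta : regularization parameter (beta -> 0 recovers plain ZF)
    Returns the transmit vector scaled to satisfy a total power constraint."""
    N = H.shape[0]
    G = H @ H.conj().T + beta * np.eye(N)      # regularized Gram matrix
    x = H.conj().T @ np.linalg.solve(G, s)     # H^H (H H^H + beta I)^{-1} s
    return np.sqrt(power) * x / np.linalg.norm(x)

# toy example: 4 single-antenna users, 8 transmit antennas in total
rng = np.random.default_rng(1)
H = (rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))) / np.sqrt(2)
s = np.exp(1j * 2 * np.pi * rng.random(4))     # unit-modulus symbols
x = rzfbf_transmit(H, s, beta=0.1)
print(np.abs(H @ x))                            # received signal magnitudes
```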
|
base station ( bs ) cooperation can turn unwanted interference to useful signal energy for enhancing system performance . in the cooperative downlink , zero - forcing beamforming ( zfbf ) with a simple scheduler is well known to obtain nearly the performance of the capacity - achieving dirty - paper coding . however , the centralized zfbf approach is prohibitively complex as the network size grows . in this paper , we devise message passing algorithms for realizing the regularized zfbf ( rzfbf ) in a distributed manner using belief propagation . in the proposed methods , the overall computational cost is decomposed into many smaller computation tasks carried out by groups of neighboring bss and communications is only required between neighboring bss . more importantly , some exchanged messages can be computed based on channel statistics rather than instantaneous channel state information , leading to significant reduction in computational complexity . simulation results demonstrate that the proposed algorithms converge quickly to the exact rzfbf and much faster compared to conventional methods . * index terms*base station cooperation , belief - propagation , distributed algorithm , message passing , zero - forcing beamforming .
|
radiation transport and its interaction with matter via emission , absorption and scattering of radiation have a substantial effect on both the state and the motion of materials in high temperature hydrodynamic flows occurring in inertial confinement fusion ( icf ) , strong explosions and astrophysical systems . for many applicationsthe dynamics can be considered non - relativistic since the flow velocities are much less than the speed of light . in order to describe properly the dynamics of the radiating flow ,it is necessary to solve the full time - dependent radiation transport equation as very short time scales ( corresponding to a photon flight time over a characteristic structural length , or over a photon mean free path ) are to be considered .two methods commonly used are non - equilibrium diffusion theory , and radiation heat conduction approximation .the former is valid for optically thick bodies , where the density gradients are small and the angular distribution of photons is nearly isotropic .the conduction approximation is only applicable when matter and radiation are in local thermodynamic equilibrium , so that the radiant energy flux is proportional to temperature gradient , and for slower hydrodynamics time scales .use of eddington s factor for closing the first two moment equations is yet another approach followed in radiation hydrodynamics .radiative phenomena occur on time scales that differ by many orders of magnitude from those characterizing hydrodynamic flow .this leads to significant computational challenges in the efficient modeling of radiation hydrodynamics . in this paperwe solve the equations of hydrodynamics and the time dependent radiation transport equation fully implicitly .the anisotropy in the angular distribution of photons is treated in a direct way using the discrete ordinates method .finite difference analysis is used for the lagrangian meshes to obtain the thermodynamic variables .the hydrodynamic evolution of the system is considered in a fully implicit manner by solving a tridiagonal system of equations to obtain the velocities .the pressures and temperatures are converged iteratively .earlier studies on the non - equilibrium radiation diffusion calculations show that the accuracy of the solution increases on converging the non - linearities within a time step and increasing benefit is obtained as the problem becomes more and more nonlinear and faster , . in this work , by iteratively converging the thermodynamic variables , we observe a faster decrease in the -error as compared to the commonly used semi - implicit scheme .the organization of the paper is as follows : in section [ model ] we discuss the finite difference scheme for solving the hydrodynamic equations followed by the solution procedure of the radiation transport equation and their coupling together to result in an implicit radiation hydrodynamics code .section [ results ] presents the results obtained using this fully implicit one - dimensional radiation hydrodynamics code for the problems of shock propagation in aluminium and the point explosion problem .these benchmark results , thus , prove the validity of the methods . 
next , extensive results of convergence studies w.r.t .time step are presented .these results using the full transport equation are new , though similar studies have been reported earlier within approximate methods , .finally the conclusions of this paper are presented in section [ conc ] .for hydrodynamics calculations , the medium is divided into a number of cells as shown in fig .[ figure1 ] .the coordinate of the th vertex is denoted by and the region between the and th vertices is the th cell .the density of the th grid is and its mass is given by with and for planar , cylindrical and spherical geometries respectively .velocity of the th vertex is denoted by and and are the total pressure , specific volume , temperature of ions and electrons and the specific internal energy of ions and electrons in the th mesh respectively . during a time interval vertexes of the cells move as where is the average of velocity values at the beginning and end of the lagrangian step , and , respectively . in the lagrangian formulation of hydrodynamics ,the mass of each cell remains constant thereby enforcing mass conservation .the lagrangian differential equation for the conservation of momentum is here , the total pressure is the sum of the electron , ion and radiation pressures i.e. .[ [ momen ] ] can be discretized for the velocity at the end of the time step in terms of the pressures , in the th and th meshes respectively , after half time step as the velocity in the th mesh is determined by the pressure in the th and th meshes and hence all the meshes are connected .mass conservation equation can be used to eliminate the pressures at half time step to obtain an equation relating the present time step velocities in the adjacent meshes as follows : the equation describing conservation of mass is where is the mass density of the medium .this equation can be rewritten in terms of pressure using the relation , where is the adiabatic sound speed .therefore , eq . [ [ mass ] ] becomes this can be written for all the one dimensional co - ordinate systems as where for planar , cylindrical and spherical geometries .this equation can be discretized to obtain the change in total pressure along a lagrangian trajectory in terms of the velocity at the end of the time step : \frac{\delta t}{2}\end{aligned}\ ] ] and \frac{\delta t}{2}\end{aligned}\ ] ] here , is the quadratic von neumann and richtmyer artificial viscosity in the th mesh . where ( ) is a dimensionless constant . using eqs .[ [ pressure1 ] ] and [ [ pressure2 ] ] , in eq.[[momentum ] ] are eliminated to obtain a tridiagonal system of equations for . where \\ \mbox{with}\nonumber\\ ( \rho \delta r)_i=\rho_{i+1}(r_{i+1/2}-r_i)+\rho_i(r_i - r_{i-1/2})\end{aligned}\ ] ] the energy equations , for the ions and electrons , expressed in terms of temperature are =-\frac{p_{ion}}{v}\frac{\partial v}{\partial t}-p_{ie}\end{aligned}\ ] ] and = -\frac{(p_{elec}+p_{rad})}{v}\frac{\partial v}{\partial t}+\nonumber\\ \sigma_r(t_{elec})[e_r(r , t_{elec})-b(t_{elec})]+p_{ie}\end{aligned}\ ] ] where are the specific internal energies and is specific volume . is the rosseland opacity , is the radiation energy flux and is radiation emission rate . is the ion - electron energy exchange term given by with ion and electron temperatures expressed in kev .further , and are the number densities of electrons and ions , m is the mass number and z is the charge of the ions . 
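Two ingredients of the discretization just described are easy to isolate: the quadratic von Neumann-Richtmyer artificial viscosity, which is active only in compression, and the tridiagonal solve that yields the new vertex velocities. The sketch below uses the standard Thomas algorithm for the latter; the array layout, the viscosity constant and the small verification system are illustrative assumptions, not the paper's code.

```python
import numpy as np

def artificial_viscosity(rho, du, c_q=2.0):
    """Quadratic von Neumann-Richtmyer viscosity: nonzero only on compression.
    rho : mesh densities, du : velocity jump across each mesh, c_q : constant."""
    q = c_q**2 * rho * du**2
    return np.where(du < 0.0, q, 0.0)

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system
       a[i]*u[i-1] + b[i]*u[i] + c[i]*u[i+1] = d[i],  with a[0] = c[-1] = 0."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    u = np.empty(n)
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# small check: a 1-D Poisson-like tridiagonal system
n = 6
a = np.full(n, -1.0); a[0] = 0.0
c = np.full(n, -1.0); c[-1] = 0.0
b = np.full(n, 2.0)
d = np.ones(n)
u = solve_tridiagonal(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(A @ u, d))                 # True if the solve is correct
```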
herethe coulomb logarithm for ion - electron collision is ) \}\end{aligned}\ ] ] with expressed in ev .the discrete form of the energy equations for ions and electrons are and where with n and k denoting the time step and iteration index respectively .also , the constants a (= ) , and c denote the radiation constant , rosseland opacity and the speed of light respectively .stefan - boltzmann law , , has been used explicitly in these equations . in the gray approximation , or one group model , the time dependent radiation transport equation in a stationary mediumis where is the radiation energy flux , due to photons moving in the direction , at space point and time t. here is the one group radiation opacity , which is assumed to be calculated by rosseland weighing , at electron temperature ( the subscript of is dropped for convenience ) . as already mentioned , is the radiation energy flux emitted by the medium which is given by the stefan - boltzmann law .the radiation constant is if is in and in .this formula for the emission rate follows from the local thermodynamic equilibrium ( lte ) approximation , which is assumed in the present model .the scattering cross - section , representing thomson scattering is assumed to be isotropic and independent of temperature . in the lagrangian frameworkthe radiation transport equation for a planar medium is where is the radiation energy flux along a direction at an angle to the x axis .the term in this equation arises due to the lagrange scheme used in solving the hydrodynamic equations .backward difference formula for the time derivative gives ^{n , k}=\sigma_r^{n , k-1}b^{n , k-1 } \nonumber\\+\frac{\sigma_s}{2}\int_{-1}^{1 } i^{n , k}(\mu\prime ) d\mu\prime + \frac{\rho^{n , k-1}}{\rho^{n-1}}i^{n-1}(c\delta t)^{-1}\end{aligned}\ ] ] here , n andk denote the time step and iteration index for temperature respectively as earlier .this iteration arises because the opacity and the radiation emission rate are functions of the local temperature .the converged spatial temperature distribution is assumed to be known for the hydrodynamic cycle for the previous time step . starting with the corresponding values of and , denoted by and , the radiation energy fluxes are obtained from the solution of the transport equation eq .[ [ back ] ] .the method of solution , well known in neutron transport theory , is briefly discussed below .this is used in the electron energy equation of hydrodynamics eq .[ [ elece ] ] to obtain a new temperature distribution and corresponding values of and .the transport equation is again solved using these new estimates and the iterations are continued until the temperature distribution converges . finally the transport equation can be expressed in conservation form in spherical geometry as +\sigma i^{n , k } = q(r,\mu)\end{aligned}\ ] ] with where , the second term in eq .[ [ sph ] ] accounts for angular redistribution of photons during free flight .this term arises as a result of the local coordinate system used to describe the direction of propagation of photons . if this term is omitted , eq .[ [ sph ] ] reduces to that for planar medium and therefore a common method of solution can be applied . in the semi - implicit method to be discussed later , the transport equation is solved only once per time step .then , a slightly more accurate linearization can be introduced in eqs .[ [ back ] ] and [ [ sph ] ] by replacing with . 
a first order taylor expansion yields from which can be eliminated using eq .[ [ elece ] ] .however this modification is not necessary in the implicit method as the iterations are performed for converging the temperature distribution . to solve eq .[ [ sph ] ] , it is written in the discrete angle variable as where the indices n,k on i have been supressed .here m refers to a particular value of in the angular range [ -1,1 ] which is divided into m directions .the parameter is the weight attached to this direction whose value has been fixed according to the gauss quadrature and are the angular difference coefficients . and are the fluxes at the centers and the edges of the angular cell respectively .the angle integrated balance equation for photons is satisfied if the `` -coefficients '' obey the condition =0\end{aligned}\ ] ] as photons traversing along are not redistributed during the flight , the -coefficients also obey the boundary conditions for a spatially uniform and isotropic angular flux , eq . [ [ dis ] ] yields the recursion relation as the flux is a constant in this case . the finite difference version of eq . [ [ dis ] ] in space is derived by integrating over a cell of volume bounded by surfaces where and . the discrete form of the transport equation in space and angle is thus obtained as +\frac{2(a_{i+1/2}-a_{i-1/2})}{\omega_m v_i}\nonumber\\ \times [ \alpha_{m+1/2}i_{m+1/2,i}-\alpha_{m-1/2}i_{m-1/2,i}]+\sigma i_{m , i}=q_{m , i}\end{aligned}\ ] ] the cell average flux and source are given by and respectively , where specifies the spatial mesh .as mentioned earlier , planar geometry equations are obtained if the terms involving are omitted and the replacements and are made .thus , both geometries can be treated on the same lines using this approach .the difference scheme is completed by assuming that the flux varies exponentially between the two adjacent faces of a cell both spatially and angularly so that the centered flux can be expressed as : \\ i_{m , i}=i_{m , i+1/2}\ exp\ [ + \frac{1}{2}(r_{i+1/2}-r_{i-1/2})]\end{aligned}\ ] ] where the radii and are expressed in particle mean free paths .these relations show that for the spatial direction .similarly for the angular direction one gets use of these difference schemes guarantees positivity of all the angular fluxes if are positive .the symmetry of the flux at the centre of the sphere is enforced by the conditions dividing the spatial range into l intervals , for a vacuum boundary at , we have i.e , at the rightmost boundary the fluxes are zero for all directions pointing towards the medium .alternately , boundary sources , if present , can also be specified .an iterative method is used to solve the transport equation to treat the scattering term .the radiation densities at the centre of the meshes are taken from the previous time step , thereby providing the source explicitly .the fluxes for all meshes do not occur in eq .[ [ a ] ] as .then the fluxes are eliminated from this equation using the upwind scheme .starting from the boundary condition , viz .[ [ e ] ] , eq .[ [ a ] ] and [ [ b ] ] can be used to determine these two fluxes for all the spatial meshes . thereafter together with eq .[ [ c ] ] , the fluxes for all the negative values of can be solved for . 
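The angular discretization above needs the Gauss-quadrature ordinates and weights together with the angular difference ("alpha") coefficients. A minimal sketch follows, using the standard recursion alpha_{m+1/2} = alpha_{m-1/2} - w_m mu_m with alpha_{1/2} = 0 (cf. Lewis and Miller); the weight normalization and the function name are assumptions and may differ from the paper's exact convention.

```python
import numpy as np

def sn_quadrature_and_alphas(M):
    """Gauss-Legendre ordinates/weights on [-1, 1] and the angular
    difference coefficients for the curved-geometry S_N sweep.

    The recursion alpha_{m+1/2} = alpha_{m-1/2} - w_m * mu_m with
    alpha_{1/2} = 0 also forces alpha_{M+1/2} = 0, because the Gauss
    nodes and weights satisfy sum_m w_m * mu_m = 0."""
    mu, w = np.polynomial.legendre.leggauss(M)
    alpha = np.zeros(M + 1)                 # edge values alpha_{m+1/2}
    for m in range(M):
        alpha[m + 1] = alpha[m] - w[m] * mu[m]
    return mu, w, alpha

mu, w, alpha = sn_quadrature_and_alphas(4)
print(mu)                       # the four discrete directions
print(alpha)                    # first and last entries are (numerically) zero
print(np.isclose(alpha[-1], 0.0))
```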
at the center , the reflecting boundary condition given by eq .[ [ d ] ] provide the starting fluxes for the outward sweeps through all the spatial and angular meshes with positive values of .this completes one space - angle sweep providing new estimates of radiation energy flux at the mesh centers , given by : where the sum extends over all directions m. the mesh - angle sweeps are repeated until the scattering source distribution converges to a specified accuracy . the rate of radiation energy absorbed by unit mass of the material in the th mesh is / \rho^{n , k}_i\end{aligned}\ ] ] which determines the coupling between radiation transport and hydrodynamics .the sample volume is divided into l meshes of equal width .the initial position and velocity of all the vertices are defined according to the problem under consideration .also the initial pressure , temperature and internal energy of all the meshes are entered as input . for any time step ,the temperature of the incident radiation is obtained by interpolating the data for the radiation temperature as a function of time ( as in the case of shock propagation in aluminium sheet or an icf pellet implosion in a hohlraum ) .all the thermodynamic parameters for this time step are initialized using their corresponding values in the previous time step .it is important to note that the velocity in eqs .[ [ vel ] ] and [ [ di ] ] and position in eq . [ [ pos ] ] are the old variables and remain constant unless the pressure and temperature iterations for this time step converge .the temperature iterations begin by solving the radiation transport equation for all the meshes which gives the energy flowing from radiation to matter .the 1d lagrangian step is a leapfrog scheme where new radial velocities arise due to acceleration by pressure gradient evaluated at half time step .this leads to a time implicit algorithm .the first step in the pressure iteration starts by solving the tridiagonal system of equations for the velocity of all the vertices .the sound speed is obtained from the equation of state ( eos ) of that material . the new velocities and positions of all the verticesare obtained which are used to calculate the new density and change in volume of all the meshes .the total pressure is obtained by adding the von neumann and richtmeyer artificial viscosity to the ion , electron and radiation pressures and solving the energy equations which takes into account both the energy flow from radiation and the work done by ( or on ) the meshes due to expansion ( or contraction ) .the energy equations for ions and electrons are solved using the corresponding material eos which provides the pressure and the specific heat at constant volume of the material ( both ions and electrons ) .the hydrodynamic variables like the position , density , internal energy and velocity of all the meshes are updated . 
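The nested iteration structure just described (an outer electron-temperature loop that re-solves the radiation transport, and an inner total-pressure loop built around the tridiagonal velocity solve) can be summarized as a driver skeleton. Everything below is a structural sketch only: the state dictionary, the stand-in solver callables, the tolerances and the toy demo are illustrative assumptions, not the code used in the paper.

```python
import numpy as np

def implicit_step(state, dt, radiation_solve, pressure_update, temperature_update,
                  eta_p=1e-6, eta_t=1e-6, max_iter=100):
    """One fully implicit Lagrangian step (structural sketch of the flowchart).

    Outer loop: converge the electron temperature, re-solving the radiation
    transport with the latest temperatures.  Inner loop: converge the total
    pressure (tridiagonal velocity solve, new positions/densities, EOS)."""
    for _ in range(max_iter):                      # temperature iteration
        t_old = state["T"].copy()
        state = radiation_solve(state, dt)         # energy flow radiation -> matter
        for _ in range(max_iter):                  # pressure iteration
            p_old = state["P"].copy()
            state = pressure_update(state, dt)     # velocities, work terms, EOS
            if np.max(np.abs(state["P"] - p_old) / np.abs(p_old)) < eta_p:
                break
        state = temperature_update(state, dt)      # electron/ion energy equations
        if np.max(np.abs(state["T"] - t_old) / np.abs(t_old)) < eta_t:
            break
    return state

# trivial demo with dummy solvers that relax toward fixed target values
relax = lambda key, target: (lambda s, dt: {**s, key: 0.5 * (s[key] + target)})
state = {"T": np.array([1.0, 2.0]), "P": np.array([1.0, 1.0])}
state = implicit_step(state, 1e-3,
                      radiation_solve=lambda s, dt: s,
                      pressure_update=relax("P", np.array([2.0, 2.0])),
                      temperature_update=relax("T", np.array([3.0, 3.0])))
print(state["T"], state["P"])
```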
the convergence criterion for the total pressureis checked and if the relative error is greater than a fixed error criterion , the iteration for pressure continues , i.e , it goes back to solve the tridiagonal equations to obtain the velocities , positions , energies and so on .when the pressure converges according to the error criterion , the convergence for the electron temperature is checked in a similar manner .the maximum value of the error in electron temperature for all the meshes is noted and if this value exceeds the value acceptable by the error criterion , the temperature iterations continue , i.e , transport equation , tridiagonal system of equations for velocity , etc , are solved , until the error criterion is satisfied .thus the method is fully implicit as the velocities of all the vertices are obtained by solving a set of simultaneous equations .also both the temperature and pressure are converged simultaneously using the iterative method .once both the pressure and temperature converge , the position of the shock front is obtained by noting the pressure change and the new time step is estimated as follows : the time step is chosen so as to satisfy the courant condition which demands that it is less than the time for a sound signal with velocity to traverse the grid spacing , where the reduction factor c is referred to as the courant number .the stability analysis of von neumann introduces additional reduction in time step due to the material compressibility .the above procedure is repeated up to the time we are interested in following the evolution of the system .the solution method described above is clearly depicted in the flowchart given in fig .[ figure2 ] .the time step index is denoted by nh and dt is the time step taken .the iteration indices for electron temperature and total pressure are expressed as npt and npp respectively . error1 and error2 are the fractional errors in pressure and temperature respectively whereas eta1 and eta2 are those acceptable by the error criterion . in the semi - implicit scheme , eq . [ [ momentum ] ] is retained and is expressed as wherein is the pressure at the end of the time step .starting with the previous time step values for , the position and velocity of each mesh is obtained and is iteratively converged using the eos . as the variables are obtained explicitly from the known values , there is no need to solve the tridiagonal system of equations for the velocities of all the meshes .again , the energy flowing to the meshes as a result of radiation interaction is obtained by solving the transport equation once at the start of the time step , and hence the iterations leading to temperature convergence are absent .in the indirect drive inertial confinement fusion , high power laser beams are focused on the inner walls of high z cavities or hohlraums , converting the driver energy to x - rays which implode the capsule .if the x - ray from the hohlraum is allowed to fall on an aluminium foil over a hole in the cavity , the low z material absorbs the radiation and ablates generating a shock wave . using strong shock wave theory ,the radiation temperature in the cavity can be correlated to the shock velocity .the scaling law derived for aluminium is , where is in units of ev and is in units of cm / s for a temperature range of 100 - 250 ev . 
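The time-step control described in the solution procedure above is the Courant condition with a reduction factor C: the step must be smaller than the time a sound signal needs to cross the narrowest mesh. A minimal sketch, with purely illustrative numbers for the reduction factor, mesh layout and sound speed:

```python
import numpy as np

def courant_dt(r, c_s, courant=0.25):
    """New time step from the Courant condition.

    r   : vertex positions (length L+1), c_s : adiabatic sound speed per mesh,
    courant : the reduction factor C (the value used here is only illustrative)."""
    dr = np.diff(r)                       # current mesh widths
    return courant * np.min(dr / c_s)

r = np.linspace(0.0, 0.06, 301)           # 300 meshes across a 0.6 mm foil, in cm
c_s = np.full(300, 5.0e5)                 # uniform sound speed in cm/s (toy value)
print(courant_dt(r, c_s))                 # allowed time step in seconds
```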
for the purpose of simulation , an aluminium foil of thickness 0.6 mm and unit cross sectionis chosen .it is subdivided into 300 meshes each of width cm .an initial guess value of is used for the time step .the equilibrium density of al is 2.71 gm / cc . in the discrete ordinates methodfour angles are chosen .as the temperature attained for this test problem is somewhat low , the total energy equation is solved assuming that electrons and ions are at the same temperature ( the material temperature ) .the equation of state ( eos ) and rosseland opacity for aluminium are given by these power law functions , of temperature and density , where are the fitting parameters , are quite accurate in the temperature range of interest .using the fully implicit radiation hydrodynamics code , a number of simulations are carried out for different values of time independent incident radiation fluxes or temperatures .corresponding shock velocities are then determined after the decay of initial transients . in fig .[ figure3 ] .we show the comparison between the numerically obtained shock velocities for different radiation temperatures ( dots ) and the scaling law for aluminium ( line ) mentioned earlier .good agreement is observed in the temperature range where the scaling law is valid .[ figure4 ]. shows the various thermodynamic variables like velocity , pressure , density and material temperature after 2.5 ns when the radiation profile shown in fig .[ figure5 ] . is incident on the outermost mesh .this radiation temperature profile ( fig .[ figure5 ] . )is chosen so as to achieve a nearly isentropic compression of the fuel pellet .the pulse is shaped in such a way that the pressure on the target surface gradually increases , so that the generated shock rises in strength . from fig .[ figure4 ] .we observe that the outer meshes have ablated outwards while a shock wave has propagated inwards . at 2.5 ns ,the shock is observed at 0.5 mm showing a peak in pressure and density .as the outer region has ablated , they move outwards with high velocities .the outermost mesh has moved to 1.2 mm .the meshes at the shock front move inwards showing negative velocities .also the temperature profile shows that the region behind the the shock gets heated to about 160 ev . in fig .[ figure6 ] .we plot the distance traversed by the shock front as a function of time for the above radiation temperature profile .the shock velocity changes from 3.54 to 5.46 at 1.5 ns when the incident radiation temperature increases to 200 ev . the performance of the implicit and semi - implicit schemes are compared by studying the convergence properties and the cpu cost for the problem of shock wave propagation in aluminium .the convergence properties are examined by obtaining the absolute -error in the respective thermodynamic variable profile versus the fixed time step value .the absolute -error in the variable ( velocity , pressure , density or temperature ) is defined as ^{1/2}\end{aligned}\ ] ] where the data constitute the exact solution for .this exact solution is chosen to be the result from a run using the implicit method with a small time step value of .the summation is taken over the values in all the meshes .[ figure7 ]. shows the absolute -error versus the time step value for velocity , pressure , density and temperature obtained using the implicit and semi - implicit radiation hydrodynamics codes respectively .the semi - implicit differencing scheme fails for time steps of and higher because of the violation of cfl criterion . 
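The error measure used in this convergence study, the square root of the sum over meshes of squared differences from a reference solution (optionally divided by the number of meshes for a per-mesh value), can be sketched as follows; the profiles used in the toy check are of course not the paper's data.

```python
import numpy as np

def l2_error(y, y_exact, per_mesh=False):
    """Absolute error norm between a computed and a reference profile, summed
    over all meshes; with per_mesh=True the sum is divided by the number of
    meshes before taking the square root (an RMS-type, per-mesh measure)."""
    diff2 = (np.asarray(y) - np.asarray(y_exact))**2
    total = diff2.sum() / (len(diff2) if per_mesh else 1)
    return np.sqrt(total)

# toy check: error of a perturbed profile against a reference profile
x = np.linspace(0.0, 1.0, 301)
reference = np.tanh(20 * (x - 0.5))                 # stand-in "exact" shock profile
computed = reference + 1e-3 * np.sin(40 * x)        # stand-in computed profile
print(l2_error(computed, reference), l2_error(computed, reference, per_mesh=True))
```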
for time steps ,the errors obtained from the two schemes are comparable . butas the time steps are reduced , the implicit scheme converges very fast , i.e. the errors reduce quickly , whereas the error becomes nearly constant in the semi - implicit scheme because of the spatial discretization error .an initial mesh width of cm is chosen for the above convergence study , which prevents further decrease in error in the semi - implicit scheme .hence a reduction in the mesh width as well as the time steps is expected to decrease the error . in fig .[ figure8 ] .the -error per mesh for velocity i.e. ^{1/2}$ ] , is plotted as a function of the time step by keeping the ratio of time step to mesh size i.e. constant at .the results obtained from the implicit method using a small time step of and mesh width of cm is chosen as the exact solution .both the implicit and semi - implicit schemes show linear convergence , though the convergence rate is faster for the implicit scheme showing its superiority in obtaining higher accuracies .[ figure9 ] .shows that the faster convergence in the implicit method ( fig .[ figure7 ] . )is attained at the cost of slightly higher cpu time .however the cost in cpu seconds become comparable in the two schemes for smaller time steps .all the runs in this study were done on a pentium(4 ) computer having 1 gb of ram operating at 3.4 ghz .the self similar problem of a strong point explosion was formulated and solved by sedov .the problem considers a perfect gas with constant specific heats and density in which a large amount of energy is liberated at a point instantaneously .the shock wave propagates through the gas starting from the point where the energy is released . for numerical simulation, the energy e is assumed to be liberated in the first two meshes .the process is considered at a larger time when the radius of the shock front , the radius of the region in which energy is released .it is also assumed that the process is sufficiently early so that the shock wave has not moved too far from the source .this ascertains that the shock strength is sufficiently large and it is possible to neglect the initial gas pressure or counter pressure in comparison with the pressure behind the shock wave . under the above assumptions the gas motionis determined by four independent variables , viz , amount of energy released , initial uniform density , distance from the centre of the explosion and the time .the dimensionless quantity serves as the similarity variable .the motion of the wavefront is governed by the relationship where is an independent variable .the propagation velocity of the shock wave is the parameters behind the shock front using the limiting formulas for a strong shock wave are where is the specific heat at constant volume and is the ratio of the specific heats .the distributions of velocity , pressure and density w.r.t .the radius are determined as functions of the dimensionless variable .since the motion is self - similar , the solution can be expressed in the form where are new dimensionless functions .the hydrodynamic equations , which are a system of three pde s , are transformed into a system of three ordinary first - order differential equations for the three unknown functions by substituting the expressions given by eq . 
[[ soln ] ] into the hydrodynamic equations for the spherically symmetric case and transforming from r and t to .the boundary condition satisfied by the solution at the shock front ( or ) is .the dimensionless parameter , which depends on the specific heat ratio is obtained from the condition of conservation of energy evaluated with the solution obtained . also , the distributions of velocity , pressure , density and temperature behind the shock front are generated numerically using the hydrodynamics code without taking radiation interaction into account .ideal d - t gas of density and is filled inside a sphere of 1 cm radius with the region divided into 100 radial meshes each of width .the initial internal energy per unit mass is chosen as for the first two meshes and zero for all the other meshes .an initial time step of is chosen and the thermodynamic variables are obtained after a time . as in the case of the problem of shock propagation in aluminium ,the total energy equation is solved assuming that electrons and ions are at the same temperature ( the material temperature ) . in fig .[ figure10 ] .we compare the distribution of the functions with respect to obtained exactly by solving the odes as explained above ( solid lines ) with the results generated from our code ( dots ) .good agreement between the numerical and theoretical results is observed .as is characteristic of a strong explosion , the gas density decreases extremely rapidly as we move away from the shock front towards the centre as seen from fig .[ figure10 ] . in the vicinity of the frontthe pressure decreases as we move towards the centre by a factor of 2 to 3 and then remains constant whereas the velocity curve rapidly becomes a straight line passing through the origin.the temperatures are very high at the centre and decreases smoothly at the shock front .as the particles at the centre are heated by a strong shock , they have very high entropy and hence high temperatures . the radiation interaction effects become prominent only at higher temperatures .the rosseland opacity of a d - t ( 50:50 ) gas in terms of the gas density and temperature is , .[ figure11 ] . shows the profiles of velocity , pressure , density and temperature after with an initial specific internal energy of deposited in the first two meshes ( i.e. total energy deposited is 3.351 tergs = ) , and 200 meshes each of width .the shock front moves faster in the pure hydrodynamic case as no energy is lost to the radiation keeping the shock stronger .also the effect of radiation interaction is to reduce the temperature at the centre as seen in fig . [figure11](d ) .the convergence study for the point explosion problem shows that the -error decreases much faster for the implicit method compared to the explicit scheme as observed in fig .[ figure12 ] . the exact solution in this case is chosen to be the result from a run using the implicit method with a small time step value of and an initial mesh width of . as the time step is decreased ,the non - linearities are iteratively converged for the implicit scheme , whereas for the explicit scheme , the spatial discretization error begins to dominate thereby preventing any appreciable decrease in the -error ( as observed in the problem of shock propagation in aluminium foil also ) . 
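The self-similar reference solution used above follows from the standard Sedov relations for a strong point explosion: the front radius scales as (E t^2 / rho_0)^(1/5), the front speed is (2/5) R / t, and the strong-shock limits give the post-shock density, velocity and pressure. A sketch is given below; the dimensionless constant xi_0 and the toy CGS numbers are placeholders, not values taken from the paper.

```python
import numpy as np

def sedov_shock(E, rho0, t, gamma=5.0/3.0, xi0=1.15):
    """Strong point-explosion (Sedov) front position, speed and post-shock
    state in the strong-shock limit.  xi0 is the dimensionless constant fixed
    by the energy-conservation integral; its value depends on gamma, and 1.15
    is only a rough placeholder."""
    R = xi0 * (E * t**2 / rho0)**0.2              # shock-front radius
    D = 0.4 * R / t                               # front speed dR/dt = (2/5) R / t
    rho1 = rho0 * (gamma + 1.0) / (gamma - 1.0)   # density behind the front
    u1 = 2.0 * D / (gamma + 1.0)                  # gas velocity behind the front
    p1 = 2.0 * rho0 * D**2 / (gamma + 1.0)        # pressure behind the front
    return R, D, rho1, u1, p1

# toy CGS numbers: 1e13 erg released in a gas of density 1e-3 g/cc
for t in (1e-9, 1e-8, 1e-7):
    R, D, rho1, u1, p1 = sedov_shock(1e13, 1e-3, t)
    print(f"t = {t:.0e} s : R = {R:.3e} cm, D = {D:.3e} cm/s")
```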
In the implicit method, faster convergence is attained at the cost of slightly higher CPU time, as shown in fig. [figure13].

In this paper we have developed and studied the performance of a fully implicit radiation hydrodynamics scheme and compared it with the semi-implicit scheme. The time-dependent radiation transport equation is solved, and the energy transfer to the medium is accounted for exactly, without invoking approximation methods. To validate the code, the results have been verified using the problem of shock propagation in an Al foil in planar geometry and the point explosion problem in spherical geometry. The simulation results show good agreement with the theoretical solutions. The convergence study shows that, for the implicit scheme, the error norm keeps decreasing as the time step is reduced. For the semi-implicit scheme, however, the error remains fixed below a certain time step because of the spatial discretization error, showing that the implicit scheme is necessary if high accuracy is to be obtained. The price to be paid for such high accuracy is only slightly more CPU time than for the semi-implicit scheme. The implicit scheme gives fairly accurate results even for large time steps, whereas the semi-implicit scheme fails there because of the CFL criterion.

Y. B. Zeldovich and Y. P. Raizer, Physics of Shock Waves and High-Temperature Hydrodynamic Phenomena, Vols. I and II (Academic Press, New York, 1966).
D. Mihalas and B. W. Mihalas, Foundations of Radiation Hydrodynamics (Oxford Univ. Press, New York, 1984).
J. W. Bates et al., On consistent time-integration methods for radiation hydrodynamics in the equilibrium diffusion limit: low-energy-density regime, J. Comput. Phys. 167, 99 (2001).
W. Dai and P. R. Woodward, Numerical simulation for radiation hydrodynamics I. Diffusion limit, J. Comput. Phys. 142, 182 (1998).
W. Dai and P. R. Woodward, Numerical simulation for radiation hydrodynamics II. Transport limit, J. Comput. Phys. 157, 199 (2000).
D. A. Knoll, W. J. Rider, and G. L. Olson, Nonlinear convergence, accuracy, and time step control in nonequilibrium radiation diffusion, J. Quant. Spectrosc. Radiat. Transfer 70, 25 (2001).
C. C. Ober and J. N. Shadid, Studies on the accuracy of time-integration methods for the radiation-diffusion equations, J. Comput. Phys. 195, 743 (2004).
D. A. Knoll, L. Chacon, L. G. Margolin, and V. A. Mousseau, On balanced approximations for time integration of multiple time scale systems, J. Comput. Phys. 185, 583 (2003).
D. de Niem, E. Kuhrt, and U. Motschmann, A volume-of-fluid method for simulation of compressible axisymmetric multi-material flow, Comput. Phys. Commun. 176, 170 (2007).
J. von Neumann and R. D. Richtmyer, A method for the numerical calculation of hydrodynamic shocks, J. Appl. Phys. 21, 232 (1950).
J. D. Huba, NRL Plasma Formulary (Naval Research Laboratory, Washington, 2006), p. 35.
E. Larsen, A grey transport acceleration method for time-dependent radiative transfer problems, J. Comput. Phys. 78, 459 (1988).
E. E. Lewis and W. F. Miller, Jr., Computational Methods of Neutron Transport (John Wiley and Sons, New York, 1984).
P. Barbucci and F. Di Pasquantonio, Exponential supplementary equations for S_n methods: the one-dimensional case, Nucl. Sci. Eng. 63, 179 (1977).
M. L. Wilkins, Computer Simulation of Dynamic Phenomena (Springer-Verlag, Berlin, Heidelberg, New York, 1999), ISBN 3-540-63070-8.
R. D. Richtmyer and K. W. Morton, Difference Methods for Initial-Value Problems, second edition (Interscience Publishers, New York, 1967).
R. L. Kauffman et al., High temperatures in inertial confinement fusion radiation cavities heated with light, Phys. Rev. Lett. 73, 2320 (1994).
M. Basko, An improved version of the view factor method for simulating inertial confinement fusion hohlraums, Phys. Plasmas 3, 4148 (1996).
L. I. Sedov, Similarity and Dimensional Methods in Mechanics (Gostekhizdat, Moscow, 4th edition, 1957; English transl., M. Holt, ed., Academic Press, New York, 1959).
S. L. Thompson and H. S. Lauson, Improvements in the CHART D radiation-hydrodynamic code III: revised analytic equation of state, Sandia Laboratories report SC-RR-71 0714 (1972).

[Figure captions: flowchart for the implicit 1D radiation hydrodynamics (NH is the time step index, DT the time step; NPT and NPP are the iteration indices for electron temperature and total pressure; ERROR1 and ERROR2 are the fractional errors in pressure and temperature, and ETA1 and ETA2 the values acceptable by the error criterion); profiles of (a) velocity, (b) pressure, (c) density and (d) temperature behind the shock as a function of distance at t = 2.5 ns, with the region ahead of the shock undisturbed and the incident radiation temperature on the Al foil as shown in figure 5; comparison of the error norm vs. time step for the shock-wave propagation problem in aluminium, where convergence is faster in the implicit scheme; comparison of the simulation data in the pure hydrodynamic case (points) with the self-similar solutions (lines) for the point explosion problem; profiles of the thermodynamic variables with and without radiation interaction for the point explosion problem; comparison of the error norm vs. time step for the point explosion problem, where the implicit scheme converges faster; CPU cost for the point explosion problem for the implicit and semi-implicit schemes, taking radiation interaction into account.]
|
a fully implicit finite difference scheme has been developed to solve the hydrodynamic equations coupled with radiation transport . solution of the time dependent radiation transport equation is obtained using the discrete ordinates method and the energy flow into the lagrangian meshes as a result of radiation interaction is fully accounted for . a tridiagonal matrix system is solved at each time step to determine the hydrodynamic variables implicitly . the results obtained from this fully implicit radiation hydrodynamics code in the planar geometry agrees well with the scaling law for radiation driven strong shock propagation in aluminium . for the point explosion problem the self similar solutions are compared with results for pure hydrodynamic case in spherical geometry and the effect of radiation energy transfer is determined . having , thus , benchmarked the code , convergence of the method w.r.t . time step is studied in detail and compared with the results of commonly used semi - implicit method . it is shown that significant error reduction is feasible in the implicit method in comparison to the semi - implicit method , though at the cost of slightly more cpu time . keywords : implicit radiation hydrodynamics , lagrangian meshes , finite difference scheme , point explosion problem , self similar solutions pacs : 47.11.-j , 47.11.bc , 47.40.-x
|
a _ map projection _ is a systematic transformation of the latitudes and longitudes of positions on the surface of the earth to a flat sheet of paper , a map .more precisely , a map projection requires a transformation from a set of two independent coordinates on the model of the earth ( the latitude and longitude ) to a set of two independent coordinates on the map ( the cartesian coordinates and ) , i.e. , a transformation matrix such that =t\left [ \begin { array}{c } \phi \\ \lambda\end { array } \right]\!.\ ] ] however , since we are dealing with partial derivative and fundamental quantities ( to be defined later ) , it is impossible to find such a transformation explicitly .there are a number of techniques for map projection , yet in all of them distortion occurs in length , angle , shape , area or in a combination of these .carl friedrich gauss showed that a sphere s surface can not be represented on a map without distortion ( see ) .a _ terrestrial globe _ is a three dimensional scale model of the earth that does not distort the real shape and the real size of large futures of the earth .the term _ globe _ is used for those objects that are approximately spherical .the equation for spherical model of the earth with radius is an oblate _ ellipsoid _ or _ spheroid _ is a quadratic surface obtained by rotating an ellipse about its minor axis ( the axis that passes through the north pole and the south pole ) .the shape of the earth is appeared to be an oblate ellipsoid ( mean earth ellipsoid ) , and the geodetic latitudes and longitudes of positions on the surface of the earth coming from satellite observations are on this ellipsoid .the equation for spheroidal model of the earth is where is the semimajor axis , and is the semiminor axis of the spheroid of revolution .the spherical representation of the earth ( terrestrial globe ) must be modified to maintain accurate representation of either shape or size of the spheroidal representation of the earth .we discuss about these two representations in section [ section4 ] .there are three major types of map projections : * 1. equal - area projections . * these projectionspreserve the area ( the size ) between the map and the model of the earth .in other words , every section of the map keeps a constant ratio to the area of the earth which it represents .some of these projections are albers with one or two standard parallels ( the conical equal - area ) , the bonne , the azimuthal and lambert cylindrical equal - area which are best applied to a local area of the earth , and some of them are world maps such as the sinusoidal , the mollweide , the parabolic , the hammer - aitoff , the boggs eumorphic , and eckert iv . * 2 . conformal projections . *these projections maintain the shape of an area during transformation from the earth to a map .these projections include the mercator , the lambert conformal with one standard parallel , and the stereographic .these projections are only applicable to limited areas on the model of the earth for any one map .since there is no practical use for conformal world maps , conformal world maps are not considered .* 3 . conventional projections . *these projections are neither equal - area nor conformal , and they are designed based on some particular applications .some examples are the simple conic , the gnomonic , the azimuthal equidistant , the miller , the polyconic , the robinson , and the plate carree projections . 
in this paper , we only show the derivation of plotting equations on a map for the mercator and lambert cylindrical equal - area for a spherical model of the earth ( section [ section2 ] ) , the albers with one standard parallel and the azimuthal for a spherical model of the earth and the lambert conformal with one standard parallel for a spheroidal model of the earth ( section [ section5 ] ) , the sinusoidal ( section [ section6 ] ) , the simple conic and the plate carree projections ( section [ section7 ] ) . the methods to obtain other projections are similar to these projections , and the reader is referred to .suppose that a terrestrial glob is covered with infinitesimal circles . in order to show distortions in a map projection, one may look at the projection of these circles in a map which are ellipses whose axes are the two principal directions along which scale is maximal and minimal at that point on the map . this mathematical contrivance is called _ tissot sindicatrix_. usually tissot s indicatrices are placed across a map along the intersections of meridians and parallels to the equator , and they provide a good tool to calculate the magnitude of distortions at those points ( the intersections ) . in an equal - area projection , tissot s indicatrices change shape ( from circles to ellipses ) , whereas their areas remain the same . in conformal projection , however , the shape of circles preserves , and the area varies . in conventional projection , both shape and size of these circles change . in this paper , we portrayed the mercator , the lambert cylindrical equal - area , the sinusoidal and the plate carree maps with tissot s indicatrices . in section [ section8 ] , the equations for distortions of length , area and angle are derived , and distortion in length for the albers projection and in length and area for the mercator projection are calculated , .in this section , by an elementary method , we show the cylindrical method that mercator used to map from a spherical model of the earth to a flat sheet of paper . also , we give the plotting equations for the lambert cylindrical equal - area projection . then, in section [ section3 ] , we obtain the gaussian fundamental quantities , and show a routine mathematical way to find plotting equations for different map projections . this section is based mostly on .let be the globe , and be a circular cylinder tangent to along the equator , see fig .[ cylinder ] .projecting along the rays passing through the center of onto , and unrolling the cylinder onto a vertical strip in a plane is called _ central cylindrical projection_. clearly , each meridian on the sphere is mapped to a vertical line to the equator , and each parallel of the equator is mapped onto a circle on the cylinder and so a line parallel to the equator on the map .all methods discussed in this section and other sections are about central projection , i.e. , rays pass through the center of the earth to a cone or cylinder .methods for those projections that are not central are similar to central projections ( see ) .let be the width of the map .the scale of the map along the equator is that is the ratio of size of objects drawn in the map to actual size of the object it represents .the scale of the map usually is shown by three methods : arithmetical ( e.g. 1:6,000,000 ) , verbal ( e.g. 100 miles to the inch ) or geometrical . 
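the central cylindrical construction just described can be written down directly: rays through the centre of the globe send a point at latitude $\phi$ and longitude $\lambda$ to $x = R\lambda$, $y = R\tan\phi$ on the unrolled cylinder, where a globe of radius $R = W/2\pi$ is used for a map of width $W$ (writing $W$ for the map width, our notation). the following python sketch is an illustrative implementation of this construction and of the arithmetical scale along the equator; the function names, the sample map width and the mean earth radius are our own assumed values, not taken from the text.

```python
import math

EARTH_RADIUS_M = 6.371e6  # mean radius of the spherical model (assumed value)

def central_cylindrical(lat_deg, lon_deg, map_width=1.0):
    """Central cylindrical projection onto a cylinder tangent at the equator.

    A globe of radius R = map_width / (2*pi) is used so the unrolled map has
    the requested width; x runs along the equator, y along the meridians.
    """
    R = map_width / (2.0 * math.pi)
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return R * lon, R * math.tan(lat)

def equatorial_scale(map_width=1.0, earth_radius=EARTH_RADIUS_M):
    """Arithmetical scale along the equator: map length / true length."""
    return map_width / (2.0 * math.pi * earth_radius)

# example: a 0.5 m wide world map
x, y = central_cylindrical(45.0, 30.0, map_width=0.5)
print(x, y)                                        # ~0.0417, ~0.0796 (metres)
print("1 : %.0f" % (1.0 / equatorial_scale(0.5)))  # roughly 1 : 80,000,000
```

note how $y$ grows without bound toward the poles; mercator's construction, derived next, replaces the tangent by a slower-growing function chosen so that the projection becomes conformal.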
at latitude ,the parallel to the equator is a circle with circumference , so the scale of the map at this latitude is where the subscript stands for horizontal .assume that and are in radians , and the origin in the cartesian coordinate system corresponds to the intersection of the greenwich meridian ( ) and the equator ( ) .then every cylindrical projection is given explicitly by the following equations for instance , it can be seen from fig .[ cylinder ] that a central cylindrical projection is given by where for a map of width , a globe of radius is chosen . in a globe ,the arc length between latitudes of and ( in radians ) along a meridian is and the image on the map has the length .so the overall scale factor of this arc along the meridian when gets closer and closer to is where the subscript stands for vertical .the goal of mercator was to equate the horizontal scale with vertical scale at latitude , i.e. , .thus , from eqs . and , mercator was not be able to solve equation [ conmer ] precisely because logarithms were not invented ! but now , we know that the following is the solution to eq .( use to make the constant coming out from the integration equal to zero ) , thus , the equations for the mercator conformal projection ( central cylindrical conformal mapping ) are fig .[ mer11 ] shows the mercator projection with tissot s indicatrices that do not change their shape ( all of them are circles indicating a conformal projection ) while their size get larger and larger toward the poles .now if the goal is preserving size rather than shape , then we would make the horizontal and vertical scaling reciprocal , so the stretching in one direction will match shrinking in the other .thus , from eqs . and , we obtain or where is a constant . from eqs . and , we can choose in such away that for a given latitude , the map also preserves the shape in that area .for instance if , then we choose , and so the map near equator is conformal too . hence , the equations for the cylindrical equal - area projection ( one of lambert s maps ) are fig .[ lam11 ] shows the lambert projection with tissot s indicatrices that do not change their size ( indicating an equal - area projection ) while their shape are changing toward the poles .in this section , we derive the first fundamental form for a general surface that completely describes the metric properties of the surface , and it is a key in map projection , . the vector at any point on the surface is given by .if either of parameters or is held constant and the other one is varied , a space curve results , see fig .[ ff ] .the tangent vectors to -curve and -curve at point are respectively as follows : the total differential of is the first fundamental form ( e.g. , ) is defined as the dot product of eq . with itself : where , and are known as the gaussian fundamental quantities . * from eq ., the distance between two arbitrary points and on the surface can be calculated : * the angle between and is simply given by * incremental area is the magnitude of the cross product of and , i.e. , since we are dealing with latitudes and longitudes on a spherical or spheroidal model of the earth , the vectors and are orthogonal ( meridians are normal to equator parallels ) .also , in maps , we are dealing with the polar and cartesian coordinate systems in which their axes are perpendicular .thus , from eq . , because , one obtains . 
therefore , the first fundamental form in map projection will be deduced to the following form : [ ex1 ] the first fundamental form for a planar surface \1 . in the cartesian coordinate system ( a cylindrical surface )is where \2 . in the polar coordinate system ( a conical surface ) is where and \3 . in the spherical model of the earth , eq . , is where and , and \4 . in the spheroidal model of the earth , eq . , is where and in which is the radius of curvature in meridian and is the radius of curvature in prime vertical which are both functions of : see fig .[ ellips ] , and for the derivations of and . ., width=377,height=302 ] now suppose that and are the parameters of the model of the earth with the fundamental quantities , and .consider a two - dimensional projection with parametric curves defined by the parameters and .for instance , for the polar or conical coordinates , we have and .let and be its fundamental quantities .also , assume that on the plotting surface a second set of parameters , and , with the fundamental quantities , and .the relationship between the two sets of parameters on the plane is given by as an example , and for the polar and cartesian coordinates .the relationship between the parametric curves , , and is eq . must be unique and reversible , i.e. , a point on the earth must represent only one point on the map and vice versa . from eqs . and , we have from the definition of the gaussian first fundamental quantities , we have note that in here and in are replaced by and , respectively .similarly , we have as we mentioned earlier , since we are dealing with orthogonal curves , . using this fact and eqs . , and , the following relation can be derived ( see section x chapter 2 in ) : from eq ., a mapping from the earth to the plotting surface requires that from eqs ., , and using , one obtains where is the jacobian determinant of the transformation from the coordinate set and to the coordinate set and . by a theorem of differential geometry ( see ) , a mapping for the orthogonal curves is conformal if and only if this section , we describe how much the latitudes and longitudes of a spheroidal model of the earth will be effected once they are transformed to a spherical model , i.e. , how much distortion in shape and size happens when one projects a spheroidal model of the earth to a spherical model , .we distinguish two cases , equal - area transformation and conformal transformation .* case 1 . * a spherical model of the earth that has the same surface area as that of the reference ellipsoidis called the _ authalic sphere_. this sphere may be used as an intermediate step in the transformation from the ellipsoid to the mapping surface .let , and be the authalic radius , latitude and longitude , respectively .also , let and be the geodetic latitude and longitude , respectively . from example [ ex1 ] , we have , , and . by eqs . and, in the transformation from the ellipsoid to the authalic sphere , longitude is invariant , i.e. , .moreover , is independent of and so .. reduces to substitute the values of and ( given in example [ ex1 ] ) into eq . to obtain integrating the left hand side of eq . from to ( using binary expansion ) , and the right hand side from to , one obtains assuming when , eq .gives : substituting eq . 
into eq ., one obtains since the eccentricity is a small number , the above series are convergent .the relation between authalic and geodetic latitudes is equal at latitudes and , and the difference between them at other latitudes is about for the wgs-84 spheroid ( see for the definitions of the wgs-84 and wgs-72 spheroids ) .\1 . for the wgs-72 spheroid with m and ,the radius of the authalic sphere is \2 . for the i.u.g.g spheroid with , we have , and from eq ., for geodetic latitude , we have which gives * case 2 . *a conformal sphere is an sphere defined for conformal transformation from an ellipsoid , and similar to the authalic sphere may be used as an intermediate step in the transformation from the reference ellipsoid to a mapping surface .let , and be the conformal radius , latitude and longitude for the conformal sphere , respectively .let and be the same fundamental quantities as case 1 , and and .also , let and .thus , from eq ., combining eqs . and, one obtains that after integrating and simplifying with the condition for , it gives one can calculate from eq . which is a function of geodetic latitude .also , it can be shown that for a given latitude which in this case .we refer to chapter 5 section 3 in for the derivation .in this section , we describe the albers one standard parallel ( equal - area conic projection ) and lambert one standard parallel ( conformal conic projection ) at latitude which give good maps around that latitude ( cf . , ) .we start with some geometric properties in a cone tangent to a spherical model of the earth at latitude . in fig .[ cone ] , and are two meridians separated by a longitude difference of , and is an arc of the circle parallel to the equator .we have and and approximately .+ therefore , the first polar coordinate , , is a linear function of , i.e. , the second polar coordinate , , is a function of , i.e. , the constant of the cone , denoted , is defined from the relation between lengths on the developed cone on the earth .let the total angle on the cone , , corresponding to on the earth be , where is the circumference of the parallel circle to the equator at latitude , and .thus , and the constant of the cone is defined as . * case 1*. the albers projection .consider a spherical model of the earth .from example [ ex1 ] , we know that the first fundamental quantities for the sphere are and and for a cone ( the polar coordinate system ) are and . hence , from eqs . and , using eqs . and , eq . becomes solving eq . by knowing the fact that an increase in corresponds to a decrease in , one gets imposing the boundary condition into eq . , , and so after some simplifications , eq .becomes the cartesian plotting equations for a conical projection are defined as follows : where is the scale factor , and are given respectively by eqs .and , and . the origin of the projection has the coordinates ( the longitude of central meridian ) and . fig .[ al ] shows the albers projection with one standard parallel .if we let , then eqs . andreduce to that are the polar coordinates for the azimuthal equal - area projection , a special case of the albers projection , see fig .[ la ] .* case 2 . * the lambert projection . in this case, we consider a spheroidal model of the earth . from example[ ex1 ] , the fundamental quantities for this model are and , and the fundamental quantities for a cone are and . again using eqs . and, eq . 
becomes substituting these values in eq ., integrating , simplifying and noting that increases as decreases , one gets where the cartesian equations are the same as eq . with these new and .[ lamco ] shows the lambert projection with one standard parallel .in this section , we only discuss about the sinusoidal equal - area projection that is a projection of the entire model of the earth onto a single map , and it gives an adequate whole world coverage , .consider a spherical model of the earth with the fundamental quantities and .the first fundamental quantities on a planar mapping surface is . substituting these fundamental quantities into eq .( using eq . ) , one gets which by imposing the conditions and reduces to taking the positive square root of eq . and using the fact that and are independent , one obtains , and so by integrating . using the boundary condition when , one gets , and so the plotting equations for the sinusoidal projection become as follow ( and in radians ) : where is the scale factor .[ sinusoidal1 ] shows a normalized plot for the sinusoidal projection . in this map ,the meridians are sinusoidal curves except the central meridian which is a vertical line and they all meet each other in the poles .this is why this map is known as the sinusoidal map .the axis is also along the equator . the inverse transformation from the cartesian to geographic coordinatesis simply calculated from eq .in this section , we give the plotting equations for two conventional projections , the simple conic projection ( one standard parallel ) and the plate carree projection ( cf . , ) . as we mentioned earlier , these projections neither preserve the shape nor do they preserve the size , and they are usually used for simple portrayals of the world or regions with minimal geographic data such as index maps .the simple conic projection is a projection that the distances along every meridian are true scale .suppose that the conic is tangent to the spherical model of the earth at latitude , see fig .[ conventional ] . in this figure, we have .we want to have , but .thus the polar coordinates for this projection are replacing these values into eq .gives its cartesian coordinates .the plate carree , the equirectangular projection , is a conventional cylindrical projection that divides the meridians equally the same way as on the sphere .also , it divides the equator and its parallels equally .the plate carree plotting equations are very simple : where and are in radians . fig .[ platec ] shows the plate carree map with tissot s indicatrices which are changing their shape and size when moving toward the poles indicating that this map is neither equal - area nor conformal .in this section , we discuss about three types of distortions from differential geometry approach : distortions in length , area and angle , and we present them in term of the gaussian fundamental quantities ( cf . , ) .the distortion in length is defined as the ratio of a length of a line on a map to the length of the true line on a model of the earth .more precisely , from eq ., the distortion along the meridians ( ) is , and along the lines parallel to the equator ( ) is .the distortion in area is defined as the ratio of an area on a map to the true area on a model of the earth . 
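before turning to the distortion formulas, the plotting equations derived above (sections [ section2 ], [ section6 ] and [ section7 ]) can be collected in one place. the python sketch below records their standard closed forms for a spherical model of radius $R$ (latitude and longitude in radians, central meridian at zero longitude, unit scale factor); it is an illustrative summary consistent with the derivations above, and the function names are our own rather than the paper's.

```python
import math

def mercator(lat, lon, R=1.0):
    """Mercator (cylindrical conformal): x = R*lon, y = R*ln(tan(pi/4 + lat/2))."""
    return R * lon, R * math.log(math.tan(math.pi / 4.0 + lat / 2.0))

def lambert_cylindrical(lat, lon, R=1.0):
    """Lambert cylindrical equal-area: x = R*lon, y = R*sin(lat)."""
    return R * lon, R * math.sin(lat)

def sinusoidal(lat, lon, R=1.0):
    """Sinusoidal (pseudocylindrical equal-area): x = R*lon*cos(lat), y = R*lat."""
    return R * lon * math.cos(lat), R * lat

def plate_carree(lat, lon, R=1.0):
    """Plate carree (equirectangular, conventional): x = R*lon, y = R*lat."""
    return R * lon, R * lat

if __name__ == "__main__":
    lat, lon = math.radians(40.0), math.radians(-75.0)
    for proj in (mercator, lambert_cylindrical, sinusoidal, plate_carree):
        print(proj.__name__, proj(lat, lon))
```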
from eq .( ) , the area on the map is , and the corresponding area on the model of the earth is .thus , the distortion in area is in equal - area map projections , from eq ., .* the distortion in angle is defined as ( in percentage ) : where is the angle on a model of the earth ( the azimuth ) , and is the projected angle on a map ( the azimuth on the map , cf . ,[ distortion ] ) . in order to obtain as a function of the fundamental quantities and ,we first calculate . from fig .[ distortion ] , we have hence , define now the goal is to find the roots of .this can be done by newton s iteration as follows : where the iteration is rapidly convergent by letting . in conformal mapping , from eq ., , and so the function will have a unique solution ( ) . in this example , we show the distortions in length in the albers projection with one standard parallel .from example [ ex1 ] , the first fundamental form for the map is and the first fundamental form for the spherical model of the earth is taking the derivatives of eqs . and , one obtains respectively .substitute the above equations into eq . to get substituting and in eq .gives the total length distortion .also , which are functions of .clearly , in this example , we first use the first fundamental form to obtain the plotting equations for the mercator projection , and then we show its length and area distortion . from example[ ex1 ] , the first fundamental form for the cylindrical surface ( the cartesian coordinate system ) is taking the derivative of eq . and substituting in eq ., one finds where is the scale of the map along the equator , and the first fundamental quantities for the spherical model of the earth are and . substituting these fundamental quantities in eq . andsimplifying , one obtains it is easy to see that integrating the above differential equation and applying the boundary condition , eq . follows . by eq ., therefore , substituting eqs . and in eq ., the length distortion will be it can be seen that , and so from eq . , the distortion in area for the mercator projection is hence , in the mercator projection both length and area distortions are functions of not .there are a number of map projections used for different purposes , and we discussed about three major classes of them , equal - area , conformal , and conventional .users may also create their own map based on their projects by starting with a base map of known projection and scale . in this paper , in cylindrical projections , we assume that the cylinder is tangent to the equator . making the cylinder tangent to other closed curves on the earthresults good maps in areas close to the tangency .this is also applied for conical and azimuthal projections . in all projections from a 3-d surface to a 2-d surface , there are distortions in length , shape or size that some of them can be removed ( not all ) or minimized from the map based on some specific applications .we also noticed in section [ section4 ] that projecting a spheroidal model of the earth to a spherical model of the earth will also distort length , shape and angle .davies , r. e. and foote , f. s. and kelly , j. e. , surveying : theory and practice , mcgraw - hill , new york ( 1966 ) deetz , c. h. and adams , o. s. , elements of map projection , spec . publ .68 , coast and geodetic survey , u. s. govt .printing office , washington , d. c. ( 1944 ) goetz , a. , introduction to differential geometry , addison - wesley , reading , ma ( 1958 ) osserman , r. 
, mathematical mapping from mercator to the millennium, mathematical sciences research institute (2004)
pearson, f., map projections: theory and applications, boca raton, florida (1999)
richardus, p. and adler, r. k., map projections for geodesists, cartographers, and geographers, north holland, amsterdam (1972)
steers, j. a., an introduction to the study of map projection, university of london (1962)
thomas, p. d., conformal projection in geodesy and cartography, spec. publ. 68, coast and geodetic survey, u. s. govt. printing office, washington, d. c. (1952)
vanicek, p. and krakiwsky, e. j., geodesy: the concepts, pp. 697, amsterdam, the netherlands / university of new brunswick, canada (1986)
|
in this paper , we introduce some known map projections from a model of the earth to a flat sheet of paper or map and derive the plotting equations for these projections . the first fundamental form and the gaussian fundamental quantities are defined and applied to obtain the plotting equations and distortions in length , shape and size for some of these map projections . the concepts , definitions and proofs in this work are chosen mostly from .
|
the problem of the so - called `` coffee - drop deposit '' has recently aroused great interest .the residue left when coffee dries on the countertop is usually darkest and hence most concentrated along the perimeter of the stain .ring - like stains , with the solute segregated to the edge of a drying drop , are not particular to coffee .mineral rings left on washed glassware , banded deposits of salt on the sidewalk during winter , and enhanced edges in water color paintings are all examples of the variety of physical systems displaying similar behavior and understood by coffee - drop deposit terminology . understanding the process of drying of such solutions is important for many scientific and industrial applications , where ability to control the distribution of the solute during drying process is at stake .for instance , in the paint industry , the pigment should be evenly dispersed after drying , and the segregation effects are highly undesirable .also , in the protein crystallography , attempts are made to assemble the two - dimensional crystals by using evaporation driven convection , and hence solute concentration gradients should be avoided . on the other hand , in the production of nanowires or in surfacepatterning perimeter - concentrated deposits may be of advantage .recent important applications of this phenomenon related to dna stretching in a flow have emerged as well .for instance , a high - throughput automatic dna mapping was suggested , where fluid flow induced by evaporation is used for both stretching dna molecules and depositing them onto a substrate .droplet drying is also important in the attempts to create arrays of dna spots for gene expression analysis .ring - like deposit patterns have been studied experimentally by a number of groups .difficulties of obtaining a uniform deposit , deformation of sessile drops due to a sol - gel transition of the solute at the contact line , stick - slip motion of the contact line of colloidal liquids , and the effect of ring formation on the evaporation of the sessile drops were all reported .the evaporation of the sessile drops ( regardless of solute presence ) has also been investigated extensively .constancy of the evaporation flux was demonstrated , and the change of the geometrical characteristics ( contact angle , drop height , contact - line radius ) during drying was measured in detail .the most recent and complete experimental effort to date on coffee - drop deposits was conducted by robert deegan _et al . _most experimental data referred to in this work originate from observations and measurements of this group .they reported extensive results on ring formation and demonstrated that these could be quantitatively accounted for .the main ideas of the theory of solute transfer in such physical systems have also been developed in their work .it was observed that the contact line of a drop of liquid remains pinned during most of the drying process . while the highest evaporation occurs at the edges , the bulk of the solvent is concentrated closer to the center of the drop . in order to replenish the liquid removed by evaporation at the edge , a flow from the inner to the outer regions must exist inside the drop . 
this flow is capable of transferring all of the solute to the contact line and thus accounts for the strong contact - line concentration of the residue left after complete drying .the idea of this theory is very robust since it is independent of the nature of the solute and only requires the pinning of the edge during drying ( which can occur in a number of possible ways : surface roughness , chemical heterogeneities _ etc _ ) .this theory accounts quantitatively for many phenomena observed experimentally ; among other things , we will reproduce its main conclusions in this work .mathematically , the most complicated part of this problem is related to determining the evaporation rate from the surface of the drop .an analogy between the diffusive concentration fields and the electrostatic potential fields was suggested , so that an equivalent electrostatic problem can be solved instead of the problem of evaporation of a sessile drop .important analytical solutions to this equivalent electrostatic problem in various geometries were first derived by lebedev .a number of useful consequences from these analytical results were later reported in ref . . in this work, we discuss the theory of solute transfer and deposit growth in evaporating sessile droplets on a substrate and provide quantitative account for many observed phenomena and measurement results .chapter 2 discusses the main ideas and the general theory ; all principal equations are derived in that chapter . while most of its equations have been reported previously , the derivation presented here is original and deals with some mathematical issues never fully addressed before in the context of the current problem .chapter 2 is the basis for all the following chapters of this work , and its content is required for all the other chapters .while the principal equations are fully derived and presented in chapter 2 , their solution depends heavily on the geometry of the drop .the flow pattern discussed in this work is a type of hydrodynamic flow that is sensitive to the perimeter shape , _i.e. _ the shape of the contact line .mathematically , solution to the differential equations depends on the boundary conditions .chapters 3 and 4 discuss the analytical solution to this problem in two important geometries .the two geometries are the drops with circular boundary ( round drops ) and the drops with angular boundary ( pointed drops ) .the choice of these two geometries is not accidental .an arbitrary boundary line can be represented as a sequence of smooth segments , which can be approximated by circular arcs , and fractures , which can be approximated by angular regions .thus , knowledge of analytical solution for both circular ( chapter 3 ) and angular ( chapter 4 ) boundary shapes fills out the quantitative picture of solute transfer and deposit growth for an arbitrary drop boundary . the case of the round dropsis the most important from the practical point of view and the easiest to deal with mathematically .this case allows for a full analytical solution , and this solution has been obtained earlier . here , in chapter 3 , we reproduce concisely the earlier results and report some new ones . 
its content is a prerequisite to chapter 5 , but is not essential for understanding chapter 4 ( although it is used for drawing some parallels and for comparison of the results in the two geometries ) .the case of the pointed drops ( chapter 4 ) , while also important , is much more complicated mathematically than the round - drop solution is .presence of the vertex of the angle introduces a singularity at this vertex in addition to the weaker singularity at the contact line .singularities govern the solutions to differential equations , and thus presence of the angle and its vertex changes the results substantially .also , an angular region , as a mathematical object , is infinite , while a circular region is always bounded .the real drops with a fracture must always have a third ( the furthest from the vertex ) side of its contact line , and therefore the overall solution depends on the shape of that furthest part of the boundary . at the same time , we are interested only in the universal features of the solution that are independent of that furthest part . keeping in mind this lust for universality and the mathematical complexity mentioned above , we specify only one boundary of the drop ( the vertex and the two sides of the angle ) leaving the remainder of the boundary curve unspecified .such an approach turns out to be sufficient to determine the universal features of the solution , and it allows us to find all the important singularities as power laws of distance from the vertex of the angle .most of the results of chapter 4 were originally obtained in our earlier works .chapters 4 and 5 are completely independent of each other .chapters 24 address the issue of the deposit mass accumulation at the drop boundary , however , they treat the solute particles as if they do not occupy any volume , and hence all the solute can be accommodated at the one - dimensional singularity of the contact line . in reality , the solute deposit accumulated at the perimeter of a drying drop has some thickness , for instance , the shape of the solute residue in a round drop is more like a ring rather than an infinitely thin circumference of the circle .we attribute this finite volume of the solute deposit simply to the finite size of the solute particles , _i.e. _ we assume the particles do occupy some volume and hence can not be packed denser than certain mass per unit volume .a model accounting for the finite size of the deposit and determining its geometric characteristics ( height , width ) is considered in chapter 5 .the model is solved for the simplest case of circular geometry , and its results are compared to the zero - volume results of chapter 3 and to the experimental results of refs .both the analytical and the numerical solutions are provided , and both compare well with the experimental data ( and with each other ) .results of chapter 5 are presented for the first time .a chapter of conclusions completes this work . 
before proceeding to the main matter, we would like to point out some features common to all our results and giving rise to the title of this work. the flows found here are capable of transferring 100% of the solute to the contact line, and thus account for the strong perimeter concentration of the solute in all cases. many quantities scale as power laws; some others follow different functional dependencies. however, both the exponents of the power laws and any elements of the other functional dependencies turn out to be _ universal _ within our model. they do not depend on any parameters of the system other than the system geometry. within the range of applicability of our theory, there are no fitting parameters, no undetermined constants, and no unknown coefficients. thus, for instance, the exponents of the power laws are as universal as the exponent of distance in coulomb's law. we view this universality as one of the main advantages and one of the most exciting features of this theory. in this chapter we will present the general theory of solute transfer to the contact line that will subsequently be solved analytically for two important geometries. we consider a sessile droplet of solution on a horizontal surface (substrate). the nature of the solute is not essential for the mechanism. the typical diameter of the solute particles in deegan's experiments was of the order of 0.1-1 μm; we will assume a similar order of magnitude throughout this work. for smaller particles diffusion becomes more important compared to the hydrodynamic flows of this work. for larger particles sedimentation may become important when the particles exceed a certain size. the droplet is bounded by the contact line in the plane of the substrate. the (macroscopic) contact line is defined as the common one-dimensional boundary of all three phases (liquid, air and solid substrate). we will not specify the shape of the contact line in this chapter; in the later chapters we will make this selection and describe the two qualitatively different limits: circular drops and angular drops. as we explained in the introduction, these shapes account for the smooth and fractured segments of an arbitrary contact line. the main equations are not sensitive to the particular geometry; only the boundary conditions are. we assume that the droplet is sufficiently small so that the surface tension is dominant, and the gravitational effects can be safely neglected. mathematically, this is controlled by the bond number, which accounts for the balance of surface tension and gravitational force on the surface shape; the relevant quantities are the density of the fluid, the gravitational constant, a typical size of the drop in the plane of the substrate, the maximal height of the drop at the beginning of the drying process, and the surface tension at the liquid-air interface. for the typical experimental conditions the bond number is of the order of 0.02-0.05, and thus gravity indeed is unimportant and the surface shape is governed mostly by the surface tension. at the same time, we do _ not _ assume that the contact angle between the liquid-air interface and the plane of the substrate is constant along the contact line on the substrate, nor do we assume it is constant in time.
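returning briefly to the smallness of the bond number asserted above: the quick numerical sketch below evaluates it for typical millimetre-scale water drops. we assume here the grouping Bo = ρ g h_max R / σ (hydrostatic pressure compared with capillary pressure for a thin cap) together with assumed typical parameter values; with these assumptions the estimate indeed falls in the quoted 0.02-0.05 range. this is an illustrative estimate, not a calculation from the original work.

```python
# Rough Bond-number estimate for a drying water droplet.
# Assumed grouping: Bo = rho * g * h_max * R / sigma (hydrostatic vs. capillary
# pressure for a thin spherical cap). All parameter values are assumptions.

rho = 1.0e3        # water density, kg/m^3
g = 9.81           # gravitational acceleration, m/s^2
sigma = 0.072      # surface tension of water/air, N/m
theta_i = 0.3      # initial contact angle, rad (assumed typical value)

for R in (1.0e-3, 1.5e-3):          # footprint radius, m
    h_max = R * theta_i / 2.0       # apex height of a thin spherical cap
    Bo = rho * g * h_max * R / sigma
    print(f"R = {R*1e3:.1f} mm  ->  Bo = {Bo:.3f}")   # ~0.02 and ~0.05
```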
to achieve a prescribed boundary shape ( other than a perfect circle on an ideal plane ) , the substrate must have scratches , grooves or other inhomogeneities ( sufficiently small compared to the dimensions of the droplet ) , which _ pin _ the contact line .a strongly pinned contact line can sustain a wide range of ( macroscopic ) contact angles .the contact angle is not fixed by the interfacial tensions as it is on a uniform surface ( fig .[ contactangleeps ] ) . throughout this workwe will deal with small contact angles ( ) as is almost always the case in the experimental realizations ( typically , .3 ) ; however , the general equations of this chapter do not rely on the smallness of the contact angle .we will use the cylindrical coordinates throughout this work , as most natural for both geometries of interest .coordinate is always normal to the plane of the substrate , and the plane itself is described by , with being positive on the droplet side of the space .coordinates are the polar radius and the azimuthal angle , respectively , in the plane of the substrate .the origin is chosen at the center of the circle in the circular geometry and at the vertex of the angle in the angular geometry .the geometry of the problem is quite complicated despite the visible simplicity . in both geometries of interest, we consider an object , whose symmetry does not match the symmetry of any simple orthogonal coordinate system of the three - dimensional space .for instance , solution of the laplace equation ( needed below ) requires introduction of the special coordinate systems ( the toroidal coordinates and the conical coordinates ) with heavy use of various special functions .similar difficulties related to the geometry arise in the other parts of the problem as well .we describe the surface shape of the drop by local mean curvature that is spatially uniform at any given moment of time , but changes with time as droplet dries .ideally , the surface shape should be considered dynamically together with the flow field inside the drop .however , as we show below , for flow velocities much lower than the characteristic velocity ( where is surface tension and is dynamic viscosity ) , which is about 24 m / s for water under normal conditions , one can consider the surface shape independently of the flow and use the equilibrium result at any given moment of time for finding the flow at that time .another way of expressing the same condition is to refer to the capillary number ( where is the characteristic value of the flow velocity of the order of 110 m/s ) , which is the ratio of viscous to capillary forces .this ratio is of the order of under typical experimental conditions , clearly demonstrating that capillary forces are by far the dominant ones in this system and that surface shape is practically equilibrium and depends on time adiabatically .we consider _ slow _ flows , _ i.e. 
_ flows with low reynolds numbers ( also known as `` creeping flows '' ) .this amounts to the neglect of the inertial terms in the navier - stokes equation .as all the conditions above are , this is well justified by the real experimental conditions .we also employ the so - called `` lubrication approximation '' .it is essentially based on the two conditions reflecting the thinness of the drop and resulting from the separation of the vertical and horizontal scales .one is that the pressure inside the drop does not depend on the coordinate normal to the substrate : .the other is related to the small slope of the free surface , which is equivalent to the dominance of the -derivatives of any component of flow velocity : ( index refers to the derivatives with respect to any coordinate in the plane of the substrate ) .the lubrication approximation is a standard simplifying procedure for this class of hydrodynamic problems . before proceeding to the main section of this chapter and formulating the main ideas of the theory, we will make a brief note on evaporation rate . in order to determine the flow caused by evaporation, one needs to know the flux profile of liquid leaving each point of the surface by evaporation .this quantity will be seen to be independent of the processes going on inside the drop and must be determined prior to considering any such processes . the functional form of the evaporation rate ( defined as evaporative mass loss per unit surface area per unit time ) depends on the rate - limiting step , which can , in principle , be either the transfer rate across the liquid - vapor interface or the diffusive relaxation of the saturated vapor layer immediately above the drop .we assume everywhere that the rate - limiting step is diffusion of liquid vapor ( fig .[ rlp ] ) and that evaporation rapidly attains a steady state .indeed , the transfer rate across the liquid - vapor interface is characterized by the time scale of the order of s , while the diffusion process has characteristic times of the order of ( where is the diffusion constant for vapor in air and is a characteristic size of the drop ) , which is of the order of seconds for water drops under typical drying conditions . also , the ratio of the time required for the vapor - phase water concentration to adjust to the changes in the droplet shape ( ) to the droplet evaporation time is of the order of , where is the density of saturated vapor just above the liquid - air interface and is the ambient vapor density .thus , indeed , vapor concentration adjusts rapidly compared to the time required for water evaporation , and the evaporation process can be considered quasi - steady .as the rate - limiting process is the diffusion , the density of vapor above the liquid - vapor interface obeys the diffusion equation .since diffusion rapidly attains a steady state , this diffusion equation reduces to the laplace equation this equation is to be solved together with the following boundary conditions dependent on the geometry of the drop : ( a ) along the surface of the drop the air is saturated with vapor and hence at the interface is the constant density of saturated vapor , ( b ) far away from the drop the density approaches the constant ambient vapor density , and ( c ) vapor can not penetrate the substrate and hence at the substrate outside of the drop . 
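the separation of time scales invoked above can be checked with order-of-magnitude numbers. the sketch below uses assumed typical values for water drying into air at room conditions (none of the numbers are quoted from the text) to evaluate the vapour relaxation time R²/D, the ratio (n_s - n_∞)/ρ that controls quasi-steadiness, and the reynolds and capillary numbers that justify the creeping-flow and fixed-surface-shape approximations.

```python
# Order-of-magnitude checks for the approximations above; all parameter values
# are assumed typical numbers for a drying water drop, not values from the text.

D = 2.4e-5      # diffusivity of water vapour in air, m^2/s
R = 1.0e-3      # drop size, m
n_s = 2.3e-2    # saturated vapour density near room temperature, kg/m^3
n_inf = 0.0     # ambient vapour density (dry air), kg/m^3
rho = 1.0e3     # liquid water density, kg/m^3
mu = 1.0e-3     # dynamic viscosity of water, Pa*s
sigma = 0.072   # surface tension of water/air, N/m
u = 5.0e-6      # typical depth-averaged flow speed, m/s (assumed)

t_vapour = R**2 / D                   # time for the vapour field to adjust
quasi_steady = (n_s - n_inf) / rho    # ~ t_vapour / (drying time), must be << 1
Re = rho * u * R / mu                 # creeping flow if << 1
Ca = mu * u / sigma                   # equilibrium surface shape if << 1

print(f"vapour relaxation time ~ {t_vapour:.2e} s")
print(f"(n_s - n_inf)/rho      ~ {quasi_steady:.1e}")
print(f"Re ~ {Re:.1e},  Ca ~ {Ca:.1e}")
```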
having found density of vapor , one can obtain the evaporation rate , where is the diffusion constant .this boundary problem is mathematically equivalent to that of a charged conductor of the same geometry at constant potential if we identify with the electrostatic potential and with the electric field .moreover , since there is no component of normal to the substrate , we can further simplify the boundary problem by considering a conductor of the shape of our drop plus its reflection in the plane of the substrate in the full space instead of viewing only the semi - infinite space bounded by the substrate ( fig .[ evaprateeps ] ) .this reduces the number of boundary conditions to only two : ( a ) on the surface of the conductor , and ( b ) at infinity .the shape of the conductor ( the drop and its reflection in the substrate ) is now symmetric with respect to the plane of the substrate .this plane - symmetric problem of finding the electric field around the conductor at constant potential in infinite space is much simpler than the original problem in the semi - infinite space , and this will be the problem we will actually be solving in order to find the evaporation rate for the two geometries of interest .particular solution in each of the geometries will be presented in the next two chapters .having formulated physical assumptions intrinsic to the theory , we are now in position to state its main ideas . the essential idea behind the theory has been developed in works of deegan _ et al . _it is an experimental observation that the contact line of a drop of liquid is pinned during most of the drying process . while the highest evaporation occurs at the edges , the bulk of the solvent is concentrated closer to the center of the drop . in order to replenish the liquid removed by evaporation at the edge , a flow from the inner to the outer regions must exist inside the drop .this flow is capable of transferring all of the solute to the contact line and thus accounts for the strong contact - line concentration of the residue left after complete drying .thus , a pinned contact line entails fluid flow toward that contact line .the `` elasticity '' of the liquid - air interface fixed at the contact line provides the force driving this flow . to develop this idea mathematically, we ignore for a moment any solute in the liquid .once the flow is found , one can track the motion of the suspended particles , since they are just carried along by the flow .the purpose of this section is to describe a generic method for finding the hydrodynamic flow of the liquid inside the drop , which is a prerequisite to knowing the details of solute transfer .we define depth - averaged flow velocity by where is the in - plane component of the local three - dimensional velocity .then we write the conservation of fluid mass in the form where is the time , is the density of the fluid , and each of the quantities , and is a function of , and .( we will drop the part of the second term everywhere in the following since it is always small compared to unity , as will be seen below . 
)this equation represents the fact that the rate of change of the amount of fluid in a volume element ( column ) above an infinitesimal area on the substrate ( third term ) is equal to the negative of the sum of the net flux of liquid out of the column ( first term ) and the amount of mass evaporated from the surface element on top of that column ( second term ) ; fig .[ consmasseps ] illustrates this idea .thus , this expression relates the depth - averaged velocity field to the liquid - vapor interface position and the evaporation rate .however , this is only one equation for two variables since vector has generally two components in the plane of the substrate .moreover , while the evaporation rate is indeed independent of flow , the free - surface shape should in general be determined simultaneously with .thus , there are actually three unknowns to be determined together ( and two components of velocity , say , and ) , and hence two more equations are needed .these additional equations will be of the hydrodynamic origin .we start with the navier - stokes equation with inertial terms omitted ( low reynolds numbers ) : where is the fluid pressure , is the dynamic viscosity , and is the velocity . applying lubrication - approximation conditions and , we arrive at the simplified form of this equation where index again refers to the vector components along the substrate . from now on we will suppress the subscript at the symbol of nabla - operator , and will assume for the rest of this work that this operator refers to the two - dimensional vector operations in the plane of the substrate .solution to the above equation with boundary conditions ( no slip at the substrate and no stress at the liquid - air interface ) yields or , after vertical averaging ( [ defv ] ) , this result is a variant of the darcy s law . note that since a curl of a gradient is always zero and is a constant , the preceding equation can be re - written as this condition is analogous to the condition of the potential flow ( ) , but with a quite unusual combination of the velocity and the surface height in place of the usual velocity .relation ( [ darcy ] ) provides the two sought equations in addition to the conservation of mass ( [ consmass ] ) .however , it contains one new variable , pressure , and hence another equation is needed .this last equation is provided by the condition of the mechanical equilibrium of the liquid - air interface ( also known as the young - laplace equation ) relating the pressure and the surface shape : here is the atmospheric pressure , is the surface tension , and is the mean curvature of the surface , uniquely related to the surface shape by differential geometry .note that this expression is independent of both the conservation of mass ( [ consmass ] ) and the darcy s law ( [ darcy ] ) .thus , the complete set of equations required to fully determine the four dynamic variables , , , and consists of four differential equations ( together with the appropriate boundary conditions at the contact line , which are dependent on the particular geometry of the drop ) : one equation of the conservation of mass ( [ consmass ] ) , two equations of the darcy s law ( [ darcy ] ) , and one equation of the mechanical equilibrium of the interface ( [ mechequil ] ) .they provide all the necessary conditions to solve the problem at least in principle . in practice , however , solution of these four _ coupled _ differential equations is not possible in most geometries of practical interest . 
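the darcy-type relation quoted above, v̄ = -(h²/3μ)∇p, follows from the lubrication-limit stokes equation with no slip at the substrate and no shear stress at the free surface. the short sympy sketch below reproduces that one-dimensional calculation symbolically; it is an illustration of the intermediate step, not code from the original work.

```python
import sympy as sp

z, h, mu = sp.symbols('z h mu', positive=True)
dpdx, C1, C2 = sp.symbols('dpdx C1 C2')

# lubrication limit of the Stokes equation: mu * u''(z) = dp/dx, dp/dx independent of z
u = sp.integrate(sp.integrate(dpdx / mu, z) + C1, z) + C2

# boundary conditions: no slip at the substrate (z=0), no shear stress at the free surface (z=h)
consts = sp.solve([u.subs(z, 0), sp.diff(u, z).subs(z, h)], [C1, C2])
u_profile = u.subs(consts)

# depth-averaged velocity: v = (1/h) * integral of u over the film thickness
v_avg = sp.simplify(sp.integrate(u_profile, (z, 0, h)) / h)

print(sp.simplify(u_profile))  # dpdx*z*(z - 2*h)/(2*mu): half-Poiseuille profile
print(v_avg)                   # -dpdx*h**2/(3*mu): the Darcy-type law used in the text
```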
at the same time , under normal drying conditions the viscous stress is negligible , or , equivalently , the typical velocities are much smaller than ( for water under normal conditions ) . as we show in the appendix , the four equations _ decouple _ under these conditions . as a result, one can employ the equilibrium result for the surface shape at any given moment of time , and then determine the pressure and the velocity fields for this fixed functional form of .mathematically , the original system of equations can be rewritten as : where , , and and are the leading and the first - order terms in the expansion of pressure in a small parameter inversely proportional to ( see the appendix for details ) .note that is independent of , although it does depend on time ( this time dependence will be determined later in this work ) .therefore , there is a profound difference between equations ( [ mechequil ] ) and ( [ laplace ] ) : the former is a local statement , with the right - hand side depending on the coordinates of a point within the drop , while the latter is a global condition of spatial constancy of the mean curvature throughout the drop .equation ( [ laplace ] ) defines the _ equilibrium _ surface shape for any given value of at any given moment of time , and moreover , can be solved independently of the other equations .thus , the procedure for finding the solution becomes significantly simplified : first find the equilibrium surface shape from condition ( [ laplace ] ) and independently specify the functional form of the evaporation rate from an equivalent electrostatic problem , then solve equation ( [ psi ] ) for the reduced pressure , and finally obtain the flow field according to prescription ( [ vpsi ] ) .the next two chapters will be devoted to the particular steps of this procedure for the two geometries of interest . in the next section, we will describe how knowledge of flow inside the drop allows one to find the rate of solute transfer to the contact line and determine the laws of deposit growth . with the velocity field inside the drop in hand, we can compute the rate of the deposit growth at the contact line .we assume that the suspended particles are carried along by the flow with velocity equal to the fluid velocity . integrating the velocity field : we find the streamline equation or , _ i.e. _ the trajectory of each particle as it moves with the fluid .this streamline equation is independent of the overall intensity of evaporation [ since both and depend on it as a multiplicative factor dropping out of eq .( [ vel - ratio ] ) ] , and thus the shape of the streamlines is universal for each geometry of the drop . 
physically , this indicates that solute particles move along the same trajectories independently of how fast evaporation occurs and hence how fast the flow is .given the shape of the streamlines , we can compute the time it takes an element of fluid ( having started from some initial point and moving along a streamline ) to reach the contact line .this time can be found by integrating both sides of either or with known dependences of or on the variables and known relation between and on the streamline .the integrations are to be conducted from or ( at time 0 ) to or ( at time ) , respectively , where is the terminal endpoint of the trajectory on the contact line .thus , the initial location of particles that reach the contact line at time is characterized by .first , only particles initially located near the contact line reach that contact line .as time goes by , particles initially located further away from the contact line and in the inner parts of the drop reach the contact line .finally , particles initially located in the innermost parts of the drop ( _ e.g. _ at the center of a round drop or near the bisector of an angular drop ) reach the contact line as well .the more time elapsed , the more particles reached the contact line and the larger the area is where they were spread around initially .one can view this process as inward propagation of the inner boundary of the set of initial locations of the particles that have reached the contact line by time .as is easy to understand , the velocity of this front is equal to the negative of the vector of fluid velocity at each point ( the fluid and the particles move towards the contact line while this front moves away from it , hence a minus sign ) . within the time computed in the preceding paragraph _ all _ the solute that lays on the way of an element of fluid as it moves toward the contact line becomes part of the deposit .now , we use our knowledge of the initial distribution of the solute , namely , that the solute has constant concentration everywhere in the drop at time , and compute the mass of the deposit accumulated at the contact line by that time .the mass of the deposit accumulated at the contact - line element of length can be found by integrating over area between two infinitesimally close streamlines ( terminating apart on the contact line ) swept by the infinitesimal element of fluid on its way to the contact line and multiplying the result by the initial concentration of the solute : obviously , this mass will depend of the initial location of the element of fluid .thus , both the time elapsed from the beginning of the drying process ( ) and the deposit mass accumulated at the contact line ( ) depend on the initial coordinates of the arriving element of fluid ( only one of the coordinates is actually an independent variable the other is constrained by the streamline equation ) . eliminating these coordinates from the expressions for time and mass ,one can finally obtain the deposit mass accumulated at the contact line as a function of time elapsed from the beginning of the drying process .thus , the procedure described allows one to find the rate of solute transfer and the laws of deposit growth .since we use depth - averaged velocity throughout this work , we implicitly assume that there is no vertical segregation of the solute .in this chapter we will provide the full solution to the problem of solute transfer in the case of round drops . 
in this geometry ,the contact line is the circumference of a circle ( and the origin of the cylindrical coordinates is located at the center of that circle ) .this case is of most importance from the point of view of practical applications and at the same time is the easiest to treat analytically .most results can be derived in a closed analytical form , and many of them have been obtained in earlier works .some of them , however , are presented for the first time , and some correct earlier expressions .we will follow the procedure explained in great detail in the preceding chapter .first , one needs to determine the equilibrium shape of the liquid - air interface . in circular geometrythis task is particularly easy .let be the radius of the circular projection of the drop onto the plane of the substrate and be the ( macroscopic ) contact angle .then the solution to eq .( [ laplace ] ) with boundary condition is just a spherical cap . in cylindrical coordinatesfunction is independent of and can be written as where the radius of the footprint of this cap and the contact angle are related via the right - hand side of eq .( [ laplace ] ) : both and change with time during drying process ; however , the radius of the footprint stays constant . in most experimental realizations andthe preceding expression takes even simpler form : thus , for small , quantity is indeed small ( since ) and can be safely neglected with respect to unity ( _ i.e. _ the free surface of the drop is nearly horizontal ) as was asserted in the preceding chapter .knowledge of the surface shape allows one to find all the necessary geometrical characteristics of the drop , for instance , its volume or the total mass of the water ( or any other fluid the drop is comprised of ) where is the density of the water and is a function of the contact angle : ( the last equality is an expansion in limit ) .the other prerequisite to determining the flow inside the drop is the evaporation rate from the free surface of the drop .this task involves solution of the equivalent electrostatic problem ( the laplace equation ) for the conductor of the shape of the drop plus its reflection in the plane of the substrate ( kept at constant potential , as a boundary condition ) . 
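the spherical-cap geometry above admits simple closed forms that are easy to verify numerically. the sketch below assumes the standard expressions h(r) = sqrt(R²/sin²θ - r²) - R/tanθ for the free surface and f(θ) = (cos³θ - 3cosθ + 2)/(3 sin³θ) for the volume factor in V = πR³ f(θ), which reduces to θ/4 at small contact angles; we believe these correspond to the relations quoted above, and the quadrature check below confirms that they are mutually consistent. the parameter values are assumed, not taken from the text.

```python
import math
from scipy.integrate import quad

def cap_height(r, R, theta):
    """Free-surface height of a spherical cap with footprint radius R and contact angle theta."""
    return math.sqrt((R / math.sin(theta))**2 - r**2) - R / math.tan(theta)

def f_exact(theta):
    """V / (pi R^3) for a spherical cap; tends to theta/4 as theta -> 0."""
    return (math.cos(theta)**3 - 3.0 * math.cos(theta) + 2.0) / (3.0 * math.sin(theta)**3)

R, theta = 2.0e-3, 0.3                       # footprint radius (m) and contact angle (rad), assumed
V_quad, _ = quad(lambda r: 2.0 * math.pi * r * cap_height(r, R, theta), 0.0, R)
V_formula = math.pi * R**3 * f_exact(theta)

print(V_quad, V_formula)                      # agree to quadrature accuracy (~1.9e-9 m^3)
print(f_exact(theta), theta / 4.0)            # ~0.0761 vs 0.075 (small-angle limit)
```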
in the case of the round drop the shape of this conductorresembles a symmetrical double - convex lens comprised of two spherical caps .the system of orthogonal coordinates that matches the symmetry of this object ( so that one of the coordinate surfaces coincides with the surface of the lens ) is called the toroidal coordinates , where coordinates and are related to the cylindrical coordinates and by and the azimuthal angle has the same meaning as in cylindrical coordinates .solution to the laplace equation in toroidal coordinates involves the legendre functions of fractional degree and was derived in a book by lebedev .the expression for the electrostatic potential or vapor density in toroidal coordinates obtained in that book is independent of the azimuthal angle and reads here is the density of the saturated vapor just above the liquid - air interface ( or the potential of the conductor ) , is the ambient vapor density ( or the value of the potential at infinity ) , and are the legendre functions of the first kind ( despite the presence of in the index , these functions are real valued ) .the surface of the lens is described by the two coordinate surfaces and , and the derivative is normal to the surface .evaporation rate from the surface of the drop is therefore given by where is the diffusion constant and is the metric coefficient in coordinate .[ note that an incorrect expression for with a plus sign in the metric coefficient was used in eq .( a2 ) of ref . . ] carrying out the differentiation , one can obtain an exact analytical expression for the absolute value of the evaporation rate from the surface of the drop as a function of the polar coordinate : p_{-1/2 + i\tau}(\cosh\alpha ) \ , \tau d\tau\right ] , \label{j - circular}\ ] ] where the toroidal coordinate and the polar coordinate are uniquely related on the surface of the drop : thus , expression ( [ j - circular ] ) provides the exact analytical expression for the evaporation rate as a function of distance to the center of the drop for an arbitrary contact angle .this expression also corrects an earlier expression of ref . [ eq . ( 28 ) ] where a factor of in the second term inside the square bracket is missing .the expression for the evaporation rate is not operable analytically in most cases , as it represents an integral of a non - trivial special function ( which , in its turn , is an integral of some simpler elementary functions ) . in most cases, we will need to recourse to an asymptotic expansion for small contact angles in order to obtain any meaningful analytical expressions .however , there is one exception to this general statement ( not reported in the literature previously ) .an important quantity is the total rate of water mass loss by evaporation , which sets the time scale for all the processes .this total rate can be expressed as an integral of the evaporation rate ( defined as evaporative mass loss per unit surface area per unit time ) over the surface of the drop : where the first integration is over the substrate area occupied by the drop .this expression actually involves triple integration : one in the expression above as an integral of , another in expression for as an integral of the legedre function of the first kind , and the third as an integral representation of the legendre function in terms of elementary functions .however , one can significantly simplify the above expression and reduce the number of integrations from three to one . 
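to get a feel for the scales these formulas set, the sketch below evaluates the thin-drop (small contact angle) closed forms for the total evaporation rate, -4DR(n_s - n_∞), and the drying time, πρR²θ_i/[16D(n_s - n_∞)], which we believe correspond to the small-angle results derived just below; all parameter values are assumed typical numbers for a millimetre-scale water drop, not values quoted in the text.

```python
import math

# Thin-drop estimates: dM/dt = -4*D*R*(n_s - n_inf) and
# t_f = pi*rho*R**2*theta_i / (16*D*(n_s - n_inf)). Parameter values assumed.

D = 2.4e-5        # vapour diffusivity in air, m^2/s
n_s = 2.3e-2      # saturated vapour density, kg/m^3
n_inf = 0.0       # ambient vapour density (dry air), kg/m^3
rho = 1.0e3       # liquid density, kg/m^3
R = 2.0e-3        # contact-line radius, m
theta_i = 0.3     # initial contact angle, rad

dMdt = -4.0 * D * R * (n_s - n_inf)
t_f = math.pi * rho * R**2 * theta_i / (16.0 * D * (n_s - n_inf))

print(f"dM/dt ~ {dMdt:.2e} kg/s")              # ~ -4.4e-9 kg/s
print(f"t_f   ~ {t_f:.0f} s  (~{t_f/60:.1f} min)")  # a few hundred seconds
```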
substituting and evaporation rate ( [ j - circular ] ) into eq .( [ dmdt - circular ] ) and changing variables and integration order a few times , one can obtain a substantially simpler result : \ , d\tau \right ] .\label{dmdtexact}\ ] ] [ note : eq .( 2.17.1.10 ) of ref . was used in this calculation . ]this result together with the time derivative of expression ( [ watermass - circular ] ) for total mass provides a direct method for finding the time dependence of for an _ arbitrary _ value of the contact angle . combining the last two equations , one can obtain a single differential equation for as a function of time : \ , d\tau \right ] .\label{dthetadtexact}\ ] ] having determined the dependence from this equation , one can obtain the time dependence of any other quantities dependent on the contact angle , for instance , the time dependence of mass from relation ( [ watermass - circular ] ) , or any other geometrical quantities of the preceding section . in practice , however , we will always use the limit of small contact angles in all the analytical calculations of this and the subsequent chapters .this is the limit of most practical importance , and the usage of this limit will help us keep the physical essence of the problem clear from the unnecessary mathematical elaborations .while for numerical purposes our exact expressions are absolutely adequate , the analytical calculations in a closed form can not be conducted any further for an arbitrary contact angle .expanding the right - hand side of eq .( [ dthetadtexact ] ) in small , we immediately obtain that the contact angle decreases _ linearly _ with time in the main order of this expansion : where we introduced the total drying time defined in terms of the initial contact angle : in the main order , the total rate of water mass loss is constant and water mass also decreases linearly with time : this linear time dependence during the vast majority of the drying process was directly confirmed in the experiments ; see fig .[ thetatimeeps ] .the dependence of the evaporation rate on radius ( linearity in ) was also confirmed experimentally and is known to hold true for the case of the diffusion - limited evaporation .before we entirely switch to the case of the small contact angles for the rest of this chapter , it is instructive to compare the small - angle analytical asymptotics of the preceding paragraph to the exact arbitrary - angle numerical results based on eqs .( [ dmdtexact ] ) and ( [ dthetadtexact ] ) . in figs .[ theta - teps ] and [ m - teps ] , we plot the numerical solutions for and , respectively , for several values of the initial contact angle and compare them to the small - angle asymptotics of eqs .( [ theta - circular ] ) and ( [ massofwater ] ) . in these figures , and are the initial contact angle and the initial mass of water in the drop , respectively .note that in these graphs is _ not _ the total drying time for each ; instead , it is just the combination of the problem parameters defined in eq .( [ dryingtime ] ) , which coincides with the total drying time only when . as is clear from these graphs , the total drying time_ increases _ with the increasing initial contact angle .however , it does not increase linearly as prescribed by the asymptotic expression ( [ dryingtime ] ) ; instead , it grows faster ( fig .[ tf - thetaieps ] ) .[ tf - thetaieps ] demonstrates that the small - angle approximation works amazingly well up to the angles as large as 45 degrees ( this can also be seen in figs . 
[ theta - teps ] and [ m - teps ] ) . therefore , working in the limit of small contact angles for the rest of this chapter , we will not lose any precision or generality for the typical experimental values of this parameter . lastly , we note that the large - angle corrections may be responsible for the observed non - linearity of the experimentally measured dependence , as is clear from the comparison of fig . [ m - teps ] ( theory ) and fig . [ thetatimeeps ] ( experiment ) .
[ fig . [ theta - teps ] : dependence of the contact angle on time ; different curves correspond to different initial contact angles ( the values of the parameter are shown at each curve ) , and the analytical small - angle result of eq . ( [ theta - circular ] ) is also provided ( the solid curve ) . ]
[ fig . [ m - teps ] : dependence of the water mass on time ; different curves correspond to different initial contact angles ( the values of the parameter are shown at each curve ) , and the analytical small - angle result of eq . ( [ massofwater ] ) is also provided ( the solid curve ) . ]
[ fig . [ tf - thetaieps ] : total drying time as a function of the initial contact angle ; the dashed curve represents the exact numerical result inferred from eq . ( [ dthetadtexact ] ) . ]
expression for the evaporation rate ( [ j - circular ] ) becomes particularly simple in the limit of small contact angles . employing one of the integral representations of the legendre function in terms of elementary functions [ eq . ( 7.4.7 ) of ref . ] , taking the limit and conducting a number of integrations , it is relatively straightforward to obtain the following result : which , upon identification for , can be further reduced to thus , for thin drops the expression for the evaporation rate reduces to an extremely simple result featuring the square - root divergence near the edge of the drop . the same result could have been obtained directly if we solved an equivalent electrostatic problem for an infinitely thin disk instead of the double - convex lens . it is particularly rewarding that after all the laborious calculations the asymptotics of our result is in exact agreement with elementary textbook predictions [ see ref . for the derivation of the one - over - the - square - root divergence of the electric field near the edge of a conducting plane in the three - dimensional space ] . eq . ( [ evaprate - circular ] ) will be widely used below for all thin drops of circular geometry . for the sake of completeness , it is also interesting to note the opposite limit of the expression ( [ j - circular ] ) , when the surface of the drop is a hemisphere ( ) . in this limit , a similar calculation can be conducted [ also employing eq . ( 7.4.7 ) of ref . and a couple of integrations ] , and the following simple result can be obtained : thus , a uniform evaporation rate is recovered . again , this result is in perfect agreement with the expectations ; the same result could have been obtained if we directly solved the laplace equation for a sphere ( the hemispherical drop and its reflection in the substrate ) . the uniform evaporation rate is a result of the full spherical symmetry of such a system . similar exact results can also be obtained for a few other discrete values of the contact angle ( _ e.g. _ for ) . with and in hand , we are in a position to find the flow inside the drop according to prescription ( [ psi])-([vpsi ] ) . in circular geometry this task is particularly easy , since , due to the symmetry , there is no component of the velocity ( _ i.e.
_ the flow is directed radially outwards ) and is independent of .equations ( [ psi ] ) and ( [ vpsi ] ) can be combined to yield for the radial component of the velocity .plugging eqs .( [ h - circular ] ) , ( [ theta - circular ] ) , and ( [ evaprate - circular ] ) into the above equation , one can obtain the following expression for the flow inside the drop under assumption of small contact angle : \right ) .\label{v - circular}\ ] ] this is the final expression for the velocity that we were looking for .the velocity diverges near the edge of the drop with a one - over - the - square - root singularity in at the contact line .this could have been deduced directly from the conservation of mass ( [ consmass ] ) , where the divergent evaporation rate must be compensated by the divergent velocity ( since the free - surface height is a regular function of coordinates and , moreover , vanishes near the contact line ) .physically , change of volume near the edge becomes increasingly smaller as the contact line is approached and hence the outgoing vapor flux must be matched by an equally strong incoming flow of liquid .in addition , the velocity diverges at the end of the drying time .since the amount of liquid removed by evaporation from the immediate vicinity of the contact line of a thin drop remains approximately constant , the amount of incoming fluid must also stay approximately constant over the drying time .this flux to the region in the vicinity of the contact line is proportional to , and hence velocity must scale as in terms of its time dependence . therefore , since the height decreases linearly with time ( in the main order in ) , time dependence is to be expected for the velocity .the result above displays this expected behavior .shape of the streamlines in the highly symmetric case of circular drops can be predicted without any calculations : these are the straight lines from the center of the drop to the contact line along the radius .the parameter characterizing each of these streamlines is simply the polar angle of the corresponding radial direction , and the variable along the streamline is . the time it takes an element of fluid to reach the contact line can be inferred from the differential equation . since dependences on distance and time in of eq .( [ v - circular ] ) are separable ( factorize ) , the integration of one side of this differential equation from ( the initial location of an element of fluid ) to in distance and the integration of the other side from 0 to in time are straightforward to carry out .the resulting dependence of the time it takes an element of fluid to reach the contact line on the initial position of this element of fluid can be written as ^{3/2 } = 1 .\label{time - circular}\ ] ] clearly , when , and when , as expected from the physical intuition ; however , the intermediate behavior is quite non - trivial , and no intuition would be of much help . 
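the small - angle results quoted in the last few paragraphs can be cross - checked numerically . the sketch below ( python ) uses the standard flat - disk ingredients as we read them from the ( stripped ) formulas : the evaporation rate J(r) = (2/pi) D ( c_s - c_inf ) / sqrt(R^2 - r^2) , its integral giving a constant total rate dM/dt = 4 D R ( c_s - c_inf ) , the drying time t_f = pi rho R^2 theta_i / [ 16 D ( c_s - c_inf ) ] , and the depth - averaged radial velocity reconstructed from the conservation of mass , r h v(r) = - integral_0^r [ J/rho + dh/dt ] r' dr' . the closed form printed for comparison , v = ( R / 4(t_f - t) ) (1/x) [ (1 - x^2)^{-1/2} - (1 - x^2) ] with x = r/R , is our own re - derivation under these small - angle assumptions and is not claimed to reproduce eq . ( [ v - circular ] ) verbatim ; the material parameters are illustrative values for a millimetre - size water drop .

```python
import numpy as np
from scipy.integrate import quad, cumulative_trapezoid

# illustrative parameters: water drop in air at ~25 C, 40% relative humidity
D, c_s, rho = 2.4e-5, 2.3e-2, 1.0e3        # m^2/s, kg/m^3, kg/m^3
c_inf = 0.4 * c_s
dc = c_s - c_inf
R, theta_i = 1.0e-3, 0.3                   # footprint radius (m), initial contact angle (rad)

def J(r):
    """Flat-disk (small contact angle) evaporation rate, 1/sqrt divergence at the edge."""
    return 2.0 * D * dc / (np.pi * np.sqrt(R**2 - r**2))

# constant total rate of mass loss and total drying time
total_rate, _ = quad(lambda r: 2.0 * np.pi * r * J(r), 0.0, R)
print(total_rate, 4.0 * D * R * dc)                     # the two agree
t_f = (np.pi * rho * R**3 * theta_i / 4.0) / total_rate
print("total drying time ~ %.0f s" % t_f)               # a few minutes for these numbers

# depth-averaged radial velocity at one instant, from conservation of mass
t = 0.3 * t_f
theta = theta_i * (1.0 - t / t_f)                       # linear decrease of the contact angle
r = np.linspace(1e-6, 0.999 * R, 4000)
h = theta * (R**2 - r**2) / (2.0 * R)                   # parabolic (thin-cap) profile
dhdt = -(theta_i / t_f) * (R**2 - r**2) / (2.0 * R)
flux = -cumulative_trapezoid((J(r) / rho + dhdt) * r, r, initial=0.0)
v_num = flux / (r * h)

x = r / R
v_closed = (R / (4.0 * (t_f - t))) * ((1.0 - x**2)**-0.5 - (1.0 - x**2)) / x
print(v_num[2000], v_closed[2000], "m/s at mid-radius")  # the two constructions agree
print(v_num[-1], "m/s near the contact line (grows without bound as r -> R)")
```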
the mass of the deposit accumulated at the contact line by time is given by eq .( [ mass - def ] ) , which for the case of azimuthal symmetry can be rewritten as where refers to the expression for the equilibrium surface shape ( [ h - circular ] ) at the beginning of the drying process ( with at ) .simple integration yields ^ 2 , \label{depositmass - circular}\ ] ] where is the total mass of the solute present initially in the drop .when an element of fluid starts from a vicinity of the contact line ( ) , the accumulated deposit mass is still virtually zero when it arrives at the contact line .when an element of fluid starts from the center of the drop ( ) , virtually all the solute ( ) is already at the contact line by the time the element of fluid reaches it . eliminating from expressions ( [ time - circular ] ) and ( [ depositmass - circular ] ) ,one can finally obtain the dependence of the deposit accumulated at the contact line on drying time : ^{4/3},\ ] ] which can be rewritten in a more symmetric form ( we do not have a simple and intuitive explanation for this symmetry . ) no solute ( ) is at the contact line at the beginning of the drying process ( ) , and all the solute ( ) is accumulated at the contact line when the drop has completely dried ( ) . at early times , the deposit mass scales with the drying time as a power law with exponent : as we will show in the next chapter , this scaling with time is universal for early drying times and should be observed in any geometry of the drop ( as long as the contact line is locally smooth ) . at the end of the drying process, the rate of the deposit accumulation diverges as : \qquad\qquad(|t_f - t| \ll t_f).\ ] ] it is this final divergence that is responsible for the experimentally observed 100% transfer of the solute to the edge .thus , in the case of circular geometry , it is relatively simple to obtain the desired scaling of the deposit mass at the contact line with time .the situation is far more complex in the case of angular geometry .this case is considered in the following chapter .in this chapter we will provide the asymptotic solution to the problem of solute transfer in the case of angular drops . in this geometry , the droplet is bounded by an angle in the plane of the substrate ( fig . [ cohen ] ) .we choose the origin of the cylindrical coordinates at the vertex of the angle , so that the angle occupied by the drop on the substrate is and ( fig .[ geometry ] ) . given the complexity of the geometry ,we seek an approximate solution that captures the essential physical features and correct at least asymptotically . here, in the geometry of an angular sector , the only possible locations of singularities and divergences ( which normally govern properties of the solutions to differential equations ) are at the vertex of the angle ( _ i.e. _ at ) and at its sides ( _ i.e. _ at ) .therefore , the most important physical features will be correctly reflected if asymptotic results as and as are found .thus , we limit our task to determining analytically only the asymptotic power - law scaling for most quantities , and this task proves to be sufficiently challenging by itself. we will also provide some numerical results that do not rely on these assumptions .most of the results presented in this chapter have been previously published in two our earlier papers , and hence we will often refer to those works for further details . 
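returning for a moment to the circular drop , the parametric elimination just described can also be carried out numerically as a check : one follows a fluid element from its initial reduced radius x0 = r_i / R to the contact line using the small - angle velocity field sketched above , and records the solute initially outside that radius . under these assumptions the elimination yields the closed form M_d(t)/M_0 = [ 1 - ( 1 - t/t_f )^{3/4} ]^{4/3} , whose early - time exponent is 4/3 and whose symmetric form reads ( 1 - t/t_f )^{3/4} + ( M_d/M_0 )^{3/4} = 1 ; this closed form is our own re - derivation consistent with the exponents visible in the text , not a quotation of the original equations . the sketch below ( python / scipy ) compares the two routes .

```python
import numpy as np
from scipy.integrate import solve_ivp

t_f = 1.0                          # work in units of the total drying time

def v(t, x):
    """Dimensionless small-angle radial velocity dx/dt, with x = r/R."""
    return ((1.0 - x**2)**-0.5 - (1.0 - x**2)) / (4.0 * (t_f - t) * x)

def arrival_time(x0):
    """Time at which a fluid element starting at x0 reaches the contact line."""
    hit = lambda t, x: x[0] - 0.999999          # stop just short of x = 1
    hit.terminal, hit.direction = True, 1
    sol = solve_ivp(v, (0.0, t_f * (1 - 1e-9)), [x0], events=hit,
                    rtol=1e-9, atol=1e-12)
    return sol.t_events[0][0]

for x0 in (0.2, 0.5, 0.8, 0.95):
    t_arr = arrival_time(x0)
    Md_numeric = (1.0 - x0**2)**2               # solute initially outside x0
    Md_closed = (1.0 - (1.0 - t_arr / t_f)**0.75)**(4.0 / 3.0)
    print(x0, t_arr, Md_numeric, Md_closed)     # the last two columns agree
```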
( a )( b ) the boundary problem for the equilibrium surface shape consists of the differential equation ( [ laplace ] ) and the boundary conditions at the vertex and at the sides of the angle : equation ( [ laplace ] ) represents the fact that the local mean curvature is spatially uniform , but changes with time as the right - hand side ( ) changes during the drying process .the asymptotic solution to the boundary problem ( [ laplace ] ) , ( [ boundary ] ) was found in our earlier paper .the result turned out to have two qualitatively different regimes in opening angle ( acute and obtuse angles ) and can be written as here and is the only function of time in this expression ; exponent has a discontinuous derivative at and is shown in fig .[ nueps ] ; and the constant can not be determined without imposing boundary conditions on at some curve on the side of the drop furthest from the vertex of the angle .it is restricted by neither the equation nor the side boundary conditions , and thus , is not a universal feature of the solution near the vertex of the angle .the constant can ( and does ) depend on the opening angle . as we showed in the earlier paper, this constant must have the following diverging form near : where is independent of .we will adopt this form of ( with set to unity ) for all numerical estimates for obtuse opening angles . in the power law ( [ surfshap ] ) ] on opening angle , after ref . . ] two different values of corresponding to the acute and obtuse angles give rise to the two qualitatively different regimes for surface shape .this difference can best be seen from the fact that the principal curvatures of the surface stay finite as for acute angles and diverge as a power of for obtuse angles .this qualitative difference can be observed in a simple experimental demonstration , which we provided in our earlier work .we refer to that earlier work for further details and discussion .we only note here that the asymptotic at the vertex of the angle actually means ( which is typically of the order of a few millimeters for water under normal conditions ) , and that is indeed small for and can again be safely neglected with respect to unity ( _ i.e. _ the free surface is nearly horizontal in the vicinity of the tip of the angle ) as was asserted earlier . as we explained in chapter 2 , in order to determine the evaporation rate one needs to solve an equivalent electrostatic problem for a conductor of the shape of the drop and its reflection in the plane of the substrate kept at constant potential . in geometry of the angular sectorthis shape resembles a dagger blade , and one has to tackle the problem of finding the electric field around the tip of a dagger blade at constant potential in infinite space .if one decides to account for the thickness of the blade [ given by doubled of eq .( [ surfshap ] ) ] accurately , it becomes apparent that there is no hope for any analytical solution in this complex geometry .this is suggested by both the bulkiness of the expressions for a round drop with a non - zero contact angle in the preceding chapter and direct attempts to find the solution .however , taking into account that near the tip is very small and hence the thickness of the blade itself is very small , we can approximate our thick dagger blade with a dagger blade of thickness zero and the same opening angle ( _ i.e. _ with a flat angular sector ) . 
in the limit the contact angle scales with as and hence goes to zero .thus , only the flat blade can be considered up to the main order in .this approximation would not be adequate for determining the surface shape or the flow field , but it is perfectly adequate for finding the evaporation rate .we will discuss possible corrections to this result later in this section .the problem of finding the electric field and the potential for an infinitely thin angular sector in the three - dimensional space requires introduction of the so - called conical coordinates ( the orthogonal coordinates of the elliptic cone ) and heavily involves various special functions .luckily , it was studied extensively in the past , although the results can not be expressed in a closed form .an important conclusion from these studies is that the and dependences separate and that the electric field near the vertex of the sector scales with as a power law with an exponent depending on the opening angle : here and and are the eigenvalue and the eigenfunction , respectively , of the eigenvalue problem with dirichlet boundary conditions on the surface of an elliptic cone ( degenerating to an angular sector as ) . in the last relation the angular part of the laplacian in conical coordinates ( , , ) . on the surface of the sector ( _ i.e. _ at ) the relation between the conical coordinate and the usual polar coordinate is .we refer to work for further details . herewe notice only that neither nor can be expressed in a closed analytic form ; however , the exponent can be computed numerically and is shown in fig .[ mueps ] as a function of .note that this exponent is _ lower _ than similar exponents for corresponding angles for both a wedge ( a two - dimensional corner with an infinite third dimension ) and a circular cone , which are also shown in the figure .both these shapes ( wedge and cone ) allow simple analytical solutions but none of them would be appropriate for the zero - thickness sector . in the power law [ eqs .( [ j - def ] ) and ( [ evaprate ] ) ] on opening angle for an angular sector , after refs .the same dependencies for a wedge ( dashed line ) and a cone ( dotted line ) are also shown . ] despite the unavailability of an explicit analytical expression for , its analytic properties at and at are quite straightforward to infer .indeed , is an even function of ; therefore , ( as well as any other odd derivative on the bisector ) and for small .obviously , is positive . on the other hand , at the leading asymptotic of the evaporation rate ( or the electric field )is known to be with exponent corresponding to the edge of an infinitely thin half plane in the three - dimensional space .( we have introduced notation in the previous line . ) if one were to correct this asymptotic in order to reflect the non - zero contact angle at the edge of the sector , the asymptotic form at would have to be written as where is a positive constant and this result corresponds to the divergence of the electric field along the edge of a wedge of opening angle [ both the drop and its reflection contribute to the opening angle , hence a factor of 2 ] .however , accounting for the non - zero is a first - order correction to the main - order result .this can be seen from the expression for the contact angle : \propto \left(\frac{r}{r_0}\right)^{\nu-1}. 
\label{contactangle}\ ] ] for all opening angles ( except where ) .thus , the correction due to the non - zero contact angle can indeed be neglected in the main - order results , and should indeed be set to .nevertheless , we will keep the generic notation for this exponent in order to keep track of the origin of different parts of the final result and in order to account properly for the case in addition to the range of opening angles below .the numerical value of will be assumed to be in all estimates .the analytical results below will employ the asymptotics of this paragraph .thus , we will use the following expression for the evaporation rate : where function is defined in eq .( [ theta - def ] ) with asymptotics ( [ theta - bisector ] ) and ( [ theta - edge ] ) .here we broke down the constant prefactor into two pieces : a distance scale ( where is the substrate area occupied by the drop ) and all the rest ( which is of the dimensionality of the evaporation rate ) .trivially , is directly proportional to the product of the diffusion constant and the difference of the saturated and the ambient vapor densities , as was the case in the round - drop geometry .the evaporation rate above does not depend on time and the same form of applies during the entire drying process , since the diffusion process is steady and the variation of the contact angle with time does not influence the main order result ( [ evaprate ] ) .the same is true for the total rate of mass loss since where the integrations are over the substrate area occupied by the drop .we saw the same behavior in circular geometry .the constancy of this rate during most of the drying process was also confirmed experimentally .this fact can be used to determine the time dependence of the length scale of eq .( [ surfshap ] ) [ and hence of the pressure of eq .( [ laplace ] ) ] explicitly , as the mass of a sufficiently thin drop is inversely proportional to the mean radius of curvature : where we retained only the dimensional quantities and suppressed all the numerical prefactors sensitive to the details of the drop shape . from the last two equationsone can conclude that and remains constant during most of the drying process .hence , where is the initial mean radius of curvature ( ) and is the total drying time : thus , at early drying stages ( ) scale grows linearly with time ; this time dependence will be implicitly present in the results below .however , it is very weak at sufficiently early times and will be occasionally ignored ( by setting ) when only the main - order results are of interest . as in the case of circular geometry , with and in hand ,we proceed by solving eq .( [ psi ] ) for the reduced pressure .assuming power - law divergence of as and leaving only the main asymptotic ( which effectively means that we neglect the regular term with respect to the divergent one ) , we arrive at the following asymptotically - correct expression for : where time - dependence is implicitly present via and the function is a solution to the following differential equation : ( the combination is positive for all opening angles ) . 
computing according to prescription ( [ vpsi ] ), we obtain the depth - averaged flow field with components and thus , one needs to solve eq .( [ psi - equation ] ) with respect to in order to know the flow velocity .we were not able to find an exact analytical solution to this equation ; however , we succeeded in finding approximate solutions on the bisector ( ) and near the contact line ( ) , which represent the two opposite limits of the range of .( again , we define . )we describe these two solutions in great detail in our earlier work , and since the solutions themselves are not needed in this abbreviated account , we refer to this earlier work for further details . herewe will only mention the asymptotics of both solutions ( which can actually be inferred directly from the differential equation , without even solving it ) . near the contact line , in limit , the asymptotic of the solution is on the bisector , in the opposite limit , the asymptotic is \phi^2 .\label{psibisector}\ ] ] the reduced pressure on the bisector is positive .the value of can not be determined from the original differential equation ; one needs to employ an integral condition resulting from the equality of the total influx into a sector of radius by flow from the outer regions of the drop and the total outflux from this sector by evaporation : this condition is similar to an analogous condition for the circular drop ( which was written for the entire drop instead of only a certain part of it , since , unlike the infinite sector , the round drop was finite ) . upon simplificationthis condition reduces to the following equation defining the constant prefactor : \ , d\phi = 0 . \label{psi0}\ ] ] obviously , is proportional to . in order to compensate for the unavailability of the exact analytical solution to eq .( [ psi - equation ] ) we also approached this problem numerically .we will not describe the numerical procedure in full here as it was described in great detail in our earlier paper . here, we will only mention that we use two different trial forms of that are simple and at the same time satisfy proper asymptotics ( [ theta - bisector ] ) and ( [ theta - edge ] ) .these model forms are neither exact nor the only ones satisfying asymptotics , but they allow one to avoid the numerical solution of the eigenvalue problem ( [ eigenvalue ] ) and thus not to repeat the elaborate treatment of works .we use the results based on these trial forms only for illustrative purposes in order to picture general behavior of the solution for arbitrary values of the argument ( not only in the limiting cases ) . as we discussed in work , the difference between the numerical results based on the two model forms of did not exceed 1015% in most cases , and we have all reasons to believe that at least the orders of magnitude obtained by this approximation are correct .we would like to emphasize that only the _ numerical _ graphs based on the choice of are affected by these simplified forms ; all the _ analytical _ results below do not rely on a particular form of and use only the analytical asymptotics of the preceding section .the numerical solution to eq .( [ psi - equation ] ) satisfying conditions ( [ psi0 ] ) for and for was found for the two model forms of and for approximately 20 different values of the opening angle . 
in all cases perfect agreement between the numerical solution and the analytical asymptotics of this section was observed . two examples of the numerical solution together with the analytical asymptotics are provided in fig . [ psieps ] for opening angles and .
[ fig . [ psieps ] : the numerical solution together with the analytical asymptotics for two values of the opening angle . ]
characteristic behavior of the velocity field ( [ vresult ] ) is shown in fig . [ flowfield ] for and . note that despite the fact that the exponent of the power law in is not a smooth function of , the qualitative behavior of the flow field does not visibly change as the opening angle increases past the right angle .
[ fig . [ flowfield ] : characteristic behavior of the flow field ; each arrow represents the velocity at the point of the arrow origin . ]
the velocity diverges near the edge of the drop . at the sides of the angle , only the component diverges , as can be seen from fig . [ flowfield ] and expression ( [ vphi ] ) . this divergence near the sides is of exactly the same one - over - the - square - root dependence on distance as the divergence in the circular drop and has exactly the same origin . in addition , there is a divergence of both components of the velocity as when , as is apparent from expressions ( [ vr ] ) and ( [ vphi ] ) . this divergence is entirely new and is due to the presence of the vertex . the exponent of this divergence depends on the opening angle as both and do . this is the first in a set of indices for the pointed drop that are universal but depend on the geometry . similar indices will also be encountered for other physical quantities throughout this chapter . one must know the shape of the streamlines in order to be able to predict the scaling of the deposit growth , and now , with the velocity field in hand , we have everything needed to compute it . integration of the velocity field ( [ vr])([vphi ] ) according to eq . ( [ vel - ratio ] ) yields the streamline equation ( [ streamline ] ) , _ i.e. _ the trajectory of each particle as it moves with the fluid , where we assume that is positive here and everywhere below ( the generalization to the case of negative is obvious , as all functions of are even ) . thus , when , so that is the distance from the terminal endpoint of the trajectory to the vertex . in the limit the integral in the exponent goes to zero ( quadratically in ) and the streamline equation reduces to [ see ref . for details of all calculations in this section ] . the streamlines are perpendicular to the contact line ( up to the quadratic terms in ) . this is in good agreement with what one would expect near the edge of the drop , since the azimuthal component of the fluid velocity diverges at the side contact line while the radial component goes to zero .
in limit the integral in the exponent diverges as ] are unimportant .substitution of into the darcy s law ( [ darcy ] ) yields where .upon further substitution into the conservation of mass ( [ consmass ] ) , we obtain + \frac{j}{\rho } + \partial_t h = 0 , \label{consmas1}\ ] ] which , together with eq .( [ v - h ] ) , constitutes the full system of equations for finding and .now , for water under normal conditions , and .hence , the velocity scale is of the order of obviously , this is a huge value compared to the characteristic velocities encountered in usual drying process .therefore , one can develop a systematic series expansion in small parameter ( where is some characteristic value of velocity , say , 1 or 10 m/s ) : and keep only the and terms at the end in order to describe the process up to the main order in .a similar expansion can also be constructed for pressure : where are related to by equation ( [ p - h ] ) : physically , condition is equivalent to the statement that the viscous stress is negligible and that the capillary forces dominate .let us understand what and physically correspond to .plugging the expansions for and into the system ( [ v - h])([consmas1 ] ) , one obtains a set of terms for each power of , starting from and up .equating terms of the main order in yields the following two equations = 0,\ ] ] which both can be satisfied if and only if is a function of time only .writing it as we immediately identify this equation with the statement of spatial constancy of the mean curvature of the interface , which describes the _ equilibrium _ surface shape at any given moment of time ( _ i.e. _ we obtained equation ( [ laplace ] ) with the desired properties of ) .thus , is indeed the equilibrium surface shape , and so is ( up to the corrections of the order of ) . repeating the same procedure for the terms of the next order in , we arrive at another two equations : + \frac{j}{\rho } + \partial_t h_0 = 0,\ ] ] which can be seen to be equivalent to the set of equations ( [ psi])([vpsi ] ) upon identification . knowing the equilibrium surface shape , one can solve the second equation above with respect to the reduced pressure , and then obtain velocity by differentiating the result according to the first equation .thus , up to the corrections of the order of , one can first find the equilibrium surface shape at any given moment of time , and then determine the pressure and the flow fields for this fixed functional form of , as was asserted in the main text .jing , j. reed , j. huang , x. hu , v. clarke , j. edington , d. housman , t.s .anantharaman , e.j .huff , b. mishra , b. porter , a. shenkeer , e. wolfson , c. hiort , r. kantor , c. aston , d.c .schwartz , _ proc ._ * 95 * , 8046 ( 1998 ) .
|
the theory of solute transfer and deposit growth in evaporating sessile drops on a plane substrate is presented . the main ideas and the principal equations are formulated . the problem is solved analytically for two important geometries : round drops ( drops with circular boundary ) and pointed drops ( drops with angular boundary ) . the surface shape , the evaporation rate , the flow field , and the mass of the solute deposited at the drop perimeter are obtained as functions of the drying time and the drop geometry . in addition , a model accounting for the spatial extent of the deposit arising from the non - zero volume of the solute particles is solved for round drops . the geometrical characteristics of the deposition patterns as functions of the drying time , the drop geometry , and the initial concentration of the solute are found analytically for small initial concentrations of solute and numerically for arbitrary initial concentrations of solute . the universality of the theoretical results is emphasized , and comparison to the experimental data is made . _ to denis _
|
let denote the output of node in a neural network .hebbian learning ( hebb 1949 ) is a type of unsupervised learning where the neural network connection strengths are reinforced whenever the products are large . if is the correlation matrix and the hebbian learning law is local , all the lines of the connection matrix will converge to the eigenvector of with the largest eigenvalue . to obtain other eigenvector directionsrequires non - local laws ( sanger 1989 , oja 1989 , 1992 , dente and vilela mendes 1996 ) . these principal component analysis ( pca )algorithms find the characteristic directions of the correlation matrix .if the data has zero mean ( ) they are the orthogonal directions along which the data has maximum variance .if the data is gaussian in each channel , it is distributed as a hyperellipsoid and the correlation matrix already contains all information about the statistical properties .this is because higher order moments of the data may be obtained from the second order moments .however , if the data is non - gaussian , the pca analysis is not complete and higher order correlations are needed to characterise the statistical properties .this led some authors ( softy and kammen 1991 , taylor and coombes 1993 ) to propose networks with higher order neurons to obtain the higher order statistical correlations of the data .an higher order neuron is one that is capable of accepting , in each of its input lines , data from two or more channels at once .there is then a set of adjustable strengths , , , , being the order of the neuron .networks with higher order neurons have interesting applications , for example in fitting data to a high - dimensional hypersurface .however there is a basic weakness in the characterisation of the statistical properties of non - gaussian data by higher order moments .existence of the moments of a distribution function depends on the behaviour of this function at infinity and it frequently happens that a distribution has moments up to a certain order , but no higher ones .a well - behaved probability distribution might even have no moments of order higher than one ( the mean ) .in addition a sequence of moments does not necessarily determine a probability distribution function uniquely ( lukacs 1970 ) .two different distributions may have the same set of moments .therefore , for non - gaussian data , the pca algorithms or higher order generalisations may lead to misleading results . as an example consider the two - dimensional signal shown in fig.1 .2 shows the evolution of the connection strengths and when this signal is passed through a typical pca algorithm .large oscillations appear and finally the algorithm overflows .smaller learning rates do not introduce qualitative modifications in this evolution .the values may at times appear to stabilise , but large spikes do occur .the reason for this behaviour is that the seemingly harmless data in fig.1 is generated by a linear combination of a gaussian with the following distribution which has first moment , but no moments of higher order. 
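the instability illustrated in figs . 1 - 2 is easy to reproduce with a single - neuron pca rule . the sketch below ( python ; an illustration using oja s learning rule and a synthetic signal , not the exact data or algorithm behind the figures ) converges cleanly to the leading eigenvector for gaussian inputs , but the weights jump erratically , and may even overflow , when one channel contains a heavy - tailed component with a first moment but no higher ones ( a student - t variable with 1.5 degrees of freedom is used here as a stand - in for the distribution quoted above ) .

```python
import numpy as np

rng = np.random.default_rng(0)

def oja(data, eta=1e-3, n_epochs=5):
    """Single-neuron Hebbian/Oja rule: w <- w + eta * y * (x - y * w), y = w.x"""
    w = rng.normal(size=data.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in data:
            y = w @ x
            w += eta * y * (x - y * w)
    return w

n = 20000
A = np.array([[2.0, 0.6], [0.6, 1.0]])          # illustrative mixing matrix

gauss = rng.normal(size=(n, 2)) @ A.T
heavy = np.column_stack([rng.normal(size=n),
                         rng.standard_t(1.5, size=n)]) @ A.T   # infinite variance channel

print("gaussian data :", oja(gauss))   # close to the top eigenvector of A A^T
print("heavy-tailed  :", oja(heavy))   # large spikes / possible overflow, cf. fig. 2
```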
to be concerned with non - gaussian processes is not a pure academic exercise because , in many applications , adequate tools are needed to analyse such processes .for example , processes without higher order moments , in particular those associated with lvy statistics , are prominent in complex processes such as relaxation in glassy materials , chaotic phase diffusion in josephson junctions and turbulent diffusion ( shlesinger et al 1993 , zumofen and klafter 1993 , 1994 ) . moments of an arbitrary probability distribution may not exist .however , because every bounded and measurable function is integrable with respect to any distribution , the existence of the characteristic function is always assured ( lukacs 1970 ) . where and are n - dimensional vectors , is the data vector and its distribution function .the characteristic function is a compact and complete characterisation of the probability distribution of the signal .if , in addition , one wishes to describe the time correlations of the stochastic process , the corresponding quantity is the characteristic functional ( hida 1980 ) where is a smooth function and the scalar product is where is the probability measure over the sample paths of the process . in the followingwe develop an algorithm to compute the characteristic function from the data , by a learning process .the main idea is that in the end of the learning process we should have a neural network which is a representation of the characteristic function .this network is then available to provide all the required information on the probability distribution of the data being analysed . to obtain full information on the stochastic process, a similar algorithm might be used to construct the characteristic functional .however this turns out to be computationally very demanding .instead we develop a network to learn the transition function and from this the process may be characterised .suppose we want to learn the characteristic function ( eq . [ 1.2 ] ) of a one - dimensional signal in a domain ] .a similar network is constructed for the imaginary part of the characteristic function , where now for higher dimensional data the scheme is similar .the number of required nodes is for a -dimensional data vector .for example for the 2-dimensional data of fig.1 we have used a set of nodes ( fig.4 ) each node in the square lattice has two inputs for the two components and of the vector argument of .the learning laws are , as before \theta _ { ( ij)}\left ( t\right ) \chi _ { ( ij)}\left ( \overrightarrow{\alpha _ { ( kl)}}% \right ) \end{array}\ ] ] the pair denotes the position of the node in the square lattice and the radial basis function is two networks are used , one for the real part of the characteristic function , another for the imaginary part with , in eqs.([2.5 ] ) , replaced by . figs.5a - b shows the values computed by our algorithm for the real and imaginary parts of the characteristic function corresponding to the two - dimensional signal in fig.1 . on the left is a plot of the exact characteristic function and on the right the values learned by the network . 
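the quantities that the learning law drives each node towards are simply e[cos(alpha x)] and e[sin(alpha x)] at the node s value of alpha , so the essence of the scheme can be sketched with a plain online ( running - average ) estimate of the empirical characteristic function on a grid , leaving out the rbf interpolation stage ; the python sketch below is therefore a simplified stand - in for the network of fig . 3 , not the authors exact algorithm . it uses cauchy - distributed data , for which the exact characteristic function exp(-|alpha|) is known ; the kink at alpha = 0 signals the absence of a second moment , and the slope of -log|phi| near the origin plays the same role as the scaling exponent extracted below for the lévy - flight example .

```python
import numpy as np

rng = np.random.default_rng(1)

alphas = np.linspace(-5.0, 5.0, 81)      # grid of nodes (alpha values)
theta_re = np.zeros_like(alphas)         # node parameters, real part
theta_im = np.zeros_like(alphas)         # node parameters, imaginary part
eta = 5e-4                               # learning rate

for x in rng.standard_cauchy(200000):    # streamed one-dimensional data
    theta_re += eta * (np.cos(alphas * x) - theta_re)   # -> E[cos(alpha x)]
    theta_im += eta * (np.sin(alphas * x) - theta_im)   # -> E[sin(alpha x)]

phi_abs = np.hypot(theta_re, theta_im)
print(np.max(np.abs(phi_abs - np.exp(-np.abs(alphas)))))   # small for this run

# slope of log(-log|phi|) vs log|alpha| near the origin: ~1 for Cauchy data
# (first moment only); it tends towards 2 for data with a finite second moment
small = (np.abs(alphas) > 0) & (np.abs(alphas) < 1.0)
slope = np.polyfit(np.log(np.abs(alphas[small])),
                   np.log(-np.log(phi_abs[small])), 1)[0]
print("estimated exponent near the origin:", slope)
```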
in this case we show only the mesh corresponding to the values .one obtains a 2.0% accuracy for the real part and 4.5% accuracy for the imaginary part .the convergence of the learning process is fast and the approximation is reasonably good .notice in particular the slope discontinuity at the origin which reveals the non - existence of a second moment .the parameters used for the learning laws in this example were =0.00036 , =1.8 , =0.25 .the number of points in the training set is 25000 . for a second examplethe data was generated by a weierstrass random walk with probability distribution and b=1.31 , which is a process of the lvy flight type .the characteristic function , obtained by the network , is shown in fig . 6 .taking the the network output one obtains the scaling exponent 1.49 near =0 , close to the expected fractal dimension of the random walk path ( 1.5 ) .the parameters used for the learning laws in this example were =0.0005 , =1.75 , =0.1732 .the number of points in the training set is 80000 .these examples test the algorithm as a process identifier , in the sense that , after the learning process , the network is a dynamical representation of the characteristic function and may be used to perform all kinds of analysis of the statistics of the data .there are other ways to obtain the characteristic function of a probability distribution , which may be found in the statistical inference literature ( prakasa rao 1987 ) .our purpose in developing neural - like algorithms for this purpose was both to have a device that , after learning , is quick to evaluate and , at the same time , adjusts itself easily , through continuous learning , to changing statistics .as the pca algorithms that extract the full correlation matrix , our neural algorithm laws are also non - local . as a computation algorithmthis is not a serious issue , but for hardware implementations it might raise some problems .as we have stated before the full characterisation of the time structure of a stochastic process requires the knowledge of its characteristic functional ( eq . [ 1.3 ] ) for a dense set of functions . to construct an approximation to the characteristic functional we might discretize the time and the inner product in the exponential becomes a sum over the process sampled at a sequence of times . the problem would then be reduced to the construction of a multidimensional characteristic function as in section 2 . in practicewe would have to limit the time - depth of the functional to a maximum of time steps , being the maximum time - delay over which time correlations may be obtained . if is the number of different values for each , the algorithm developed in section 2 requires nodes in the intermediate layer and , for any reasonably large , this method becomes computationally explosive .an alternative and computationally simpler method consist in , restricting ourselves to markov processes , to characterise the process by the construction of networks to represent the transition function for fixed time intervals . 
from these networksthe differential chapman - kolmogorov equation may then be reconstructed .let be a one dimension markov process and its transition function , that is , the conditional probability of finding the value at time given at time .assume further that the process is stationary the network that configures itself to represent this function is similar to the one we used for the 2-dimensional characteristic function .it is sketched in fig.s 7a - b .it has a set of intermediate layer nodes with node parameters , the node with coordinates corresponding to the arguments in the transition function .the domain of both arguments is . for each pair in the data set ,the node parameters that are updated are those in the 4 columns containing the nearest neighbours of the point ( fig .the learning rule is where if is one of the nearest neighbours of the data point and zero otherwise . is a neighbourhood smoothing factor . is a column normalisation factor . in the limit of large learning timesthe node parameters approach the transition function as for the networks in section 2 , the intermediate layer nodes are equipped with a radial basis function ( eq . [ 2.6 ] ) and the connection strengths in the output additive node have a learning law identical to the second equation in ( [ 2.5 ] ) .the role of this part of the network is , as before , to obtain an interpolating effect . what the algorithm of eqs.([3.3 ] ) and( [ 3.4 ] ) does is to compute recursively the average number of transitions between points in the configuration space of the process .the spatial smoothing effect of the algorithm automatically insures a good representation of a continuous function from a finite data set .furthermore its recursive nature would be appropriate for the case of drifting statistics .for a stationary process , once the learning process has been achieved and if is chosen to be sufficiently small , the network itself may be used to simulate the stationary markov process . a complete characterisation of the process may also be obtained by training a few similar networks for different ( small ) values and computing the coefficient functions in the differential chapman - kolmogorov equation ( gardiner 1983 ) . the coefficients are obtained from the transition probabilities , noticing that for all is the jumping kernel , the drift and the diffusion coefficient . as an example we have considered a markov process with jumping , drift and diffusion .a typical sample path is shown in fig .three networks were trained on this process , to learn the transition function for , and ( ) .9 shows the transition function for and .10 shows two sections of the transition function for , that is , and .the networks were then used to extract the coefficient functions , and . to find the drift we use eq .( [ 3.8 ] ) .11 shows the computed drift function and a least square linear fit .also shown is a comparison with the exact drift function of the process . to obtain the diffusion coefficient we use ( [ 3.9 ] ) .12 shows the diffusion coefficient for different values . is the smallest time step used in the process simulation .therefore is our estimate for the diffusion coefficient . in this case , because the diffusion coefficient is found to be a constant , the value of the jumping kernel is easily obtained by integration around the local maxima of with . with .we conclude .the parameters used for the learning laws in this example were =0.48 , =0.00021 .the number of points in the training set is 1000000 .
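the counting - and - smoothing part of the algorithm , and the subsequent extraction of the drift and diffusion coefficients , can be sketched in a few lines . the example below ( python ) generates a simple langevin - type markov process with a linear drift and constant diffusion ( a simplified stand - in for the jump - diffusion process used in the figures , with no jump term ) , estimates the transition matrix by recursive column - wise averaging in the spirit of eqs . ( [ 3.3])-([3.4 ] ) but without the neighbourhood smoothing factor , and then roughly recovers the drift and diffusion from the short - time conditional moments as in eqs . ( [ 3.8])-([3.9 ] ) ; the 1/2 convention is used for the diffusion coefficient .

```python
import numpy as np

rng = np.random.default_rng(2)

# sample path of a stationary Markov process (linear drift, constant diffusion)
dt, n_steps, k_true, D_true = 1e-3, 200_000, 1.0, 0.5
x = np.empty(n_steps)
x[0] = 0.0
for i in range(1, n_steps):
    x[i] = x[i-1] - k_true * x[i-1] * dt + np.sqrt(2.0 * D_true * dt) * rng.normal()

# recursive (forgetting-factor) estimate of the transition matrix on a grid,
# with columns kept normalised; no neighbourhood smoothing here
n_bins, eps = 40, 1e-3
edges = np.linspace(x.min(), x.max(), n_bins + 1)
bins = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
P = np.full((n_bins, n_bins), 1.0 / n_bins)     # columns ~ p(x_new | x_old)
for j_old, i_new in zip(bins[:-1], bins[1:]):
    P[:, j_old] *= (1.0 - eps)
    P[i_new, j_old] += eps
print("columns sum to one:", bool(np.allclose(P.sum(axis=0), 1.0)))

# drift and diffusion from short-time conditional moments
centers = 0.5 * (edges[:-1] + edges[1:])
dx = np.diff(x)
cent, drift, diff_ = [], [], []
for j in range(n_bins):
    sel = bins[:-1] == j
    if sel.sum() > 50:                          # skip poorly populated bins
        cent.append(centers[j])
        drift.append(dx[sel].mean() / dt)
        diff_.append((dx[sel]**2).mean() / (2.0 * dt))
print("estimated drift slope (true -1.0):", np.polyfit(cent, drift, 1)[0])
print("estimated diffusion   (true  0.5):", np.mean(diff_))
```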
|
principal component analysis ( pca ) algorithms use neural networks to extract the eigenvectors of the correlation matrix from the data . however , if the process is non - gaussian , pca algorithms or their higher - order generalisations provide only incomplete or misleading information on the statistical properties of the data . to handle such situations we propose neural network algorithms , with a hybrid ( supervised and unsupervised ) learning scheme , which construct the characteristic function of the probability distribution and the transition functions of the stochastic process . illustrative examples are presented , which include cauchy and lévy - type processes .
|
the last years , we observe an exponential increase in the demand to evaluate the color image quality . to assess the performance of different image processing techniques , one has to measure the impact of the degradation induced by the processing in terms of perceived visual quality . for this purpose ,subjective measures based essentially on human observer opinions have been introduced .these visual psychophysical judgments ( detection , discrimination and preference ) are made under controlled viewing conditions ( fixed lighting , viewing distance , etc . ) , generate highly reliable and repeatable data , and are used to optimize the design of imaging processing techniques .the test plan for subjective video quality assessment is well guided by video quality experts group ( vqeg ) including the test procedure and subjective data analysis .objective image quality assessment models can be classified according to the information they use about the original image that is assumed to have perfect quality .full - reference ( fr ) methods measure the deviation of the degraded image with the original one . in practice, they can not be used when the reference image is not available . while fr methods are based on the knowledge of the reference image , no - reference ( nr ) methodsare designed in order to grade the image quality independently of the reference . as they are designed for one or a set of predefined specific distortion types they are useful only when the types of distortions between the reference and distorted images are fixed and known , and unlikely to be generalized .reduced - reference ( rr ) methods are designed to predict the perceptual quality of distorted images with only partial information about the reference images .depending on the amount of information of the original image used both limiting cases ( fr and nr ) can be achieved . in practicethe goal is to develop a rr metric using the minimum amount of information about the reference because this side information must be always available in order to grade the quality and therefore it can be costly to transmit it .+ the structural similarity index ( ssim ) and the visual signal to noise ratio ( vsnr ) , are examples of successful full reference methods which have shown to be pretty effective in predicting the quality scores of human subjects .these conventional methods can be naturally extended to color images . recently a reduced reference method based on the estimate of the structural similarity has been proposed by rehman _et al . _. another recent work of a reduced reference entropic difference method ( rred ) has been proposed by soundararajan _et al . _the distortion measure was computed separately by the entropy difference of wavelet coefficients .this method has shown good performances in term of correlation with the human visual perception . however , a great quantity of information from the original image is needed in order to reach a high quality score .indeed , the quantity of information extracted from the original image is a critical parameter of the reduced - reference schemes .methods based on natural image statistics modeling present a good trade - off between the quality score and the quantity of side information needed .the method proposed originally by wang _et al . _ has been improved by li _et al . _ . 
in thissetting , a statistical model of steerable - pyramid coefficients ( a redundant transform of wavelets family ) is used .the distortion between the statistics of the original and the processed image is computed using the kullback - leibler divergence .these models can be naturally extended to color images using the same marginal distribution for each color component .however such an approach does not take into account the mutual information shared by the three color components . in order to overcome this drawback , in this paper, we propose a solution that uses an estimate of the joint distribution of the three color components . unlike the rred method, few information from the original image is needed to evaluate color image quality . in the following ,we concentrate on the rgb color space .this choice is motivated by its simplicity and its suitability to our model , however our approach can be extended to alternative color space models .note that the multivariate generalized gaussian distribution has been proposed for rgb color texture modeling . based on this model , the parameters matrixare estimated by maximum likelihood method .finally , the proposed quality metric is computed by the kullback - leibler divergence between two mggd models .+ this paper is organized as follows .section 2 gives a brief review of the multivariate generalized gaussian distribution ( mggd ) . in section 3 , we explain how we use this distribution to introduce the dependencies between the color components and we present the distortion measure .section 4 concerns the experimental results and finally a concluding remarks are presented in section 5 .in this work we consider a particular case of the multivariate generalized gaussian distribution introduced by kotz . upon multivariate extension of the univariate generalized gaussian distributiona multivariate elliptical function does not exist .therefore , the mggd is inspired from the univariate zero - mean ggd , which is described by the following expression : }\exp[-(\nicefrac{|x|}{\alpha})^{\beta}],\ ] ] where denotes the gamma function , is a scale parameter and is the shape parameter .+ the multivariate generalized gaussian distribution is defined by the following expression : ^{\beta}\right\ } , \end{split}\ ] ] where is the dimensionality of the probability space ( for color space ) , is the shape parameter to control the peakedness of the distribution and the heaviness of its tails and is the covariance matrix .note that , the multivariate generalized gaussian is also sometimes called the multivariate exponential power distribution .we use the ml method for estimating the parameters of the mggd .this involves setting the differential to zero of the logarithm of in ( 2 ) . arranging the wavelet coefficients for a single subband in three - dimensional column vectors ,we get the following equations for and , respectively . - 1\right\}=0,\ ] ] here, denotes the digamma function and is defined by .these equations were solved recursively .at the receiver side , we use the features sent from the reference image to calculate the distortion measure . in previous work ,a number of authors have pointed out the relationship between kld and mggd and used kld to compare image statistics . 
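for a fixed shape parameter the maximum - likelihood equation for the scatter matrix can be solved by a simple fixed - point iteration , S <- (beta/n) * sum_i u_i^{beta-1} x_i x_i^T with u_i = x_i^T S^{-1} x_i , and beta itself can then be chosen by a one - dimensional search over the profile log - likelihood . the sketch below ( python / scipy ) implements this for three - channel data ; it is a generic mggd fitter written for this illustration , using the normalising constant of one common parameterisation of the density , and is not claimed to be the recursion actually used by the authors .

```python
import numpy as np
from scipy.special import gammaln

def fit_scatter(X, beta, n_iter=200):
    """ML scatter matrix of a zero-mean MGGD with fixed shape beta
    (fixed-point iteration; beta = 1 reduces to the sample covariance)."""
    n, d = X.shape
    S = np.cov(X, rowvar=False)
    for _ in range(n_iter):
        u = np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)
        S = (beta / n) * (X * u[:, None] ** (beta - 1.0)).T @ X
    return S

def log_lik(X, S, beta):
    """Log-likelihood for p(x) ~ |S|^{-1/2} exp(-(x' S^{-1} x)^beta / 2)."""
    n, d = X.shape
    u = np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)
    log_c = (gammaln(d / 2.0) + np.log(beta)
             - (d / 2.0) * np.log(np.pi) - gammaln(d / (2.0 * beta))
             - (d / (2.0 * beta)) * np.log(2.0))
    return n * log_c - 0.5 * n * np.linalg.slogdet(S)[1] - 0.5 * np.sum(u ** beta)

# synthetic three-channel "wavelet coefficients" (Gaussian here, so beta ~ 1 wins)
rng = np.random.default_rng(3)
A = np.array([[1.0, 0.6, 0.3], [0.0, 0.8, 0.4], [0.0, 0.0, 0.5]])
X = rng.normal(size=(20000, 3)) @ A.T

betas = np.linspace(0.4, 1.6, 13)
lls = [log_lik(X, fit_scatter(X, b), b) for b in betas]
print("selected beta:", betas[int(np.argmax(lls))])   # close to 1 for Gaussian data
print(fit_scatter(X, 1.0))                            # ~ A A^T, the true covariance
```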
in this paper, we use kld to quantify the difference of mggs distributions between a distorted image and a perfect quality reference image .we then examine how this quantity correlates with perceptual image quality for a wide range of distortion types .first we introduce the kld between two distributions and that is denoted by do and vettterli ( 2002 ) have derived a closed - form expression for the kld between two univariate zero - mean generalized gaussian .the kld between two univariate generalized gaussian distribution characterized by and , where the dispersions ( reduced to in the univariate case ) is given by +\\ \left(\frac{2^{\frac{1}{2\beta_{1}}}\sigma_{1}}{2^{\frac{1}{2\beta_{2}}}\sigma_{2}}\right)^{2\beta_{2}}\frac{\gamma\left(\frac{2\beta_{2}+1}{2\beta_{1}}\right)}{\gamma\left(\frac{1}{2\beta_{1}}\right)}-\frac{1}{2\beta_{1}}. \end{split}\ ] ] when , the density reduces to the multivariate zero - mean gaussian distribution . in this case , the kld between multivariate zero - mean gaussian distributions is known since long ( kullback , 1968 ) .it s given by .\ ] ] where is the covariance matrix .to our knowledge there is no analytic expression for the kld between two multivariate zero - mean ggds .however , a closed form for the kld between two bivariate zero - mean ggds parameterized by and has been proposed ( verdoolaege and al , 2009 ) . this quantity denoted defined by : \\ + \ln \left[\frac{\beta_{1}}{\beta_{2}}\right]-\left(\frac{1}{\beta_{1}}\right)+2^{\frac{\beta_{2}}{\beta_{1}}-1}\frac{\gamma\left(\frac{\beta_{2}+1}{\beta_{1}}\right)}{\gamma\left(\frac{1}{\beta_{1}}\right)}\\ \times\left(\frac{\gamma_{1}+\gamma_{2}}{2 } \right)^{\beta_{2 } } f_{1}\left(\frac{1-\beta_{2}}{2},\frac{-\beta_{2}}{2};1;a^{2 } \right ) .\end{split}\ ] ] where , represents the gauss hypergeometric function ( abramowitz and stegun , 1965 ) which may be tabulated for and for realistic values of . is the eigenvalues of while .the shiftable multi - scale transforms is represented with three orientations and four scales to get twelve subbands .here , we propose to exploit conjointly the dependencies between the subbands and the dependencies between the color components .the multivariate generalized gaussian distribution ( mggd ) is used in order to model the joint statistics of color - image wavelet coefficients in mggd ( ) . in the following, we will assume that in ( 2 ) is a matrix with the size , where is the size of the subbands .so , automatically the size of is . to take in consideration the dependencies between the subbands and minimize the number of parameters to be transmitted and the execution time, we choose , two orientations from each scale , as shown in figure 1 . + the diagram summarizing our approachis presented in figure 2 . in the followingwe will consider to minimize the number of the parameters sent to the receiver side and we will use the kld only between the covariance matrices .+ finally , when we get the difference between the six selected subbands , we use the following equation to combine the 6 values : to produce an overall measure as follows : is a constant to control the magnitude of the distortion measure , and it is equal to 0.1 . the logarithmic function is involved here to reduce the difference between a high values and low values of , so that we can have values in the same order . 
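in the particular case used here ( shape parameter held fixed , comparison restricted to the covariance matrices ) the divergence reduces to the classical expression for zero - mean gaussians , kl = (1/2) [ tr(S2^{-1} S1) - d + ln( det S2 / det S1 ) ] , and the six per - subband values are then pooled into a single score with a logarithmic compression . the sketch below ( python ) shows both steps ; the pooling formula D = log10( 1 + (1/c) * sum_k d_k ) with c = 0.1 is our reading of the ( stripped ) equations above and is stated as an assumption , and the covariance matrices are synthetic .

```python
import numpy as np

def kld_gauss(S1, S2):
    """KL divergence between zero-mean Gaussians N(0, S1) and N(0, S2)."""
    d = S1.shape[0]
    S2_inv = np.linalg.inv(S2)
    return 0.5 * (np.trace(S2_inv @ S1) - d
                  + np.linalg.slogdet(S2)[1] - np.linalg.slogdet(S1)[1])

def pooled_score(klds, c=0.1):
    """Pool per-subband divergences into one scalar (assumed log compression)."""
    return np.log10(1.0 + np.sum(np.abs(klds)) / c)

# toy example: 6 subbands, reference vs 'distorted' 3x3 covariance matrices
rng = np.random.default_rng(4)
score_terms = []
for _ in range(6):
    B = rng.normal(size=(3, 3))
    S_ref = B @ B.T + 3.0 * np.eye(3)          # reference-side covariance
    S_dst = S_ref + 0.4 * np.eye(3)            # mild distortion of the statistics
    score_terms.append(kld_gauss(S_dst, S_ref))

print(score_terms)
print("overall measure:", pooled_score(score_terms))
```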
in this section we evaluate the performance of the proposed method using the tid 2008 database . this benchmark is intended for the evaluation of full - reference image visual quality assessment metrics . it allows one to estimate how a given metric correlates with mean human perception . for example , in accordance with tid 2008 , the spearman correlation between the metric psnr ( peak signal to noise ratio ) and mean human perception ( mos , mean opinion score ) is 0.525 . tid 2008 contains 25 reference images and 1700 distorted images ( 25 reference images with 17 types of distortions and 4 levels of distortions ) . these distortion types are : additive gaussian noise , additive noise in color components , spatially correlated noise , masked noise , high frequency noise , impulse noise , quantization noise , gaussian blur , image denoising , jpeg compression , jpeg2000 compression , jpeg transmission errors , jpeg2000 transmission errors , non - eccentricity pattern noise , local block - wise distortions of different intensity , mean shift ( intensity shift ) , and contrast change . figure 3 shows one reference image from the tid 2008 benchmark with three distortion types .+ the quality of each image in tid 2008 has been graded by the mean opinion score ( mos ) derived from psychophysical experiments . we use here the spearman rank correlation coefficient to estimate the correlation between the mos and its prediction according to the proposed metric ; it is given by : where is the difference between the image ranks in the subjective and objective evaluations . the spearman rank correlation coefficient ( srcc ) is a non - parametric correlation metric , independent of any monotonic nonlinear mapping between the subjective and the objective score .+ we also use the pearson linear correlation coefficient ( plcc ) defined by : where is the subjective score of the image , is the objective score of the image and ( , ) denote respectively the averages of ( , ) . the predicted mos are computed from the values generated by the objective measure . we use a non - linear function proposed by the video quality experts group ( vqeg ) phase i fr - tv with five parameters . the expression of this logistic function is given by : tables i and ii show , respectively , the spearman rank correlation and the pearson linear correlation coefficients for each type of distortion in the tid 2008 database for our method , three fr methods , and two rr methods , for comparative purposes .+ additionally , the number of features extracted from the original image by each method is reported in the row denoted by number of scalars in both tables in order to have a fair comparison ( l denotes the image size ) . we remark that the quantity of information used by our method is smaller than for the others , except for the wnism method . as reported , only 54 features ( six subbands , with nine parameters for each subband ) are sufficient to reach a good level of performance . table i shows that our method offers a good trade - off among the existing methods for a great variety of distortion types . for example , in the case of the gaussian blur , the quality score is higher than for wnism and vif and smaller than for the others . our approach is also shown to be highly efficient for the distortion types that attack the color components , such as additive noise in color components . if we compare our approach with wnism , the results show the superiority of our method in the great majority of cases ( 12 times out of 15 ) .
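the evaluation protocol itself is easy to reproduce : spearman and pearson correlations between the objective scores and the mos , with the objective scores first passed through a vqeg - style logistic mapping . the sketch below ( python / scipy ) uses one commonly quoted five - parameter form of that mapping , mos_p = b1 ( 1/2 - 1/(1 + exp(b2 (x - b3))) ) + b4 x + b5 , fitted by non - linear least squares ; this form is our stand - in for the ( stripped ) expression in the text , and the scores used here are synthetic .

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic5(x, b1, b2, b3, b4, b5):
    """Five-parameter monotonic mapping applied before computing the PLCC."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

rng = np.random.default_rng(5)
objective = rng.uniform(0.0, 3.0, size=68)                # e.g. 17 distortions x 4 levels
mos = 6.0 / (1.0 + np.exp(1.5 * (objective - 1.5))) + rng.normal(0.0, 0.3, 68)

srcc, _ = spearmanr(mos, objective)
p0 = [np.ptp(mos), 1.0, np.mean(objective), 0.0, np.mean(mos)]
popt, _ = curve_fit(logistic5, objective, mos, p0=p0, maxfev=20000)
plcc, _ = pearsonr(mos, logistic5(objective, *popt))
print("SROCC = %.3f   PLCC (after logistic mapping) = %.3f" % (srcc, plcc))
```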
only in the case of the jpeg compression , jpeg transmission errors and contrast change artefacts .

table ii reports the plcc values . as we can see , the correlation coefficient values of our method are close to the ones obtained with the full-reference methods . among the reduced-reference methods , the superiority of our method over wnism is clear , and with respect to the rred method the proposed metric performs better for eight distortions . however , the proposed measure fails for the non eccentricity pattern noise distortion . for each type of distortion under evaluation , the plcc and srcc are computed from the scatter plots , where each point in the plots represents one test image . the vertical axis is the mos obtained from the psychophysical experiments and the horizontal axis represents the values of the corresponding distortion measure . if the prediction were perfect , the points would lie on the line . figure 4 shows the scatter plot for the distortion type named "masked noise" with our method . we can observe that for a great number of images the values are close to the line and that some points are placed on the line . figure 5 illustrates the same type of results for the distortion "quantization noise" in tid 2008 . we can observe that for this type of distortion the points are closer to the line , which puts in evidence the adequacy of the proposed metric in this situation .

in this paper , we proposed an rr measure based on the multivariate generalized gaussian distribution . the mggd is intended to handle the dependencies between the color components . the kullback-leibler divergence between two mggds is used to compute the visual quality degradation . the results of an extensive comparative evaluation show that the method is very effective for a broad range of distortion types . furthermore , just a limited quantity of information from the reference image is required in order to compute the quality assessment measure . with only 54 parameters from the reference image , the algorithm achieves a performance which is nearly as good as that of the best performing full-reference quality assessment methods . future work will concern other color spaces and an extension to video quality assessment .

z. wang , e. simoncelli , reduced-reference image quality assessment using a wavelet-domain natural image statistic model , in : proceedings of spie 5666 ( human vision and electronic imaging x ) , 2005 , pp. 149-159 . q. li and z. wang , reduced-reference image quality assessment using divisive normalization-based image representation , ieee journal of selected topics in signal processing : special issue on visual media quality assessment , vol. 3 , pp. 202-211 , 2009 . m. do and m. vetterli , wavelet-based texture retrieval using generalized gaussian density and kullback-leibler distance , ieee transactions on image processing , vol. 2 , pp. 146-158 , february 2002 . geert verdoolaege , paul scheunders : geodesics on the manifold of multivariate generalized gaussian distributions with an application to multicomponent texture discrimination . international journal of computer vision 95(3) : 265-286 ( 2011 ) . n. ponomarenko , m. carli , v. lukin , k. egiazarian , j. astola , f. battisti , color image database for evaluation of image quality metrics , int.
workshop on multimedia signal processing , australia , oct. 2008 . geert verdoolaege , yves rosseel , michiel lambrechts , paul scheunders : wavelet-based colour texture retrieval using the kullback-leibler divergence between bivariate generalized gaussian models . icip 2009 : 265-268 .
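as a complementary illustration of the evaluation protocol used in the experiments above ( this is a sketch , not the original evaluation code ) , the following fragment computes the spearman rank correlation , fits a five-parameter vqeg-style logistic mapping , and reports the pearson correlation between the mapped objective scores and the mos . the particular logistic expression , the starting values of the fit and the toy data are assumptions made only for this example .

```python
# minimal sketch of the srcc / plcc evaluation protocol (not the authors' code).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import rankdata

def srcc(objective, mos):
    """spearman rank correlation: pearson correlation of the ranks."""
    return np.corrcoef(rankdata(objective), rankdata(mos))[0, 1]

def logistic5(x, b1, b2, b3, b4, b5):
    # one common vqeg-style five-parameter mapping (assumed form)
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def plcc_after_mapping(objective, mos):
    p0 = [np.max(mos), 1.0, np.mean(objective), 1.0, np.mean(mos)]
    params, _ = curve_fit(logistic5, objective, mos, p0=p0, maxfev=20000)
    predicted = logistic5(objective, *params)
    return np.corrcoef(predicted, mos)[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy stand-in for (objective distortion measure, mos) pairs of one distortion type
    objective = rng.uniform(0.0, 5.0, 100)            # e.g. a kld-based distance
    mos = 7.0 - 1.1 * objective + rng.normal(0.0, 0.3, 100)
    print("srcc:", srcc(objective, mos))
    print("plcc (after logistic mapping):", plcc_after_mapping(objective, mos))
```

in an actual evaluation , `objective` would hold the distortion measure computed for the images of one tid 2008 distortion type and `mos` the corresponding subjective scores .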
|
this paper deals with color image quality assessment in the reduced-reference framework based on natural scene statistics . in this context , we propose to model the statistics of the steerable pyramid coefficients by a multivariate generalized gaussian distribution ( mggd ) . this model makes it possible to take into account the high correlation between the components of the rgb color space . for each selected scale and orientation , we extract a parameter matrix from the three color component subbands . in order to quantify the visual degradation , we use a closed form of the kullback-leibler divergence ( kld ) between two mggds . using the tid 2008 benchmark , the proposed measure has been compared with the most influential methods according to the frtv1 vqeg framework . the results demonstrate its effectiveness for a great variety of distortion types . among other benefits , this measure uses only very little information about the original image . steerable pyramid , color space , natural scene statistics , multivariate generalized gaussian distribution , maximum likelihood .
|
one of the salient features of the stochastic evolutionary game dynamics for finite populations is the _ fixation _ .that is , no matter how the initial strategies are distributed in a population , the system will eventually be fixated to only one strategy . in general , this phenomenon can be theoretically explained in terms of a markov process with absorbing state(s ) : the limiting theory of markov processes tells us that the finite - state markov chains with absorbing states will eventually be trapped into one of the absorbing states as time passes .the well - known wright - fisher model , moran process and the pairwise comparison processes all belong to this class .a further classification of darwinian selection scenarios based on the fixation descriptions has already been established in stochastic game dynamics .another important class consists of evolutionary dynamics with _ mutations _ where an ergodic mutation - selection equilibrium can be reached .the latter is out of the scope of our study ; however , the _ landscape _ introduced in the present work provides a unified perspective for both classes of processes .in addition to the ultimate fixation , attention should also be paid to the time - dependent , pre - fixation transient behavior for several reasons : on one hand , the process to fixation is intimately dependent upon both the transient movements before absorption and the one last step to fixation .that is , studying the transient dynamics provides important insights into the final fixation behavior . on the other hand ,sometimes the time for a true fixation is too long to be observed and the relevant time scale can be shorter . in this case , the transient dynamics provides a more appropriate description . furthermore , examinations of transients can yield a mechanistic understanding of the _ persistence _ and _ coexistence _ in complex biological dynamics , especially in ecosystems .indeed , it has been found that the pre - fixation transient dynamics could be an essential explanatory aspect of characterizing the stochastic fluctuations raised from finite populations .the theory of quasi - stationarity is a widely applied , standard technique of studying the pre - fixation process .it defines the subchain with the absorbing states removed .based on this approach , we present an extended analysis for the transient dynamics of the well - mixed frequency - dependent moran process .an ergodic _ conditional _ stationary distribution is used to characterize the pre - fixation process . 
as a result of the law of large numbers , this stationary distribution approaches to a singular distribution in the infinite population limit .the corresponding large deviation rate function , which is population - size independent , is shown to be a landscape .this _ transient landscape _ has a lyapunov property with respect to the corresponding deterministic replicator dynamics , providing a potential - like function for visualizing the transient stochastic dynamics .ideas related to the transient landscape of moran process have been discussed in the past : claussen and traulsen studied non - gaussian stochastic fluctuations based on the conditional stationary distribution .it is also a general feeling that one can use the negative logarithm of the stationary , or conditional stationary distribution as the potential in evolutionary dynamics , following an analogue to boltzmann s law in statistical mechanics .however , it is important to point out that a stationary distribution usually collapses to singular supports in the infinite population limit , while our large deviation rate function is supported on the whole space and it is independent of system s size . therefore , in terms of the analogue to boltzmann s law , we are effectively identifying the system s size as the inversed temperature which tends to infinity for a deterministic limit .even though our analysis is based on the one - dimensional moran process , this idea is general .it can be applied to many other multi - dimensional evolutionary game dynamics with finite populations , with or without detailed balance . for the latter case, the landscape itself is an _emergent property _ of the dynamics . with respect to moran process, also discovered the expression of from a different origin , via their approximated calculation of fixation probability for large population size .we shall show that this connection is a nice mathematical property of the function for the processes in one - dimensional case , but its generalization to multi - dimensional cases is not obvious . more specifically , for multi - dimensional systems with multiple alleles , the fixation probability does not naturally give a landscape .the large deviation rate function , however , can be generalized to multi - dimensional markov processes , as indicated by the freidlin - wentzell theory .the landscape we introduced is also consistent with the landscape theory for other population dynamics , e.g. , chemical , that is ergodic without fixation .there are two fundamentally different types of movements in this landscape that require separated attention .( ) `` downhill movements '' which have deterministic counterparts : the local minima ( _ transient attractors _ ) in this landscape correspond to the stable points in the replicator dynamics .that is , these transient attractors are in direct agreement with the _ evolutionarily stable strategies _( esss ) .( ) `` uphill movements '' which are rare and without a deterministic correspondence .in general , rare events take exponentially long time ; one needs to take multiple time scales into consideration in understanding the appropriate fluctuation descriptions for the transient dynamics as well as eventual fixation .this is particularly relevant in the anti - coordination games .furthermore , the concept of _ stochastic bistability _ is studied in the coordination games . 
in this case , the downhill and uphill movements in the landscape dominate `` intra - attractoral '' and `` inter - attractoral '' dynamics respectively .it is shown that a maxwell - type construction from classic phase transition theory in statistical physics is necessary as the population size tends to infinity , i.e. , only one of attractors should be singled out in such a construction | it corresponds to the global minimum of the landscape .this is not present in the bistable deterministic dynamics ; it raises the novel issue of _ ultimate fixation_. it did not escape our notice that it is the _ exponentially long - time search _ that ultimately finds the global minimum in a `` non - convex optimization '' .another important issue directly relating to the transient dynamics is the diffusion approximation . with the conventional truncation of kramers - moyal ( km ) expansion, the discrete stochastic moran process for large populations has been approximated by a stochastic differential equation , with absorbing dirichlet boundary conditions .if one replaces the absorbing boundary conditions with the reflecting ones , we can also derive an ergodic stationary distribution from the fokker - planck equation of this diffusion process .it will be shown that this stationary distribution is in fact the `` conditional '' stationary distribution for the process with absorbing boundary conditions . in a comparison of the transient dynamics between the original moran process and its continuous counterpart, it is shown that even though the km diffusion is valid in finite time as a local dynamical approximation , it could lead to incorrect approximation in global inter - attractoral dynamics .in bistable game systems , particularly , the km diffusion could single out a different stable point from that of the original process for large but finite populations .moreover , enlightened by hnggi et al.s work , we also consider their diffusion approximation that provides the correct global dynamics .however , this diffusion process gives incorrect finite time stochastic dynamics .now we have a _ diffusion s dilemma _ : the truncated km diffusion gives the correct finite time stochastic dynamics as the original moran process with large population size ( this is guaranteed both by the so called van kampen s system size expansion and kurtz s theorem ) , but wrong stationary distribution . on the other hand ,hnggi et al.s diffusion , which is unique in providing the correct stationary distribution as well as deterministic limit , is wrong for the finite time stochastic dynamics . to further illustrate this diffusion s dilemma ,a simple example is present . by investigating the first passage times, it is found that the failure of exponential approximation in the uphill movement could be the origin of the difficulties of diffusion approximation . 
in other words ,diffusion approximation is a second - order polynomial expansion for the kolmogorov forward equation of the original discrete process , which can give the correct gaussian dynamics near the stable point ; however , the inter - attractoral global dynamics , determined by the barrier crossing events with exponential small probabilities , should be approximated in the level of exponential asymptotics .this paper is organized as follows : in sec .ii , we introduce the frequency - dependent moran process .then we give the transient description of the moran process in sec .iii , where the transient landscape is constructed .it is shown that this landscape as a `` glue '' holds the deterministic replicator dynamics , the fixation and the problem of maxwell - type construction together .diffusion s dilemma is discussed in sec .the discussions are included in the last section .to study evolutionary game theory in finite populations , nowak et al . generalized moran s classical population genetic model by using frequency - dependent fitness .consider a population of individuals playing a symmetric game with strategies and , the payoff matrix is where all the entries in the matrix are assumed to be non - negative . if players follow strategy , and play , the the average payoff of an individual of is where self - interaction is excluded , and also for is fitness is assumed to be a linear combination of background fitness and the payoff as follows : where ] and increases on ] and decreases on ] .so ( or x=0 ) is the minimal point .another seemingly trivial case is when and , which is of limited interest in the deterministic dynamics .however , this neutral case with _ flat _landscape becomes important in the context of stochastic dynamics .an interesting result will be obtained for this case in connection to diffusion approximation ( sec .[ diff_app ] ) .note that the rescaled conditional stationary distribution can be expressed as the landscape visualizes the transient dynamics : the transient system should spend a majority of time around local minimal point(s ) in the landscape .so we term the minimal point(s ) as the _ transient attractor(s)_. in the literature of physics , the transient attractor(s ) show the properties of _ metastablity _ .that is , although the `` downhill movement '' towards the local minimum in the landscape maintains the stability of the attractor , the `` uphill movement '' of crossing the barrier will drive the system to move from the local attractor to another on a larger time scale . with this observation , we will discuss the fixation from the viewpoint of the transient landscape .it is known that the process to fixation is intimately dependent upon both the transient movements before absorption and the one last step to fixation .thus we have two cases : first , the fixation is an inherent result derived directly from the transient process .second , the fixation shows distinctly different behavior from the transient process , i.e. , the final fixation does not end up with attractive absorbing , but one last `` unnatural '' step to fixation .based on the different generic cases of the transient landscape , it is found that in the _ uphill _ or _ downhill _ case , one of the two absorbing states is located at the transient attractor , we term this kind of absorbing state as the _ attractive absorbing state _ ; the other absorbing state is called as the _ rare absorbing state_. 
this classification of the absorbing states is directly linked to the work by antal et al .we now denote the probability of fixation at , before reaching and starting from the initial state , by .similarly , fixation probability at starting from is denoted by .the explicit expression of , for example , can be derived from the following difference equation : with two boundary conditions then we have where . when is sufficiently large , where .therefore , eq .( [ eq_16 ] ) establishes a connection between our the transient landscape with fixation probability . as pointed out by , in the _ downhill_ case that and , where this result shows that the uphill fixation from to is a rare event with exponentially small probability , while the downhill fixation from to is a rather easy trip .this corresponds to our `` rare '' or `` attractive '' definition of the absorbing states .similarly , in the _ uni - barrier _ case , both and are the attractive absorbing states , whereas the barrier crossing probability from each side to another is exponentially small .in the _ uni - well _ case , however , the only transient attractor is located at the mixed state . in this case , the fixation is not an immediate result of the transient attraction .antal et al . shows that the fixation time in this case is exponentially large with population size ; while in the other two cases the fixation times have the same approximated order .this result is also completely in line with our classification of the fixation .the mismatch between the mixed transient attractor and the final absorbing fixation leads to multiple time scales issue in the process of evolution .comparative studies of the mean first passage time to the attractor and the fixation time have been carried out in , showing the separation of the transient attractive time scale and the fixation time scale .multiple time scales issue is of great importance in understanding evolutionary systems , especially in explaining the coexistence and extinction of species in ecological systems .it has been reported that the relevant time scale to explain the coexistence of species in plankton is found in the short term ( within a single season in their models ) .the time until species being extinct can be much longer than a single season .accordingly , the coexistence can be explained here as a transient phenomenon .the mixed transient attractor , as the stable equilibrium in the transient dynamics , should be more relevant within a reasonable time scale . to realize the final fixation , the system has to escape from the attractor through going uphill on the landscape , collecting many unfavorable moves consecutively , for an extremely long time .bistability ( or multistability ) is one of the most interesting phenomena in the nonlinear systems .for example , consider the replicator dynamics ,\ ] ] bistability arises when and . in this case , and are both stable , separated by the unstable fixed point .therefore , the characterizations of the bistability in the deterministic nonlinear systems should contain two things : one is _ where the attractors are _ , the other is the _ basins of attraction_. one major problem in evolutionary game theory is the selection of multiple evolutionary stable strategies . 
in the bistability case of the deterministic dynamics ,the limiting behavior is determined by its initial state .that is , the measurement of the stability is closely dependent on the basins of attraction .the stable point with the larger basin of attraction can be seen as the _ risk - dominant _ strategy . in the context of stochastic evolutionary game dynamics, we can also discuss the noise - induced bistable phenomenon .the bistability in the replicator dynamics corresponds to the _ uni - barrier _ case in the transient landscape , where both and are the local minimal points in this landscape , separated by the barrier .we term this case with two transient attractors as the _stochastic bistability_. furthermore , not only does the landscape cover the characterizations of the bistability in the replicator dynamics , but we can also give a straightforward comparison to these two stable states based on this landscape . from eq .( [ m - potential ] ) ,\ ] ] so and \\ & = \int_0 ^ 1\ln(1-w+w((c - d)y+d))dy-\int_0 ^ 1\ln(1-w+w((a - b)y+b))dy.\end{aligned}\ ] ] without loss of generality , we set , then =\int_0 ^ 1\ln\left[cy+d(1-y)\right]dy \\ & \longleftrightarrow \frac{b\ln b - a\ln a}{b - a}=\frac{d\ln d - c\ln c}{d - c}.\end{aligned}\ ] ] we term this condition as the _ maxwell - type construction _ .note that so when , when , thereby , even a slight difference between and in the transient landscape can leads to a extreme disparity in the distribution ( see fig .it is observed that except for the critical case , the system will select only one attractor , the global one , as the unique stable state with the increase of the population size . in other words, the maxwell - type construction always singles out the global minimum in the system , providing another useful criterion for the equilibrium selection .discrete markov chain treatment of biological population systems is necessary for relatively small populations . for large populationsit is convenient and desirable to apply a continuous approximation . beyond the replicator deterministic dynamics as a continuous limit, a diffusion - type process has long been much sought after . however , an important problem arising is the relation between the original discrete markov chain and its approximated representation in term of a diffusion process .the perspective of multiple time - scale dynamics in the previous section provides a better understanding of this important problem .the insights we gained from the transient descriptions leads naturally to a comparative study of the original discrete - state moran process and its continuous - path counterpart .it is known from the kramers - moyal diffusion theory in physics that the moran process for large population size can be approximated by a stochastic differential equation .moran process is a discrete - time , discrete - state markov process ; its kolmogorov forward equation ( sometimes called master equation ) has the form : when is large , we take the scalings , , and the probability density is ( we still write as ) . by performing the truncated kramers - moyal ( km ) expansion of eq . ([ master equ ] ) , we have the following approximated fokker - planck equation : this corresponds to the stochastic differential equation where is a brownian motion . 
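to make this approximation concrete , the following sketch ( an illustration , not code from the paper ) simulates the discrete frequency-dependent moran process directly and integrates a kramers-moyal-type sde with drift T+(x) - T-(x) and noise amplitude sqrt( ( T+(x) + T-(x) ) / N ) by euler-maruyama , which is one common scaling of the truncated expansion . the payoff values , the selection intensity w , the population size and the time scaling are illustrative assumptions ; the transition probabilities are the standard frequency-dependent moran rates , not the paper's elided expressions reproduced verbatim .

```python
# sketch: discrete frequency-dependent moran process vs. an euler-maruyama
# integration of a kramers-moyal-type sde. all parameter values are illustrative.
import numpy as np

a, b, c, d = 3.0, 1.0, 2.0, 2.5      # hypothetical payoff matrix [[a, b], [c, d]]
w, N = 0.8, 100                      # selection intensity and population size

def rates(i):
    """standard frequency-dependent moran transition probabilities t+(i), t-(i)."""
    if i == 0 or i == N:
        return 0.0, 0.0
    pa = (a * (i - 1) + b * (N - i)) / (N - 1)   # average payoff of an a-player
    pb = (c * i + d * (N - i - 1)) / (N - 1)     # average payoff of a b-player
    fa, fb = 1 - w + w * pa, 1 - w + w * pb      # fitnesses
    tot = i * fa + (N - i) * fb
    return (i * fa / tot) * (N - i) / N, ((N - i) * fb / tot) * i / N

def moran_path(i0, generations, rng):
    """direct simulation; one generation is taken as N elementary update steps."""
    i = i0
    for _ in range(generations * N):
        tp, tm = rates(i)
        u = rng.random()
        i += 1 if u < tp else (-1 if u < tp + tm else 0)
    return i / N

def km_path(x0, generations, rng, dt=0.01):
    """euler-maruyama for dx = (t+ - t-) dt + sqrt((t+ + t-)/N) dW (assumed scaling)."""
    x = x0
    for _ in range(int(generations / dt)):
        tp, tm = rates(int(round(x * N)))
        x += (tp - tm) * dt + np.sqrt((tp + tm) / N) * np.sqrt(dt) * rng.normal()
        x = min(max(x, 0.0), 1.0)
    return x

rng = np.random.default_rng(1)
print("moran process, x after 200 generations:", moran_path(N // 2, 200, rng))
print("km sde,        x after 200 generations:", km_path(0.5, 200, rng))
```

the payoff values above define a coordination-type ( bistable ) game , so individual runs typically settle near one of the two attractors .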
in the limit of infinite population size , it is easy to see that eq .( [ sde ] ) becomes the deterministic replicator dynamics in this way , this diffusion approximation links the stochastic moran process and the macroscopic nonlinear equation .we should note that the above truncated km expansion is performed by taking the same scaling step of time and space with .however , when and , i.e. the _ neutrality _ case , the transient landscape is flat with for any in this case , the above scaling step is not valid any more . as a modification , we take , then by performing the truncated km expansion , we will have this can be well explained by van kampen s size expansion ( see the appendix c ) , which indicates that the scaling of the deterministic drift part should be different from that of the fluctuated diffusion part | a well - known fact for the law of large numbers and the central limit theorem .we now consider the stationary distribution of the diffusion process in eq .( [ fpe ] ) .the equation satisfied by the stationary distribution should be if the boundary conditions are reflecting , the stationary distribution can be given by where .\label{d - potential}\ ] ] however , for the moran process with absorbing boundaries and , the diffusion approximation should have corresponding absorbing boundaries : in this way , the stationary distribution in eq .( [ d - stationary ] ) is not a real final limiting , but a transient description of the diffusion process .we term ( [ d - potential ] ) as _ diffusive landscape_. we now discuss the validation of km diffusion by comparing the transient landscape and the diffusive landscape .consider the derivation of , without loss of generality we set , then =-\ln\left[\frac{(a - b)x+b}{(c - d)x+d}\right]=0,\ ] ] where is the only solution , and is stable when , then near we have meanwhile , its only solution is also , and interestingly from the above comparison , we find that and share the same extremal point and the curvature near .that is , the gaussian variance of is equal to that of , implying that the local movements near the extremal point in the diffusion process are in agreement with that in the original moran process for large populations .van kampen s expansion gives a formal argument to the local validation of diffusion approximation .consider the vk diffusion ( [ vk ] ) near , then eq . ( [ vk ] ) reduces to a time - homogeneous fokker - planck equation the gaussian process defined by ( [ linear vk ] ) yields to the following linear stochastic differential equation where both and are constant .this process is called ornstein - uhlenbeck ( ou ) process , whose stationary variance is given by this is accordance with our result that diffusion approximation gives the same local dynamics as the original moran process for large populations .we now realize that not only does km diffusion theory give the deterministic nonlinear dynamical approximation to the moran process , but it also gives a good approximation to the intra - attractoral stochastic dynamics . until now, it has been shown that the km diffusion approximation correctly describe two kinds of dynamics : 1 ) deterministic nonlinear dynamics ; 2 ) local stochastic dynamics . 
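the local agreement just described can be checked numerically . the sketch below ( again an illustration under stated assumptions , not the authors' code ) builds the transient landscape from the standard birth-death product form of the interior stationary measure , phi(i/N) = -(1/N) sum_j ln[ T+(j-1) / T-(j) ] , and the diffusive landscape of the truncated km equation with reflecting boundaries , proportional to -2 times the integral of (T+ - T-)/(T+ + T-) ; near an attractor the two share their extrema and curvature , while for bistable payoffs their global minima , and hence the maxwell-type construction , can differ , which is the subject of the next section . the payoff matrix and selection intensity are invented for the example .

```python
# sketch: exact (product-form) transient landscape vs. the km diffusive landscape.
import numpy as np

a, b, c, d = 3.0, 1.0, 2.0, 2.2      # hypothetical coordination-game payoffs
w, N = 0.9, 400

def rates(i):
    pa = (a * (i - 1) + b * (N - i)) / (N - 1)
    pb = (c * i + d * (N - i - 1)) / (N - 1)
    fa, fb = 1 - w + w * pa, 1 - w + w * pb
    tot = i * fa + (N - i) * fb
    return (i * fa / tot) * (N - i) / N, ((N - i) * fb / tot) * i / N

# exact landscape from the birth-death product form of the interior measure
log_pi = np.zeros(N - 1)                      # interior states i = 1 .. N-1
for i in range(2, N):
    tp, _ = rates(i - 1)
    _, tm = rates(i)
    log_pi[i - 1] = log_pi[i - 2] + np.log(tp / tm)
phi_exact = -log_pi / N

# diffusive landscape of the truncated km equation with reflecting boundaries
phi_km = np.zeros(N - 1)
for i in range(2, N):
    tp, tm = rates(i)
    phi_km[i - 1] = phi_km[i - 2] - 2.0 * (tp - tm) / (tp + tm) / N

x = np.arange(1, N) / N
print("exact landscape: minimum at x =", x[np.argmin(phi_exact)])
print("km landscape:    minimum at x =", x[np.argmin(phi_km)])
print("interior barrier at x =", x[np.argmax(phi_exact)], "vs", x[np.argmax(phi_km)])
```

the barrier and attractor locations agree , since both constructions are stationary where T+ = T- ; the difference shows up in the exponents , i.e. in the relative depths of the wells .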
in this section, we will further investigate the diffusion approximation for global dynamics .consider the _ uni - barrier _ case when and .in this bistable game system , the comparison of different stable strategies is intimately related to the maxwell - type construction , which is dependent on the global inter - attractoral dynamics .the maxwell - type construction indicates that except for the critical case , only one strategy should be selected as the unique stable one .therefore , different constructions could lead to different global dynamical behavior . for , it has been shown that for the diffusive landscape , however , =0\\ & \longleftrightarrow 2\ln\left[\frac{a+c}{b+d}\right ] = \frac{(a - b - c+d)(a - b+c - d)}{(a - b)d-(c - d)b}.\end{aligned}\ ] ] fig .3 shows a simple example : with the payoff parameters that make , . in this case, the original moran process and its diffusion approximated process will select different transient attractors in large population size .different global minimum searches lead to different strategy selections .therefore , for the evolutionary game systems with multiple stable equilibria , the validity of this diffusion approximation becomes questionable in global dynamics .in fact , it is not very surprised to see the global dynamical inconsistency between the moran process for large populations and km diffusion , since their different large deviation functions result in different exponential tails of their stationary distributions , which is intimately related to the inter - attractoral dynamics consisting of barrier crossing movements from one attractor to another . to illustrate this problem, we consider a simple birth - death process with birth rate and death rate , i.e. the transition rates are independent of the states . has a reflecting boundary at and an absorbing boundary at .we are interested in the first passage time from to . in this simple model , there are three kinds of movements from to ( let ) : downhill ( ) , uphill ( ) and flat ( ) .it is not difficult to have that let the space step between to be , and let and , but , then we have a fokker - planck equation where and .the corresponding first passage time for ( [ fp ] ) is .\ ] ] now discretizing as , we have comparing and more specifically , we have and \frac{1}{2 } & if~\theta>1 \end{array}\right.\ ] ] from the above comparison of and , we find that they both approach to in the downhill dynamics ; while in the uphill dynamics , both and share the exponential form of , but different exponential parameters .this is the heart of our example .we should note that , for the bistable systems , the maxwell - type constructions are determined by the jump processes between these two attractors ( back and forth ) , which are both rare events with exponentially long time to happen . according to the above disparity between and in the uphill dynamics, km diffusion approximation can not give the exponent correctly , and then results in representing the inter - attractoral dynamical inaccurately .we suggest this as the reason for the invalidity of the diffusion approximation for the global dynamics and landscape . according to kurtz s theorem , km s diffusion theory can be mathematically justified only for any finite time . 
in other words ,( [ fpe ] ) correctly approximates the finite - time moran process for large but finite populations , whereas it is not guaranteed that they share the same long - term stationary behavior .therefore , the difficulty encountered by km s diffusion in bistable game systems stems from the fact that exchanging the limits of population size and time is problematic .it concerns with non - uniform convergence of kurtz s result .a natural question is whether one can find a diffusion process that gives both satisfactory finite - time and stationary dynamical approximation .hnggi et al . proposed a very different diffusion process in the context of chemical master equation : the heuristic derivation of eq .( [ fpe-2 ] ) is based on onsager s theory .then the stochastic potential for the system should be the transient landscape , and the _ thermodynamic force _ is therefore , the macroscopic ordinary differential equation should be so and the diffusion coefficient proportional to . in order to distinguish hnggi et al.s from km s , we term eq .( [ fpe-2 ] ) as hgtt s diffusion .it is easy to show that eq .( [ fpe-2 ] ) gives the same large deviation function as the original moran process .moreover , by comparing the drift coefficients of ( [ fpe ] ) and ( [ fpe-2 ] ) , hgtt s and km s describe the same ode when tends to infinity . for the diffusion coefficients : it is easy to find that hgtt s diffusion coefficient is always smaller than that of km s ( see fig .4 ) , except when near . so away from the extremal point , hgtt s diffusion shows different finite - time stochastic dynamics from km s .note that km s diffusion gives the correct finite - time dynamical approximation of the original moran process , hgtt s could then show a wrong short - term dynamics for most of the initial states .therefore , our diffusion dilemma can be stated as follows : can we find an approximated diffusion process correctly describe the whole three dynamical regimes : ( a ) the deterministic limit ; ( b ) the short time stochastic dynamics ; ( c ) long time global dynamics ?for truncated km approximation ( and van kampen s expansion ) , the ( a ) and ( b ) are correct for each and very attractor , but ( c ) is wrong . for hgtt s diffusion , ( a ) and ( c )are correct , but ( b ) is wrong .so we can not find a diffusion process that provides all the three correctly .stochastic dynamics have become a fundamental theory in understanding darwinian evolutionary theory . besides _ nonlinearity _ , _stochasticity _ has been shown as another basic feature of complexity in biological world , especially within the scale of cellular dynamics .stochastic evolutionary game dynamics , as _ agent - based _ models to describe the kinetics in polymorphic population systems , offer a framework to study the _ frequency - dependent selection _ in evolution .the present paper discuss the well - mixed stochastic evolutionary game dynamics from the viewpoint of the _ transients_. 
the transient landscape , as a potential - like representation of the pre - fixation dynamics , has been constructed via the conditional stationary distribution in the theory of quasi - stationarity in terms of the large deviation rate function .the involvement of large deviation theory from probability is essential here , for without it , the landscape would be system s size dependent .it has been shown that this transient landscape can play a central role in connecting the deterministic replicator dynamics , the final fixation behavior and diffusion approximation . as a lyapunov function of the replicator dynamics, the transient landscape visually captures the infinite - population nonlinear behavior .the downhill movements in this landscape corresponds to the dynamics of its deterministic counterpart , whereas the rare uphill movements arising from the random fluctuations are of more interest in stochastic evolutionary systems . to capture the eventual fixation behavior from the transient perspective, we have classified the absorbing states into two cases : the attractive absorbing state which is located at the transient attractor ; the other rare absorbing state which is located at the top of the landscape .the former is an inherent result of the transient downhill dynamics , while the latter is related to the multiple time scale issue , that is , the final fixation time scale is separated from the transient coexistence quasi - stationarity .furthermore , the maxwell - type construction and diffusion approximation are both important problems linking to the transient dynamics .the maxwell - type construction is a global description of nonlinear bistable stochastic dynamics , which is not present in deterministic dynamics .this construction always searches the global minimum in the landscape , so it is a direct result of inter - attractoral dynamics .the comparison of the maxwell - type constructions between the original transient landscape and its diffusion counterpart indicates that the truncated km diffusion approximation could result in different global dynamics , that is , the original moran process for large populations and its diffusion counterpart could select different global stable points . in order to solve this problem ,another hgtt s diffusion has been constructed for giving the correct long - term asymptotic dynamics .however , this diffusion gives the wrong finite time stochastic dynamics . by investigating the first passage times in the simple birth - death process, it has been found that the failure of exponential approximation in the uphill movement could be a reason for our diffusion s problem .mathematically , the diffusion approximation is just a second - order polynomial expansion of the master equation , which only offers the second - order precision for the original process .accordingly , this approach can give the correct deterministic dynamics ( first order ) and gaussian dynamics near the stable point ( second - order ) .however , the inter - attractoral dynamics is determined by the rare barrier crossing movements with exponentially small probabilities , so the maxwell - type construction should be approximated in the level of exponential asymptotics , which could be out of any finite order expansions league . in the theory of probability , this is the domain of the large deviation theory .it is believed that discrete stochastic dynamics offers a new perspective on biological dynamics . 
besides the conventional concentrations on maximum - likelihood events , more attention should be paid to _rare events_. evolution itself is a process with the accumulations of various rare events , such as genetic or epigenetic mutations and ecological catastrophes .so the stochasticity is not just fluctuations near the most probable macroscopic states , but an important source of complexity , i.e. , `` innovation '' , especially on an evolutionary time scale .we thank tibor antal , ping ao and hao ge for reading the manuscript and helpful comments .discussions with jiazeng wang , bin wu and michael q. zhang are gratefully acknowledged .dz also wish to acknowledge support by the national natural science foundation of china ( 10625101 ) , and the 973 fund ( 2006cb805900 ) .quasi - stationarity is a series of stochastic mathematical techniques for analyzing the markov processes with absorbing states .the basic idea of the quasi - stationarity is to find some _ effective _distributions for characterizing the transient behavior of the process .there are basically two kinds of quasi - stationarities : conditional stationary distribution and stationary conditional one . herewe only consider the former , see for more details . in order to introduce the conditional stationary distribution ,we now add small mutations to the original moran process as follows : in this case , the process has become irreducible .further , the stationary distribution of the new chain reads : where is the normalized constant .consider it is not difficult to have that is independent of , and is just the same as in eq .( [ csd ] ) .it has been shown in that is proportional to the expected time of visits to state before absorption when started in the revival distribution .that is , characterizes the occupation time distribution of the transient dynamics .the larger , the longer the process stays at state before absorption .here we should emphasize that , given a markov chain with absorbing states , the pre - fixation occupation time distribution depends on the distribution of the states in which the chain is revived . for the birth - death process here , it is natural to choose the reviving states as neighboring the absorbing states . in this section we will show that the definition of the _ transient landscape _ in eq .( [ m - potential ] ) can be extended to more general cases .consider a multi - dimensional birth - death process with absorbing states , i.e. . the state space of this process is a -dimensional vector space , denoted as . in the generalized moran process , for instance , is the number of strategies , and is the number of individuals with strategy at time .it has been shown that still has the lyapunov property with respect to its thermodynamic limit .suppose the thermodynamic limit of can be described as the following deterministic differential equations in particular , for the generalized moran process , where is the frequency - dependent probability that an strategist is replaced by a strategist . from , we have van kampen s expansion provides another systematic method of diffusion approximation .the idea of vk expansion is that , in large population size , the number we are interested in ( e.g. 
the number of strategy ) is expected to consist of two parts : deterministic and fluctuations parts .consider the continuous time birth - death process here ( the discrete time case is similar ) , for any state , we have where is of order , is of .define the shift operators as and , so the master equation can be written as where is the birth rate , and is the death rate .now we denote the distribution of as .in fact , and we have we take the taylor expansions : where so the terms of order on either side will vanish if satisfies the equation which is just the deterministic replicator dynamics . if consider the terms of order , should obeys this is a linear fokker - planck equation whose coefficients only depend on .so van kampen s approach gives the correct dynamics conditioned on the deterministic solution .if we substitute , we can find that eq .( [ vk ] ) is exactly the same as eq .( [ fpe ] ) .( [ eq_16 ] ) shows that the fixation probabilities , and our transient landscape have the following relation : to further illustrate this , let us consider a similar relation in a diffusion process with the following stochastic differential equation with absorbing boundary conditions . as shown in sec .iv , the conditional stationary distribution can be obtained by solving the kolmogorov forward equation where the stationary distribution is and the transient landscape is .\ ] ] on the other hand , the fixation probability from to is the solution of the backward equation with boundary conditions it is not difficult to show that so we also have we now attempt to generalize the above relation in eq .( [ 2 ] ) to the more general multi - dimensional cases .consider an -dimensional diffusion process with forward equation where the absorbing boundary of is denoted as .for any , the fixation probability density at from also satisfies the backward equation its boundary condition is where is the dirac - delta function for .the conditional stationary distribution solves the in eq .( [ 48 ] ) , where .detailed balance , however , further dictates .therefore , , \label{50}\ ] ] where is our transient landscape . now consider eq .( [ 49 ] ) in the light of ( [ 50 ] ) .first we denote .it satisfies \zeta_i(\vec{x } ) + \frac{1}{2n}\sum_{i , j}^{n}b_{ij}(\vec{x})\frac{\partial}{\partial x_j } \zeta_i(\vec{x } ) \nonumber\\ & \approx & -\frac{1}{2 } \sum_{i , j}^{n } \left[b_{ij}(\vec{x})\frac{\partial } { \partial x_j}\phi(\vec{x } ) \right ] \zeta_i(\vec{x } ) + \frac{1}{2n}\sum_{i , j}^{n}b_{ij}(\vec{x})\frac{\partial}{\partial x_j } \zeta_i(\vec{x } ) . \nonumber\\ & = & -\frac{1}{2 } \sum_{i , j}^{n } \zeta_i(\vec{x})b_{ij}(\vec{x})\frac{\partial } { \partial x_j } \left [ \phi(\vec{x } ) -\frac{1}{n}\ln \zeta_i(\vec{x } ) \right].\end{aligned}\ ] ] we see a hint of eq .( [ 2 ] ) in the square bracket . for multi - dimensional problems ,the gradient of is a vector while is a scalar .therefore , it seems to us , even with detailed balance condition , the relation in eq .( [ 2 ] ) can not be generalized to multi - dimensional case . on the other hand, the definition of can be generalized to multi - dimensional case ( see appendix b ) , even though finding it will be hard .00 m. a. nowak , _ evolutionary dynamics _ ( harvard university press , cambridge , ma , 2006 ) .a. traulsen and c. hauert , in _ reviews of nonlinear dynamics and complexity _ , edited by h. g. schuster ( wiley - vch , weinheim , 2009 ) . c. p. roca , j. a. cuesta and a. snchez , phys .6 , 208 ( 2009 ) .r. 
durrett , _ probability : theory and examples _( duxbury press , belmont , 1996 ) , second ed . s. karlin and h. m. a. taylor ,_ a first course in stochastic processes _ ( academic , london , 1975 ) , second ed .r. a. fisher , proc .42 , 321 ( 1922 ) .s. wright , genetics . 16 , 97 ( 1931 ) .p. a. moran , _ the statistical processes of evolutionary theory _ ( clarendon , oxford , 1962 ) .m. a. nowak , a. sasaki , c. taylor , and d. fudenberg , nature ( london ) 428 , 646 ( 2004 ) .a. traulsen , j. m. pacheco and m. a. nowak , j. theor . biol .246 , 522 ( 2007 ) .f. fu , m. a. nowak and c. hauert , j. theor .266 , 358 ( 2010 ) . c. taylor , d. fudenberg , a. sasaki and m. a. nowak , bull .66 , 1621 ( 2004 ) .t. antal and i. scheuring , bull .68 , 1923 ( 2006 ) . c. taylor , y. iwasa , and m. a. nowak , j. theor .243 , 245 ( 2006 ) .h. ohtsuki , p. bordalo , and m. a. nowak , j. theor .249 , 289 ( 2007 ) .p. m. altrock and a. traulsen , new j. phys .11 , 013012 ( 2009 ) .b. wu , p. m. altrock , l. wang and a. traulsen , phys .e. 82 , 046106 ( 2010 ) .d. fudenberg , m. a. nowak , c. taylor and l. a. imhof , theor .70 , 352 ( 2006 ) .t. antal , m. a. nowak and a. traulsen , j. theor .257 , 340 ( 2009 ) .t. antal , a. traulsen , h. ohtsuki , c. e. tarnita and m. a. nowak , j. theor .258 , 614 ( 2009 ) .g. szab and g. fth , phys .446 , 97 ( 2007 ) .m. droz , j. szwabiski , and g. szab , eur .j. b 71 , 579 ( 2009 ) .g. szab , a. szolnoki , m. varga , and l. hanusovszky , phys .e 82 , 026110 ( 2010 ) .j. huisman and f. j. weissing , nature ( london ) 402 , 407 ( 1999 ) .j. huisman and f. j. weissing , ecology 82 , 2682 ( 2001 ) .j. n. darroch and e. seneta , j. appl .prob . 2 , 88 ( 1965 ) .a. hastings , trend ecol . evol .19 , 39 ( 2004 ) . g. block and l. j. s. allen , bull .62 , 199 ( 2000 ) .j. c. claussen and a. traulsen , phys .e 71 , 025101(r ) ( 2005 ) .y. tao and r. cressman , bull .69 , 1377 ( 2007 ) .m. vellela and h. qian , bull .69 , 1727 ( 2007 ) .s. g. ficici and j. b. pollack , j. theor .247 , 426 ( 2007 ) . c. m. grinstead and j. l. snell , _ introduction to probability _( american mathematical society , providence , 1997 ) , 2nd edition .m. i. freidlin and a. d. wentzell , _ random perturbations of dynamical systems _( springer , new york , 1984 ) . a. dembo and o. zeitouni ,_ large deviations techniques and applications _ ( springer - verlag , new york , 1998 ) .h. touchette , phys .478 , 1 ( 2010 ) .h. qian , nonlinearily .24 , 19 ( 2011 ) .j. wang , l. xu and e. k. wang , proc .105 , 12271 ( 2008 ) .p. d. taylor and l. jonker , math .40 , 145 ( 1978 ) .d. zhou , b. wu , h. ge , j. theor .264 , 874 ( 2010 ) .j. maynard smith , _ evolution and the theory of games _ ( cambridge university press , cambridge , 1982 ) .h. ge and h. qian , phys .103 , 148103 ( 2009 ) .r. g. strongin and y. d. sergeyev , _ global optimization with non - convex constraints : sequential and parallel algorithms _( kluwer academic publishers , dordrecht , 2000 ) .w. feller , trans .77 , 1 ( 1954 ) . c. w. gardiner , _ a handbook of stochastic methods: for physics , chemistry and the natural sciences _( springer , berlin , 1983 ) . w. y. tan , _ stochastic modeling of aids epidemiology and hiv pathogenesis _ ( world scientific pub , singapore , 2000 ) .a. traulsen , j. c. claussen , and c. hauert , phys .95 , 238701 ( 2005 ) .f. a. c. c. chalub and m. o. souza , math .47 , 743 ( 2008 ) .p. hnggi , h. grabert , p. talkner and h. thomas , phys .a 29 , 371 ( 1984 ) .n. g. 
van kampen , _stochastic processes in physics and chemistry _ ( elsevier science , north - holland , 1992 ) . t. g. kurtz , math .stud . 5 , 67 ( 1976 ) .t. g. kurtz , stoch .appl . 6 , 223 ( 1978 ) .h. qian , prot .11 , 1 ( 2002 ) .i. nsell , math .156 , 21 ( 1999 ) .h. ge and h. qian , j. roy .soc . interface. 8 , 107 ( 2011 ) .j. m. t. thompson and h. b. stewart , _ nonlinear dynamics and chaos _ ( john wiley , new york , 1986 ) .h. k. khalil , _ nonlinear systems _ ( prentice hall , upper saddle river nj , 2002 ) .h. eyring , j. chem .phys . 3 , 107 ( 1935 ) .h. a. kramers , physica . 7 , 284 ( 1949 ) .d. y. chen , j. f. feng and m. p. qian , science in china ( series a ) , 39 , 7 ( 1996 ) .m. vellela and h. qian , j. r. soc .interf . 6 , 925 ( 2009 ) .h. qian , p. z. shi and j. xing , phys .11 , 4861 ( 2009 ) .m. kimura , cold spring harbour symp .20 , 33 ( 1955 ) .a. j. mckane , d. waxman , j. theor .247 , 849 ( 2007 ) .d. waxman , j. theor .269 , 79 ( 2011 ) .w. h. sandholm , _ population games and evolutionary dynamics _( mit press , cambridge , ma , 2011 ) .ao , phys . life rev . 2 , 117 ( 2005 ) .ao , commun .49 , 1073 ( 2008 ) .l. j. s. allen , _ an introduction to stochastic processes with applications to biology _( prentice hall , upper saddle river , nj , 2003 ) .m. b. elowitz , a. j. levine , e. d. siggia and p. s. swain , science 297 , 1183 ( 2002 ) .l. cai , n. friedman and x. s. xie , nature ( london ) 440 , 358 ( 2006 ) .d. a. beard and h. qian , _ chemical biophysics : quantitative analysis of cellular systems _( cambridge university press , cambridge , 2008 ) .a. traulsen , j. c. claussen and c. hauert , phys .e 74 , 011901 ( 2006 ) .g. hu , zeit .b , 65 , 103 ( 1986 ) .figure 1 ( color online ) : transient landscapes and conditional stationary distributions : ( a ) uni - well case : the small window shows the transient landscape ; the large window shows the conditional stationary distribution with different population size .parameters are , , , , and .( b ) uni - barrier case , with parameters , , , and .( c ) uphill case , with parameters , , , .figure 2 ( color online ) : maxwell - type construction for the bistable moran process : ( a ) when the critical condition is satisfied ( , , , ) , .both are equally important .( b ) with parameters , , , , .then , and even for large population size .figure 3 ( color online ) : the original moran process and its km approximated diffusion process show different maxwell - type constructions . in this example , , but .( the figure is magnified and we focus on the region near ) figure 4 ( color online ) : the hgtt s diffusion coefficient is always smaller than km s , except at where they are both equal to each other .
|
agent-based stochastic models for finite populations have recently received much attention in the game theory of evolutionary dynamics . both the ultimate fixation and the pre-fixation transient behavior are important to a full understanding of the dynamics . in this paper , we study the transient dynamics of the well-mixed moran process through the construction of a _ landscape _ function . it is shown that the landscape plays the role of a central theoretical "device" that integrates several lines of inquiry : the stable behavior of the replicator dynamics , the long-time fixation , and the continuous diffusion approximation associated with asymptotically large populations . several issues relating to the transient dynamics are discussed : 1 ) the multiple-time-scale phenomenon associated with intra- and inter-attractoral dynamics ; 2 ) a discontinuous transition in the stochastically stationary process akin to the maxwell construction in equilibrium statistical physics ; and 3 ) the dilemma that diffusion approximation faces as a continuous approximation of the discrete evolutionary dynamics . it is found that _ rare events _ with exponentially small probabilities , corresponding to the uphill movements and barrier crossings in the landscape with multiple wells that are made possible by strong nonlinear dynamics , play an important role in understanding the origin of the complexity in evolutionary , nonlinear biological systems .
|
the precise meaning of velocity excitations has recently received renewed interest . this interest came out of the comparative use of digital waveguide synthesis and finite differencing methods . this comparison has revealed a number of related subtle difficulties that weren't overtly taken into account in these comparisons . for details see . the current paper addresses excitation mechanisms for the purpose of velocity excitations directly and by example . while basic notions have long been established , defined algorithms and their functional properties haven't been discussed yet . specifically , comparisons and notions of equivalence usually don't discuss excitations and their specific implementations . here we discuss three basic methods which relate to prior published non-algorithmic suggestions and unpublished common wisdom in the field . overall , published discussions of integration algorithms in practice are rather rare . within the field , the two sources which are most explicit about velocity excitations to waveguides are smith ( or comparable sources by the same author ) and bank's thesis . we will in turn discuss the cases of the infinite string , the string with fixed boundary conditions , the string with open boundary conditions , behavior with respect to loop filters , and computational cost for all three models . we will always use the continuous case as the comparative reference . then we will discuss interpretive issues with localized interpretation . physically correct behaviors have been discussed by the author . related examples have been derived , some rederived , recently by smith using a novel state-space transformation .

there are two related ways to derive the following algorithm . one is to consider a discretization of the fundamental solution ( equation ( 9 ) of ) . the other is to find a discrete implementation of the continuous situation as described by smith , citing morse . the latter assumes an arbitrary excitation distribution over which one can integrate . the first assumes impulsive excitations . however , any arbitrary excitation distribution can be seen as the sum of time-shifted impulses , hence these two are closely related . the first variant of the algorithm reads as follows : ( 1 ) rescale the impulse . ( 2 ) add the impulse to all taps left of the excitation point on the right-going rail of the waveguide . ( 3 ) subtract it ( or add it with inverted sign ) at all taps left of the excitation point on the left-going rail of the waveguide . ( 4 ) repeat for all excitation positions and impulses at the current time step . the second variant of the algorithm reads : rescale the distribution ; then , starting at the right-most position of the string , integrate the rescaled excitation distribution up to each position , add the result to the right-going rail and subtract it from the left-going rail . using either of these algorithms we get for a center excitation ( compare with , using smith's notation ) : the upper two rows are the right- and left-going traveling-wave components . by convention , and following smith , the upper rail will move right and the lower rail will move left . the bottom row is their sum , which is the total displacement . the line marks the excitation point . if we observe the sum , we see that we get the correct picture . we can even revert the direction of propagation and get the correct time-asymmetric case of a spreading square pulse with negative sign ( compare eq. ( 47 ) of ) . as this solution comes about as the difference of two heaviside step-functions , i'll call it the _ heaviside integration _ method for waveguides for the unbounded string . we shall see that it readily extends to the bounded case . hence this is one way of loading a waveguide that is physically accurate ( throughout this paper "physical" or "physically accurate" will mean "comparable results to the continuous solution of the wave equation" ) .
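a minimal sketch of this loading rule follows ( my own illustration , assuming a two-rail displacement-wave waveguide stored as two python lists ; the rescaled impulse value h , the buffer size and the excitation tap are arbitrary ) . feeding the value h back in at the left edge during propagation stands in for the infinite extension of the heaviside step beyond the observation window .

```python
# sketch: heaviside loading of a single velocity impulse on an "infinite" string.
def heaviside_load(right, left, k, h):
    """add +h to the right-going rail and -h to the left-going rail at all taps <= k."""
    for x in range(k + 1):
        right[x] += h
        left[x] -= h

def propagate(right, left, h):
    # the right-going rail moves right; the value h entering at the left edge emulates
    # the infinite extension of the heaviside step beyond the window.
    right[:] = [h] + right[:-1]
    left[:] = left[1:] + [0.0]      # the left-going rail moves left

M, k, h = 16, 8, 0.5                # taps, excitation tap, rescaled impulse (assumed)
right, left = [0.0] * M, [0.0] * M
heaviside_load(right, left, k, h)
for t in range(4):
    print([round(r + l, 2) for r, l in zip(right, left)])   # total displacement y = r + l
    propagate(right, left, h)
```

the printed rows reproduce the spreading rectangular pulse expected from the continuous velocity-impulse response .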
very few papers discuss velocity excitations explicitly . bank is an exception . he employs what i will call "input-side integration" . the idea is to integrate a velocity input before feeding it into a waveguide to arrive at a velocity excitation , a procedure that is suggested by the integral relationship of the two . if we interpret the waveguide to be a spatial discretization of the string with both rails sharing the same spatial position and we excite at this spatial point , we get the following result for an impulsive excitation : we see a peak at the excitation point that doesn't exist in the correct simulation discussed earlier and in the continuous simulation . it's called _ bank's anomaly _ after balazs bank , who was the first to point it out . this anomaly specifically appears at the point of excitation , which is exactly how bank found it . one can show that it also appears at the center-symmetric position on the string due to constructive interference . a non-linear hammer coupling needs to know the local displacement . a question remains to be answered , which is whether the anomaly disappears when the excitation is completed . but we see an immediate way to resolve it . if the excitation point is between spatial sampling points , the anomaly disappears . hence we cannot naively choose excitations on a spatial sampling point without taking the anomaly into account . see bank for a number of possible resolutions . these yet lack a clear physical interpretation . _ bank's anomaly _ points at the difficulty of the spatial representation of excitation points , a topic yet to be explored in detail . i will not attempt to address it here . to get the correct time-asymmetric pattern upon inverting the direction of the rails , we need to invert the signs of the excitation .

finally , one can consider "output-side integration" . here we integrate the sum of the rails carrying velocity waves to get one accumulated displacement representation . the following diagram contains a fourth row , which contains the integration of the sum above it . hence we see that input-side and output-side integrations behave comparably .
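the anomaly can be reproduced in a few lines . the sketch below ( an illustration only ; the split of the integrated signal between the two rails and the overall scaling are assumptions ) feeds a non-leaky integral of a velocity impulse into both rails at a single spatial sampling point and prints the resulting displacement : the plateau spreads as expected , but the excitation tap carries twice the plateau value .

```python
# sketch: input-side integration with the excitation placed on a spatial sampling point.
def propagate(right, left):
    right[:] = [0.0] + right[:-1]    # right-going rail moves right
    left[:] = left[1:] + [0.0]       # left-going rail moves left

M, k = 17, 8                         # taps and excitation tap (arbitrary)
right, left = [0.0] * M, [0.0] * M
velocity_input = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # one velocity impulse, then silence
integrator = 0.0

for v in velocity_input:
    propagate(right, left)
    integrator += v                  # non-leaky integration of the velocity input
    right[k] += 0.5 * integrator     # feed the integrated signal into both rails
    left[k] += 0.5 * integrator

print([round(r + l, 2) for r, l in zip(right, left)])   # note the doubled value at tap k
```

with the heaviside loading of the previous section the same excitation produces a flat rectangle without the doubled sample at the excitation tap .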
in the input-side case the problem of non-leaky integration is contained. this we will discuss when introducing boundary conditions. the two approaches have, however, a crucial difference. the _content of the traveling waves differs_ and hence in general the _filters in the loop differ_ if they are to achieve the same final result. after all, in one case the filter will see impulses as input whereas in the other case it will see step functions. a number of hybrid approaches have been proposed. i will not attempt to discuss them here, as the goal is to understand velocity excitations in a purely waveguide setting. see for example . next we will introduce boundary conditions. this raises another question: how do the respective approaches to integration compare with respect to boundary conditions? first we refer to for the ground truth. the solution is periodically extended at the boundaries, which creates mirror images of the solution ( hence the name ``method of images'' in the theory of partial differential equations ). the image is chosen so as to satisfy the boundary condition. in the case of a string with fixed ends ( dirichlet boundary conditions ) it is well known that for displacement waves the boundary inverts the sign, hence the image is sign-inverted with respect to the original. see figure 3 in for a center excitation. if the excitation is off-center we get a parallelogram with maximum width less than the length of the string. in either case there are three possible states: unit positive, unit negative and zero extension. the three states alternate, with a zero-extension state always lying between unit-extension states of different sign. [ draft note: add figure for the off-center case, comparable to figure 3 in , to illustrate this. ] we also observe that the transition from the vanishing of the negative extension to the positive extension looks like the discrete case illustrated in eq. ( 47 ) of and hence corresponds to the case also observed under time reversal for the heaviside integration method. in the case of a string with open ends ( neumann boundary conditions ) the displacement waves do not invert sign at the boundary. here we get a linear accumulation with every reflection. the geometric picture is the same as in the dirichlet case, except that the former zero-extension states show even increasing accumulation, and the other states odd increasing accumulation. see also for a formal derivation of this property. inverting boundary conditions should create the right image of traveling waves. we will thus use these conditions and observe the various methods. these are the respective results of the heaviside integrator for time steps equivalent to half a string length. the diagram is as before except that vertical lines at each side denote the boundary. the excitation is at the midpoint: observe that the pattern repeats and matches the continuous case. the off-center case can readily be plugged in for similar results. the excitation is placed in the middle of the string and constitutes loading the result of a non-leaky integrator, fed by an impulse, in equal parts left and right of the excitation point into the respective traveling waves.
observe that the integration never stops over the full period. however, if the integration is stopped at any point the pattern will be inconsistent with the continuous case. we also see that after a full period the integration still needs to continue, as we have returned to the original state. hence there is no finite-length loading using the input-side excitation method. again, off-center excitations follow the same pattern without major differences. same excitation as before: hence we see that the output-side integration yields the correct pattern. the integration continues indefinitely by definition. however, compared to the input-side integrator we observe that non-trivial additions ( additions other than adding zero ) in the integrator happen twice per period for one impulsive excitation, whereas the integrator on the input side only needs to store the impulse and hence has no further non-trivial additions after its initial loading. this means that under sparse input conditions _input-side integration is numerically favorable_. more precisely, in the worst case an output-side integrator will see indefinite non-trivial additions at every time step even for excitations of a maximum length of twice the string, while input-side integrators will see non-trivial additions only at the time steps when the excitation changes. if the excitation is indefinitely non-trivial, the two methods are comparable with respect to addition inaccuracies. output-side integration for lossless strings is impractical because any numerical error in the addition will accumulate, though only linearly, as the error is not fed back into the waveguide iteration. input-side integrators will feed numerical errors into the string, yet again only linearly, as the content of the waveguide is not fed back into the integrator. this changes in the case of non-linear coupling mechanisms, and additional care must be taken. next we discuss integration methods for strings with loose ends. it is well known that this corresponds to reflections without sign inversion at the boundary for displacement waves. these are the respective results of the heaviside integrator for time steps equivalent to half a string length. the excitation is at the midpoint: hence we see that the heaviside integration does not yield the correct accumulation of displacement that is seen in the continuous case. why this simulation breaks down remains to be explored. the excitation is placed in the middle of the string and constitutes loading the result of a non-leaky integrator, fed by an impulse, in equal parts left and right of the excitation point into the respective traveling waves. observe that the integration never stops over the full period and again needs to continue. we observe the correct accumulation of linear displacement. same excitation as before: hence we see that the output-side integration yields the correct pattern. the integration continues indefinitely by definition. finally i want to discuss the impact of filters, in particular the most basic case of linear damping filters, corresponding to a gain multiplication in the loop, on the schemes discussed. it is easy to see that the heaviside integration method is well behaved with respect to such a loop filter. all the data is present and at least linear damping will dissipate all information without trouble.
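the following sketch, again only illustrative, revisits output-side integration on the unbounded string: the rails carry velocity waves, a non-leaky integrator per tap accumulates their sum into displacement, and the excitation is assumed to be split half and half between the two rails at a single tap. it also applies the resolution quoted earlier, subtracting half the excitation from the integrator at the excitation tap, which removes the spurious peak there.

```python
import numpy as np

n, pos, steps = 13, 6, 5
right, left = np.zeros(n), np.zeros(n)        # rails carrying velocity waves
right[pos] += 0.5                             # unit velocity impulse, split half and
left[pos]  += 0.5                             # half between the rails (assumption)

disp = np.zeros(n)                            # one non-leaky integrator per tap
disp_fixed = np.zeros(n)
disp_fixed[pos] -= 0.5                        # subtract half the excitation from the
                                              # integrator at the excitation tap
for t in range(steps):
    v = right + left                          # velocity seen at every tap
    disp += v                                 # output-side integration
    disp_fixed += v
    right[:] = np.roll(right, 1); right[0] = 0.0   # unbounded string, no reflections
    left[:]  = np.roll(left, -1); left[-1] = 0.0

print(disp)        # spurious peak of 1.0 at the excitation tap (the anomaly)
print(disp_fixed)  # uniform spreading square pulse of height 0.5
```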
the impact of phase delay and its relationship to physically observable effects on the waveform on the string is more complicated and should be investigated separately. the same in fact holds for non-integrating simulations, where the exact relationship between phase delay and the physically correct waveform is usually treated phenomenologically. the output-side integration case is troubled by a linear gain loop filter. amplitude in the waveguide is dissipated, but prior states in the output integrator may still carry older, higher amplitudes. hence subtracting them will leave an incorrect remainder in the integrator proportional to , where is a positive gain factor less than 1. a potential solution is to introduce a matched leak in the integrator. however, a precise match is critical to avoid numerical inaccuracies. the input-side integration case is difficult to damp with loop filters, as the integrator will indefinitely feed input into the loop. hence the filters need to be matched at the input to avoid this problem. overall, of all the integration methods studied, only the heaviside integration method is numerically well behaved and easily adaptable to loop filters as customary in waveguide synthesis. however, it cannot be used for a neumann simulation in its current form. the other methods have to be handled with care, for they use non-leaky integrators, which are numerically unstable. the computational cost of the heaviside integration method depends on the excitation position. if the excitation position is at the far end of the string, the maximum integration length of twice the string length ( once for each rail ) is reached. if a choice of rail direction is permissible in the particular implementation, this can be reduced to one string length by choosing the positive rail to correspond to the shorter side of the string. this integration has to be performed per impulsive velocity excitation. hence we get an overall computational complexity of and if is treated as a constant. this is independent of, and in addition to, the complexity of the waveguide iteration. we denote by the complexity of the waveguide computation accumulated over time steps. hence we get a total complexity of . the computational cost of input-side integration is one non-trivial loading per time step of the waveguide iteration. additionally a constant number of operations is necessary for changes in the integrator on non-trivial input. hence the complexity is . the computational cost of output-side integration depends on the spatial distribution of the integration. typically only one observation point is of interest. then one integration per time step is necessary, independent of the output, and the complexity is thus . if the full length of the string is required this becomes . we observe that the local, output-side integration is computationally most efficient, while numerically least desirable. the heaviside integrator is never cheaper than the input-side integrator, the extra cost depending on the length of the string and the position of the excitation. yet this is bought at desirable numerical properties and ease of use. it is a repeated belief that traveling velocity waves can be calculated from displacement waves by calculating their difference instead of their sum. this is considered a form of localized velocity excitation. difficulties with this belief have in essence already been addressed by the author elsewhere .
here i would like to discuss this difficulty based on the given examples. assume that one wants to simulate a local velocity impulse and its effect on displacement. using the above prescription, one might implement the following algorithm: [ draft note: needs introduction to notation. this relates to ] and , which is the difference of the standard displacement impulse and , ignoring potential needs for rescaling. initially there is no displacement, in accordance with expectations for a velocity excitation. however, as time evolves one sees an isolated negative impulse traveling left and an isolated positive impulse traveling right ( see also eq. ( 45 ) ). yet if we excite the lossless wave equation with a velocity impulse, we get a different picture: a spreading square pulse. clearly the simple use of the relationship between displacement and velocity waves in waveguides gives an incorrect result. the interpretation of the difference of displacement waves can also be seen in the simulations provided here. observe equations ( [ eq : h1]-[eq : h3 ] ). we see that these pictures do not violate the naive interpretation of the relation of velocity to displacement waves. indeed we have zero displacement everywhere, and the data present make the displacement zero by taking the difference. however, it does give us a clue as to the difficulty in using the interpretation that led us to the naive approach in the first place. if we accept that the difference between rails gives velocity, and we observe that a step function has been loaded into the waveguide, we have to conclude that the waveguide contains a semi-infinitely extended velocity. yet rather than the string moving upward on the whole of the semi-infinite half-line, it starts to spread only from the initial excitation point. clearly there is a difficulty in using the naive interpretation, and clearly the difference between the two traveling-wave rails cannot be velocity. there is another reason for this, which is dimensionality. the sum and the difference of two variables of the same dimensionality will stay of that dimensionality: the sum and difference of displacements remains a displacement. so to get displacement waves one has to not only take the difference; one also has to integrate the inertial velocities to make sure they have the correct dimensionality. this integration creates the step functions that we observe above. this is indeed peculiar, as step functions are non-zero out to infinity and hence are non-local. it has in fact been pointed out that variables in waveguides in some constellations have this non-local property . the string, like any mechanical system, requires displacement and velocity for full specification, so we observe that one of them has a non-local property when compared to the other. hence ``localized velocity excitations'' should be considered displacement excitations in terms of the dimensionality of the quantities involved. they also share the time-symmetric properties of displacement excitations ( see eq. ( 49 ) in ). this indicates that one cannot in fact so readily go from displacement to velocity. one is non-local with respect to the other, and the conversion is not only a difference but also an integration or differentiation, see . we discussed three integration methods for velocity excitations in displacement waveguide simulations. they differ in terms of numerical properties, relation to loop filters, computational cost and generality.
for most situations the heaviside integration algorithm is the most desirable, except for strings with loose ends, for which this method is incorrect. of the remaining methods, input-side integration is generally more desirable than output-side integration, for numerical reasons and for its impact on loop filters. localized velocity excitations will generally yield results different from the wave equation. loop filters not designed with the integrating behavior of displacement waves in mind may inaccurately model the desired behavior; hence sufficient care must be taken. it is worthwhile to note that the excitation algorithms presented here do not constitute a complete excitation description with respect to the wave equation. in general both displacement and velocity waves are present at the same time, and hence excitations of both types can occur in any linear mixture. i am grateful for discussions with matti karjalainen and cumhur erkut and for the comments of two anonymous referees of an earlier manuscript in review, which all contributed to my thinking on the subject of this manuscript. g. essl, unpublished manuscript, preprint http://arxiv.org/abs/physics/0401065; a revised version of this manuscript is currently under review for publication ( unpublished ). j. o. smith, unpublished manuscript, preprint http://arxiv.org/abs/physics/0407032 ( unpublished ). j. o. smith, ``acoustic modeling using digital waveguides,'' in _ musical signal processing _, edited by c. roads, s. t. pope, a. piccialli, and g. de poli ( swets, lisse, netherlands, 1997 ), chap. 7, pp. 221-263. w. f. ames, _ numerical methods for partial differential equations _, 3rd ed. ( academic press, san diego, 1992 ). b. bank, ``physics-based sound synthesis of the piano,'' master's thesis, budapest university of technology and economics, 2000; also available at helsinki university of technology, laboratory of acoustics and audio signal processing, report 54. b. bank, ``nonlinear interaction in the digital waveguide with the application to piano sound synthesis,'' in _ proceedings of the international computer music conference ( icmc-2000 ) _ ( icma, berlin, germany, 2000 ), pp. . m. karjalainen, ``mixed physical modeling: dwg + fdtd + wdf,'' in _ proceedings of the 2003 ieee workshop on applications of signal processing to audio and acoustics _ ( ieee, new paltz, new york, 2003 ), pp. . m. karjalainen and c. erkut, ``digital waveguides vs. finite difference structures: equivalence and mixed modeling,'' manuscript, accepted for publication in eurasip j. appl. signal process. ( unpublished ). a. krishnaswamy and j. o. smith, ``methods for simulating string collisions with rigid spatial obstacles,'' in _ proceedings of the 2003 ieee workshop on applications of signal processing to audio and acoustics _ ( ieee, new paltz, new york, 2003 ), pp. . m. e. taylor, _ partial differential equations i: basic theory _ ( springer, new york, 1996 ). k. f. graff, _ wave motion in elastic solids _ ( dover, new york, 1991 ). j. o. smith, ``digital waveguide modeling of musical instruments,'' draft of online manuscript, available at http://ccrma-www.stanford.edu/~jos/waveguide/ ( unpublished ).
|
elementary integration methods for waveguides are compared: one using non-local loading rules based on heaviside step functions, one using input-side integration, and one using integration of the output of traveling velocity waves. we show that most approaches can be made consistent with the wave equation in principle, under proper circumstances. of all the methods studied, the heaviside method is the only one shown not to suffer from undesirable numerical difficulties and to be amenable to standard waveguide loop-filtering practices, yet it gives incorrect results for neumann boundary conditions. we also discuss localized velocity excitations, time-limited input-side excitations and the relation of loop filters to wave variable type.
|
quantum key distribution is the art of allowing two distant parties alice and bob to remotely establish a secret key combining with an authenticated classical channel and a quantum channel .the unconditional security of quantum key distribution bases on fundamental laws of quantum mechanics .the unconditional security of qkd protocol with perfect experimental setup ( the source is perfect single photon source and so on ) has been respectively given by lo , chau , shor , preskill , renner et al .furthermore , gottesman , lo , lukenhaus and preskill ( gllp ) have given an analysis of security of qkd bases on practical source .in their security analysis , the final secret key bit can be generated if we can estimate the lower bound of the secret key bits generated by the single - photon pulse .more recently , scarani has analyzed security of qkd with finite resources .meanwhile , the experimental realization about qkd also has a rapid progress in recent years . in practical quantum key distribution realizations ,umzi method is commonly used . however , the phase modulator ( pm ) in the interferometer is not perfect , which means it will introduce much more loss than the arm has no pm . as a result , in this case photon states emitted by alices side is not the standard bb84 state ( we call it unbalanced states in the next section ) , which means security of quantum key distribution based on this states can not be satisfied with gllp formula . of course , one can give a simple security proof when the loss of the pm is considered as an operation controlled by eve and then gllp can be used to calculate the final secret key rate .however , the secret key rate in this case is not optimal . to give an optimal security proof of qkd with unbalanced bb84 states , we propose that the real - life source can be replaced by a virtual source without lowering security of the protocol ,then the final secret key rate can be improved obviously .in this section , we will introduce the general qkd setup with umzi method , a schematic of this qkd setup can be illustrated as in fig . 1 . in alices half - interferometer , there is a phase modulator ( pm ) in the long arm .correspondingly , the weak coherent state entering the quantum channel can be divided into coherent state from the short arm and from the long arm respectively , both of them can be given as the following , where , is the weak coherent states in the short arm , is the weak coherent states in the long arm after the pm , is the photon state in the short arm , is the photon state in the long arm after the pm , denotes the mean photon number of the short arm , denotes the mean photon number of the long arm , is the phase modulated by the pm in the long arm . is selected uniformly at random because of eve has no prior knowledge of the phase . combining with lo and preskill s method of phase randomization in ref . 
, the density matrix of the state emitted by alice is where , is the mean photon number in the quantum channel emitted by alice , is the creation operator .we have given the practical state as in equations ( [ munu1],[munu2 ] ) , thus the practical single photon state after the pm is ( the phase of the photon from the long arm is randomly modulated by , , , ) the long arm and short arm have the same loss in the ideal case , which means pm in alice s side is perfect , thus we can get .however , the long arm will introduce much more loss than the short arm in practical side , which means in practice , the state emitted by alice s side becomes unbalanced correspondingly . for simplicity, the practical pm can be replaced by a perfect pm plus an unbalanced attenuator , the unbalanced attenuator only attenuate the photon from the long arm , while the photon from the short arm can be past without any attenuation .since the single photon state in this case is not the perfect bb84 state for the eavesdropper eve , formula for the final secret key rate ( gllp ) can not be satisfied in this case . to solve this problem, we will analyze the final secret key rate with practical umzi method . for improving the final secret key rate ,an unitary transformation will be proposed in the next section .in this section , we will first give a very simple security analysis of qkd with practical unbalanced mach - zehnder interferometer .as mentioned in section [ qkd with unbalanced mach - zehnder interferometer ] , the half - interferometer in alice s side can be seen as an unbalanced attenuator in the quantum channel .a simple security proof in this case can be illustrated as the following .we can simply assume the unbalanced attenuator is not controlled by alice , which is part of the quantum channel controlled by eve , then the quantum state emitted by alice is standard bb84 states , and the final secret key rate can be calculated . based on this assumption , combining with security analysis of the ideal decoy method qkd protocol , the upper bound of secret key bit rate generated by standard single - photon bb84 states can be given by where , is the probability distribution of the single photon state , is the loss efficiency in the quantum channel , is the fiber length , is the pass efficiency in alice s side , is the pass efficiency in bob s side .therefore , according to secret key rate formula gllp , the final secret key bit rate is just the key rate with ideal pm but is lowered by the attenuation constant .however , the final secret key rate is low in this case . for improving the final secret key rate ,an unitary transformation will be proposed as in fig .2 . we will prove the practical state , as in equation ( [ practical states ] ) , is as security as standard bb84 states by considering the practical state is equal to the standard bb84 state combining with an unitary transformation .the unitary transformation in our paper is an virtual setup , which does not need to be implemented in practical qkd experimental realization , the detailed illustration of the unitary transformation is given as the following , where , , are mutually orthogonal states in alice s system , which are unknown to alice .the pass efficiency of the single photon states is .the practical photon state in alice s side can be seen as the standard bb84 state emitted by a virtual source combining with the unitary transformation .then security of practical qkd setup is equal to security of qkd with the virtual source . 
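as a purely schematic illustration of where the interferometer loss enters the key-rate bookkeeping, the following sketch evaluates a standard decoy-state gllp-style lower bound ( in the form used in the decoy-state literature cited below ) once with the long-arm loss folded into the channel, as in the naive argument, and once with that loss kept inside alice's source, as suggested by the virtual-source construction. the parameter values, the average interferometer transmission, and the assumption that the mean photon number entering the channel is held fixed in both cases are all illustrative; this is not the paper's own formula or simulation.

```python
import numpy as np

def h2(x):
    # binary shannon entropy
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def gllp_rate(mu, eta, e_det=0.033, y0=1.7e-6, f_ec=1.22, q=0.5):
    # standard decoy-state gllp-style lower bound; parameter values are roughly the
    # gys set quoted in the decoy-state literature, used here only for illustration.
    q_mu = y0 + 1.0 - np.exp(-eta * mu)                     # overall gain
    e_mu = (0.5 * y0 + e_det * (1.0 - np.exp(-eta * mu))) / q_mu
    y1 = y0 + eta                                           # single-photon yield
    q1 = y1 * mu * np.exp(-mu)                              # single-photon gain
    e1 = (0.5 * y0 + e_det * eta) / y1
    return q * (-q_mu * f_ec * h2(e_mu) + q1 * (1.0 - h2(e1)))

eta_alice = 0.7                                  # assumed average transmission of alice's umzi
eta_link = 10 ** (-0.21 * 20.0 / 10.0) * 0.045   # 20 km fibre times bob's efficiency (assumed)
mu = 0.48                                        # assumed mean photon number entering the channel

r_naive = gllp_rate(mu, eta_alice * eta_link)    # interferometer loss handed to eve
r_virtual = gllp_rate(mu, eta_link)              # loss kept inside alice's source
print(r_naive, r_virtual)
```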
combining with the method given in ref . , we can prove the unconditional security of new source qkd in the following . as mentioned in section [ qkd with unbalanced mach - zehnder interferometer ] , the probability that single photon unbalanced states emitted by alice s side is .combining with the unitary transformation , the probability distribution of the single photon state of the virtual source emitted by alice s side is similarly , the probability distribution of the vacuum state and multi - photon state of the virtual source can be given by obviously , the real - life setup of alice can be replaced by the virtual source and the basis - independent unitary transformation equivalently . according to gllp, the basis - independent unitary transformation can not lower the security of the qkd with the virtual source , then we can calculate the upper bound of the secret key bit rate , that is comparing with equation ( [ practicalkey ] ) , upper bound of the secret key bit rate can be improved times with our security analysis . from equation ( [ ourkey ] ) , we can see the real - life unbalanced single photon states can be generated from standard single photon states safely .therefore , security of the real - life unbalanced single photon states is the same as standard single photon states , and gllp formula is satisfied in this situation . combining with our security analysis , one click of one real - life unbalanced single photon states can bring one key bit safely .one may argue that the problem can be taken away by adding the same pm on the short arm , which can be controlled by alice .however , the same pm should be added in bob s short arm for lowing the bit error rate in this case , thus the secret key rate is where is the pass efficiency in alice s side , is the pass efficiency in bob s side .obviously , equation ( [ practicalkey2 ] ) is the same as equation ( [ practicalkey ] ) , thus the secret key rate is lower comparing with our situation . on the other hand , all the unbalanced multi - photons emitted by alice s side can not carry any secret key bit , thus we can conclude that our key bit rate is optimal .similar to the method in ref . , we will give the simulation result of the lower bound of the final secret key rate by considering equations ( [ practical1 ] ) and ( [ practical2 ] ) in this section .the mean photon number of is and the mean photon number of is in the simulation . combining with gys parameters ,the simulation result can be shown in fig .3 . from the simulation result, we can see that the longest transmission distance can be improved obviously with our security analysis .we have analyzed security of umzi method qkd with practical phase modulator by introducing a virtual source and an unitary transformation in this paper .correspondingly , the optimal key bit rate has been given with our security analysis . from the simulation result, we can see that our method can improve the final secret key rate obviously . quite interestingly ,if the different loss was compensated actively , the final secret key rate will be lower comparing with no compensation .this work was supported by national fundamental research program of china ( 2006cb921900 ) , national natural science foundation of china ( 60537020 , 60621064 ) and the innovation funds of chinese academy of sciences . whom correspondence should be addressed , email : zfhan.edu.cn .hwang , phys .91 , 057901 ( 2003 ) .lo , x. ma , and k. chen , phys .rev . 
lett .94 , 230504 ( 2005 ) .wang , phys .94 , 230503 ( 2005 ) .xiang - bin wang , c .- z .peng , j. zhang , l. yang , jian - wei pan , phys .a 77 , 042311 ( 2008 ) .x. ma , b. qi , y. zhao , h .- k .lo , phys .a 72 , 012326 ( 2005 ) .
|
security proofs of practical quantum key distribution ( qkd ) have attracted a lot of attention in recent years. most real-life qkd implementations are based on the phase-coding bb84 protocol, which usually uses an unbalanced mach-zehnder interferometer ( umzi ) as the information coder and decoder. however, the long arm and short arm of the umzi introduce different losses in practical experimental realizations, so the state emitted by alice's side is no longer the standard bb84 state. in this paper we give a security analysis for this situation. counterintuitively, active compensation for this different loss will only lower the secret key bit rate.
|
entanglement , as a term of joint quantum coherence , is one of the most intriguing elements of quantum mechanics and it is crucial in quantum information tasks .however the existence of an interacting reservoir or environment that leads to decoherence and/or disentanglement places an obstacle to the maintenance of joint quantum coherence during any dynamical process . thus the study and control of entanglement dynamics has received wide attention in recent years ( see reviews ) .there have been studies of entanglement dynamics from many points of view .examples involve open system treatments or closed quantum scenarios such as cavity qed systems , spin systems , etc .many interesting and sometimes surprising findings such as entanglement sudden death , sudden birth , revivals , dynamical relations with quantum state transfer , and other exotic types of entanglement evolution have been reported .such interesting phenomena accompany the idea of tracking entanglement as a carrier of quantum information , a generalization of entanglement swapping .one consequence has been the discovery of examples of non - trivial information conservation " among three or more parties , arising in cases of sufficiently symmetric interaction hamiltonians , or special initial states , or reservoirs that are sufficiently small that their state evolution can be followed in detail , such as in perfect - mirror closed - system cavity qed and in spin systems .however true reservoirs are complex and difficult to follow , especially if mixed state considerations are important .no general closed - form rules of entanglement transfer are known in such cases . in this paperwe revisit quantum information flow from a different perspective and derive a new class of entanglement constants of motion .our approach employs amplitude channel dynamics and avoids information loss by tracing , while remaining open to non - markovian as well as markovian reservoir behavior .we note that the system of experimental interest , which may be one or more qubits , is almost always prepared in a pure state if possible , and frequently the method of preparation produces a pure entangled state .this means that the system itself is in a mixed rather than a pure state .we assume that the entanglement during state preparation , causing the mixedness , arose via interactions that have ceased prior to the beginning of a period of interest at .this period of interest could simply be intended for quantum memory preservation or for specific state manipulations .the static disengaged nature of the prior entanglement partner , and also its lack of specificity in our treatment , reduce it to a vague background object in any further qubit evolution , and for this reason we label it the moon " . andleaves everything else out .the dashed line indicates its entanglement , but not interaction , with the unspecified background moon " .the arrow represents interaction between and an arbitrarily - dimensioned unit , which can be the quantum vacuum reservoir , a single mode cavity , an xy spin chain , etc.,width=151 ] a general sketch of our scenario is given in fig .unit is taken as a two - level system ( qubit ) and unit as a separate quantum system of arbitrary dimension interacting with it , nominally a reservoir .the moon , i.e. 
, the non - interacting , unspecified , and completely static background , is entangled via an earlier preparation stage with .there obviously remains a wide choice for systems acting as environments that promote evolution of the system of interest after .we will illustrate a range of possibilities with concrete results in various specific interaction contexts : spontaneous emission , jaynes - cummings ( jc ) cavity dynamics , and xy spin chain interactions .these present very different physical situations and interaction mechanisms , and lead to distinct entanglement dynamics , but they all react similarly to the initial moon entanglement .our linked information constants arise from amplitude channel dynamics but do not rely on symmetries of the hamiltonian or of any special initial state , in contrast to the cases in some previous work .in this section we address our approach to entangled state analysis .the hamiltonian of our scheme reads where , and are the hamiltonians of the qubit system , reservoir " unit and the previously - interacting moon respectively ; and denotes the only existing interaction , that between and .we start from the - entangled preparation state , i.e. , the joint superposition state where and are the excited and ground states of our qubit system , and defined as a complete basis set for .it need not be the case generally , but we assume in our example that the moon states and are orthogonal , and because is not interacting with either or they are effectively static : we adopt a conventional approach to the interacting partner system labelled , assuming that it is separable from the qubit at , just as in the conventional treatment of a reservoir in quantum open system dynamics . therefore the entire initial state can be written as where is a normalized state of unit , and is a complete basis for .usually is the ground state of part .we note that since is not interacting , the evolution of the states and are driven only by the hamiltonian , i.e. , the dynamics of the - part can be separated from . therefore we will only need to focus on the - dynamics when we study the time evolution . before we proceed to the time dependent state in various specific models in the following sections, we will first discuss the initial moon entanglement . as we know , the pure - state relation between the two sides of any bi - partition is an dimensional matrix ( where and can be any numbers or infinite ) that may connote entanglement , but in any event permits a schmidt - type decomposition of the joint state .we use the schmidt parameter introduced by grobe , et al . , as our quantitative measure of entanglement , where is not simply the dimension of the space but rather relates to the number of schmidt modes that make a significant contribution to the state .therefore we name this parameter the schmidt weight " from now on .the range of this schmidt weight , ( is the effective dimension of the space ) corresponds to the concurrence range , when concurrence is also applicable .the upper and lower ends of both ranges denote maximal and zero entanglement , respectively . 
the schmidt weight between two parties and of a general pure state is defined as ^{-1 } , \label{schmidt}\ ] ] where these are the non - zero eigenvalues of the reduced density matrix for either system , or : =\mathrm{tr}_{\alpha } \big[|\psi_{\alpha\beta}\rangle\langle\psi_{\alpha\beta}|\big ] = cc^{\dag}.\ ] ] here is the coefficient matrix connecting the two separate arbitrary complete bases and of systems and respectively , with latexmath:[\[|\psi_{\alpha\beta}\rangle=\sum_{n,\mu}c(n,\mu)|n_{\alpha}\rangle\otimes are also the coefficients of the usual schmidt decomposition where and are the orthonormal schmidt states satisfying .since a general pure state is usually in some arbitrary basis other than the schmidt basis , it is natural for us to follow the coefficient matrix procedure to calculate the schmidt weight ( [ schmidt ] ) .accordingly we note from the initial state ( [ initial ] ) that the coefficient matrix for the moon in the basis of , , , ... , and the interacting partner in the basis , is an matrix which is given as where the two - dot sign " represents the elements for all the other , while the three - dot sign " represents empty rows or columns of zeros for the infinite number of remaining matrix elements .the reduced density matrix however is simply we note that the qubit has reduced the effective interaction space of the moon to a two dimensional subspace , which means that in this context a two - state subspace of the moon is in fact quite general .the non - zero eigenvalues of the above matrix are obvious , and the resulting schmidt weight denoting the entanglement between the moon the remainder is given as since the moon is not interacting , its internal dynamics only amount to a local unitary transformation , which will not affect the entanglement between and the rest , so is independent of time .we note that there is no moon entanglement ( ) when or 0 . in these casesthe initial state is a trivial product state .otherwise .it is particularly interesting when the moon restricts the - dynamics and acts as a monitor of the entire entanglement flow .the following sections take a few specific examples of the - interaction to show the role of moon entanglement in their particular entanglement information dynamics .in this case the qubit system is a two level atom and unit is the quantum vacuum reservoir consisting of the continuum of photon modes .the atom will of course decay to its ground state asymptotically and irreversibly , while one photon is emitted .we write the hamiltonian in the usual way as a sum of atom and reservoir contributions : here is the usual pauli matrix , and the usual boson operators represent the reservoir with a continuum of modes , where and denote the standard creation and annihilation operators respectively , and and are the atom and reservoir frequencies . here , labels the infinitely many modes .the interaction hamiltonian is also standard : where , are the usual raising and lowering pauli operators for the two level system , and the are coupling constants between the reservoir and the atom , for which fundamental expressions are well known here is the angle between the atomic dipole moment and the electric field polarization vector , and is the quantization volume . 
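the schmidt weight of eq. ( [ schmidt ] ) is straightforward to evaluate numerically from a coefficient matrix. the following sketch does exactly that and applies it to the effective 2 x 2 coefficient block of the moon obtained above; the value of the mixing angle is an arbitrary choice for illustration.

```python
import numpy as np

def schmidt_weight(c):
    # schmidt weight K = 1 / sum_n lambda_n^2, with lambda_n the eigenvalues of the
    # reduced density matrix rho = C C^dagger built from the coefficient matrix C.
    c = np.asarray(c, dtype=complex)
    c = c / np.linalg.norm(c)                  # normalise the joint pure state
    lam = np.linalg.eigvalsh(c @ c.conj().T)
    return 1.0 / np.sum(lam ** 2)

theta = np.pi / 3                              # arbitrary mixing angle for illustration
c_moon = np.array([[np.cos(theta), 0.0],       # effective 2 x 2 block of the moon's
                   [0.0, np.sin(theta)]])      # coefficient matrix from the initial state
print(schmidt_weight(c_moon))                  # 1 / (cos^4 + sin^4), constant in time
```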
according to our generic description in eq .( [ initial ] ) , the initial state can be rewritten in the spontaneous emission case as where and are the excited and ground state of the two level atom , and we have defined , indicating that all the reservoir modes are in their vacuum states . with the help of the weisskopf - wigner treatment we will find where the coefficient with as the natural line width , denotes that there is one photon in the reservoir mode while all the rest of the modes are empty and the coefficients are time dependent . the probability to find the atom in the excited state is , which decays to zero asymptotically and irreversibly . with the dynamical state ( [ spon state ] ) we can begin to calculate the schmidt weight , or , representing the entanglement between qubit , or vacuum reservoir , and their corresponding remainders .as defined in the last section , the coefficient matrix between and the remainder for the time dependent state ( [ spon state ] ) is given as where the two - dot sign " represents for all the other and the three - dot sign " again represents empty rows or columns of zeros .then the reduced density matrix is simply a form now according to the definition in eq .( [ schmidt ] ) , we immediately have the qubit entanglement given by the expression ^{2}+1}. \label{sponk_a}\ ] ] we note that at , a finite number that is naturally the same as the constant moon entanglement ( [ moonentanglement ] ) . as time goes on , the probability of the atom in the excited state decays gradually , and at the probability is completely transferred to the ground state and leaves the atom in a product state with its remainder system , which means eventually is disentangled from the rest of the universe , and the schmidt weight .[ figse ] illustrates the behavior as a function of at four different values .we note that in the region when , starts from a finite value , evolves to a local maximum and then decays irreversibly to as is shown in fig .[ figse ] ( a ) and ( b ) .however when as is shown in fig .[ figse ] ( c ) and ( d ) , decays directly and irreversibly to .now let us focus on the reservoir entanglement . from the timedependent state ( [ spon state ] ) we see that the coefficient matrix of reservoir is given as then the reduced density matrix is given by with an infinite number of non - zero eigenvalues for .now from the definition ( [ schmidt ] ) we find the reservoir schmidt weight : ^{2}+ 1}. \label{sponk_a}\ ] ] obviously the reservoir is initially not entangled .then its entanglement gradually increases , and at time we find .when time goes to infinity we note that is , the final reservoir entanglement equals the initial qubit entanglement .[ figse ] plots the behavior of as a function of for various values .we note that in the region when as is shown in fig .[ figse ] ( a ) and ( b ) , starts from zero entanglement , reaches a maximum and then evolves to a finite value in the end . 
in the opposite region of as shown in fig .[ figse ] ( c ) and ( d ) , it increases directly and irreversibly to the value .to summarize , the qubit entanglement starts at a finite value and decays completely to zero entanglement , while starts from no entanglement and eventually inherits the exact amount of the qubit s initial entanglement .this is exactly equal to the moon entanglement , so one can see that the unknown moon s entanglement ( [ moonentanglement ] ) has jumped into the picture .it is constant itself , but it acts as a kind of buffer to restrict information flow to and from . in another way of speaking , we could say that there is only a certain amount of free " entanglement able to be exchanged , which is determined by the moon . to take a further step and without loss of generality, we now assume for convenience .then we find two equalities connecting each of and to : we note that both and are controlled by the moon in a non - linear way .this control leads to a novel conservation relation between and in a pairwise fashion : although and are time dependent quantities they combine in this way to a constant determined only by the moon entanglement .because of the restriction of the moon entanglement we note from the conservation relation ( [ sponinvariant ] ) that in this region or $ ] , the decreasing of is accompanied by the increasing of as is shown in fig .[ figse ] ( c ) and ( d ) . to see quantitativelythis complementary relation let us take as an example .then the time dependent qubit and reservoir schmidt weights simplify to obviously these two equations for and depend on time in an opposite way .it is interesting to note that the parameter , which represents a collective coupling between the atom and the reservoir modes , is also controlling the two entanglements inversely .this is because of the moon entanglement , which acts as a buffer to both entanglements and but substantially in a opposite way through ( see eqs .( [ sponrestrict_a ] ) and ( [ sponrestrict_a ] ) ) .we remark that for , a similar relation to eq .( [ sponinvariant ] ) can be achieved with only a modification of signs . in this case and are not always complementary any more as fig .[ figse ] ( a ) and ( b ) show for wide regions of .however and still stand in a time - invariant relation similar to ( [ sponinvariant ] ) and are connected only by .spontaneous emission is an example of an irreversible process . 
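the flow just described can be checked with a small numerical toy in which the radiation continuum is collapsed to a single effective mode holding zero or one photon; this keeps exactly the populations that the two schmidt weights depend on. the mixing angle, decay rate and time points below are arbitrary, and the reduction to one effective mode is an illustrative simplification rather than the treatment used in the text.

```python
import numpy as np

def schmidt_weight_bipartite(psi, dims, keep):
    # schmidt weight across the bipartition (keep | rest) of a pure state psi
    # living on subsystems with dimensions dims.
    psi = np.reshape(psi, dims)
    keep = list(keep)
    rest = [i for i in range(len(dims)) if i not in keep]
    c = np.transpose(psi, keep + rest).reshape(int(np.prod([dims[i] for i in keep])), -1)
    lam = np.linalg.eigvalsh(c @ c.conj().T)
    return 1.0 / np.sum(lam ** 2)

theta, gamma = np.pi / 3, 1.0                  # arbitrary mixing angle and decay rate
for t in [0.0, 0.5, 2.0, 10.0]:
    a = np.exp(-gamma * t / 2)                 # excited-state amplitude
    b = np.sqrt(1.0 - a ** 2)                  # collective one-photon amplitude
    psi = np.zeros((2, 2, 2))                  # indices: atom (e,g), field (0,1), moon
    psi[0, 0, 0] = np.cos(theta) * a           # cos(theta) |e, 0, M1>
    psi[1, 1, 0] = np.cos(theta) * b           # cos(theta) |g, 1, M1>
    psi[1, 0, 1] = np.sin(theta)               # sin(theta) |g, 0, M2>
    k_a = schmidt_weight_bipartite(psi, (2, 2, 2), keep=[0])
    k_r = schmidt_weight_bipartite(psi, (2, 2, 2), keep=[1])
    print(t, k_a, k_r)   # k_a decays from the moon value to 1; k_r does the reverse
```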
in this sectionwe will turn to a simple example when the - interaction is reversible , following the jc model .thus qubit is still a two level atom while unit is now simply a single mode lossless cavity .local entanglement dynamics between the atom and the field in the jc model was first studied by phoenix and knight by expressing the entangled atom - field state in terms of the eigenstates and eigenvalues of both the field and atomic operators , and revival physics played a key role in the dynamics .later , it was shown by son , et al ., that entanglement between two non - interacting qubits can be generated through the qubits local interactions with their corresponding jc cavities that are initially in an entangled two - mode squeezed state .lee , et al ., showed that there is actually entanglement reciprocation between the two qubits and their corresponding continuous - variable systems such as jc cavities .recently , the jc model was revisited by yna , et al ., , and by sainz and bjrk , to illustrate the entanglement sudden death phenomenon , as well as to track the entanglement flow , and conservation relations were found by both groups . herewe will continue to track the entanglement information in the jc dynamics , but in addition will account quantitatively for the role of the non - interacting unknown moon .the jc hamiltonian is given as where , are the usual pauli matrices describing the two level atom , while and denote the standard creation and annihilation operators for the single mode cavity .the atom and cavity frequencies are and , respectively , and is the coupling constant between the atom and the cavity . for conveniencewe take the resonant condition when . now from the generic expression ( [ initial ] ) the initial state for the jc model can be written as where and are the excited and ground state of the two level atom , and we have defined as the zero photon state of the cavity . from the jaynes - cummings treatment will have the time dependent state as where means that there is one photon in the cavity .if we follow the same schmidt calculations as in the last section , we will have entanglement between atom and the rest of the universe as ^{2}+1}. \label{jck_a}\ ] ] that is , we find expression ( [ sponk_a ] ) again , except that has been replaced by .this is just the replacement of one formula for excited state probability by another , as the nature of the amplitude decay channel requires .while the initial qubit entanglement again has a value equal to the moon entanglement ( [ moonentanglement ] ) , in the jc dynamics has a period of instead of decaying irreversibly as in the spontaneous emission case .we see at the half period time the atom loses all of its entanglement : .then it evolves to the initial value at .[ figjc ] shows this periodic behavior of plotted as a function of at different values .recovery of atom - field and atom - atom entanglement in the jc dynamics was already shown previously in refs .however , here our result shows a different type of entanglement recovery , because denotes another type of entanglement , this time including the unspecified non - interacting moon as well as the cavity . the cavity entanglement , ^{2}+1 } , \label{jck_a}\ ] ]is also predictable if we look to ( [ sponk_a ] ) and see that should be converted to because both are expressions for the ground state probability .we note that is also periodic .it is initially not entangled with its remainder ( ) , and then increases with time . 
at , we have , and at the half period time we see that exactly the same as entanglement . again fig .[ figjc ] illustrates the periodic behavior of as a function of at various values .when compared with the behavior of the qubit entanglement we see that the amount of entanglement has been completely transferred from to at the half period time . after this , however , the entanglement is repeatedly transferred back and forth between and .this is the major difference from the spontaneous emission case where the reversible process is absent .again we work in the sector when for convenience and see that both of the entanglements and are restricted by the constant moon entanglement in the following non - linear time - dependent way : this periodic time dependent control of the two schmidt weights by the moon entanglement is different from the spontaneous emission case .however , the two equalities also lead to the same generic entanglement conservation relation therefore in the jc model case the time dependent schmidt weights and are also restricted by the constant moon entanglement .we see clearly here that the decrease of is accompanied by the increase of and vice versa as is shown in fig .[ figjc ] ( c ) and ( d ) . to show quantitatively we again take as in fig .[ figjc ] ( c ) to follow this complementary relation of the two entanglements : which are the exact analogs of ( [ sponcomplk_a ] ) and ( [ sponcomplk_a ] ) .we now move to a condensed matter context and take a final example when the - connection is a heisenberg exchange interaction or spin - spin interaction . herequbit system is a spin one - half particle while unit is now an -spin xy chain ( see fig .[ figxy0 ] ) , a simplified model for strongly correlated materials such as ferromagnets , antiferromagnets , etc .the first studies of entanglement flow in spin chains focused on few - qubit chains ( ) and the state , and also entanglement dispersion in long chains ( ) .amico , et al . studied the propagation of a pairwise entangled state through an xy spin chain , and found that singlet - like states are transmitted with higher fidelity than other maximally entangled states . herewe also focus on the entanglement dynamics , not to transport the entanglement , but to track the information flow by taking into account the role of the entangled moon .the interaction hamiltonian of our scheme is given as where are the usual pauli matrices describing the spins , is the coupling constant between spin and the first spin of the xy chain , and is the coupling constant between the nearest neighbor sites inside the xy chain .now we take for convenience .this model can be transformed through a jordan - wigner transformation into a set of free fermions ( see for example ref . ) and thus can be solved exactly . from the perspective of the jordan - wigner transformation , the xy model is equivalent here to a free fermion hopping model or tight - binding model describing phonon systems . for the xy hamiltonian the exact eigenstatesare given as \left\vert\downarrow\right\rangle and are the spin up and down states for our qubit , and , are the states of the xy spin chain with indicating there is a spin up at the site while all the rest are in the spin down state and meaning all the sites from site to site are in the down state . 
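since the xy chain conserves the number of up spins, the dynamics relevant here live entirely in the single-excitation sector, where the hamiltonian reduces ( up to a convention-dependent factor absorbed into the couplings ) to a small tight-binding hopping matrix. the following sketch diagonalizes that matrix to obtain the survival amplitude of the excitation on the qubit site; its eigenvalues and eigenvectors play the role of the eigenstates and eigenvalues discussed next. the chain length, couplings and mixing angle are arbitrary, and the reduced-density-matrix form used for the qubit schmidt weight follows the structure of the earlier sections.

```python
import numpy as np

def survival_amplitude(n_sites, j0, j, t):
    # amplitude for the single up spin to remain on the qubit site: the xy chain in
    # the one-excitation sector reduces to a tight-binding (hopping) matrix.
    dim = n_sites + 1                     # qubit plus chain sites
    h = np.zeros((dim, dim))
    h[0, 1] = h[1, 0] = j0                # qubit -- first chain site
    for i in range(1, n_sites):
        h[i, i + 1] = h[i + 1, i] = j     # nearest neighbours inside the chain
    evals, evecs = np.linalg.eigh(h)
    w = np.abs(evecs[0, :]) ** 2          # overlaps of the qubit site with eigenmodes
    return np.sum(w * np.exp(-1j * evals * t))

theta = np.pi / 3                         # arbitrary mixing angle
for t in np.linspace(0.0, 12.0, 5):
    f = survival_amplitude(n_sites=4, j0=1.0, j=1.0, t=t)
    p = np.abs(f) ** 2                    # probability the excitation is still on the qubit
    lam = np.array([np.cos(theta) ** 2 * p, 1.0 - np.cos(theta) ** 2 * p])
    k_a = 1.0 / np.sum(lam ** 2)          # qubit schmidt weight, as in the earlier cases
    print(round(float(t), 2), round(float(p), 4), round(float(k_a), 4))
```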
here represent the eigenstates .the corresponding eigenvalues are then the evolution operator can be written as again from the generic initial state ( [ initial ] ) we have here for the xy model where we have defined to represent all the spins in the xy chain that are in the down state . then the time dependent state can be achieved as where we have defined since and are complicated expressions for arbitrary number , here we take as an example to illustrate their properties . then we have .\end{aligned}\ ] ] we note that the five cosine functions have five different periods and the ratio of any two periods is irrationaltherefore the five quantities will not have a common period , which means that will oscillate all the time but without a fixed period .now we define and note that it can vary from to .there are infinitely many solutions for as a function of time , say , with .if we follow the same schmidt calculations as in the last two sections we will find the schmidt weight between the qubit spin and the remainder as ^{2}+1}. \label{xyk_a}\ ] ] we note that the qubit entanglement is also oscillating as determined by . as the amplitude channel requires , it starts at the familiar same value , and in this example evolves to zero entanglement at the time points .after each of these zeros , will increase to a local maximum point and then decay to again at the next time point . fig .[ figxy ] illustrates this particular behavior of as a function of at different values of .such aperiodic behavior is intermediate to the previous two examples showing irreversible decay and periodic oscillation , and is expected on the basis of the irrationally related spin - chain eigenfrequencies .now we come to the xy chain entanglement representing the entanglement between the chain and its remainder , i.e. , the end spin and the moon .it is related to in the usual way .we just replace by and obtain : ^{2}+ 1}. \label{xyk_a}\ ] ] so the chain entanglement also oscillates with .in general , the entanglement will be transferred back and forth between and with the upper limit of entanglement that can be transferred just as the previous two cases .as expected , restrictions on entanglement flow follow the previous examples . in the regime can simply repeat relations ( [ jcrestrict_a ] ) and ( [ jcrestrict_a ] ) by replacing with : \cos ^{2}\theta , \label{xyrestric_a}\ ] ] naturally , the same non - linear conservation relation ( [ jcinvariant ] ) is recovered , and again the and entanglements behave complementarily , this time as a function of as shown in fig . [ figxy ] ( c ) and ( d ) .in summary we have studied entanglement information flow from the perspective of a dynamical qubit in an initially mixed state , a state that was generated by an entanglement associated with a prior process , which we can loosely assign to an experimental preparation stage . 
using schmidt - decomposition rather than master - equation analysis , we derived conservation statements for the separate degrees of quantum entanglement of the qubit and of its interacting reservoir , and showed their relation to the entanglement of the unspecified background party we called the moon , which was initially entangled but at ceased to interact with either the qubit or its environment .the new forms of entanglement conservation relations are nonlinear connections between quantum memories , dependent on the restrictions implied by amplitude flow channel dynamics .one can say that the channel s enforcement of excitation number conservation in the qubit - reservoir interaction is the root cause of the entanglement and its flow .this is closely analogous to the continuous entanglement between transverse momenta in spontaneous parametric down conversion , which arises from the enforcement of simultaneous momentum and energy conservation on the two - photon amplitude in the creation of the signal and idler photons .although unspecified , and ignored in previous open system analyses , the moon can be assigned responsibility for the initial impurity of the qubit state . the three - part total universe ( + + ) was bi - partitioned three ways in order to evaluate the respective schmidt weights , as indicators of entanglements in three specific interaction models ( spontaneous emission , jc interaction , and xy spin chain ) .these were analyzed to illustrate the flow of quantum information in different contexts , including both discrete and continuous versions of the reservoir system labelled .although the influences on individual entanglements differ in various ways , the amplitude flow common to them produces entanglement conservation relations in the same form .one can say that the non - specified moon retains a kind of influence on the system of interest whether we are looking " ( through interaction ) at it or not .the qubit can feel , through the entanglement conservation relation but not through interaction , that the moon is there . there can be interesting consequences when the moon also has a significant dynamical evolution , although still not interacting with , because its entanglement with can then be assigned to part rather than all of it .this discussion will be undertaken elsewhere .finally we would like to comment on the inverse dependence of and on the interaction parameters as discussed at the end of our three examples .it will be particularly interesting if , for some systems , the interaction constant can be adjustable ( e.g. , the coupling constant of a spin - spin interaction ) . especially in the thermodynamic limit interesting phenomena such as quantum phase transitions arise from changes of the interaction parameter .the behavior of the entanglements in the vicinity of the critical point will be extremely interesting ( see for example and references therein ) .m. a. nielsen and i. l. chuang , _ quantum computation and quantum information _( cambridge univ .press , 2000 ) ; and j. preskill , _ quantum information and computation _ , caltech lecture notes for ph219/cs219 .a. k. rajagopal and r. w. rendell , , 022116 ( 2001 ) ; k. zyczkowski , p. horodecki , m. horodecki , and r. horodecki , , 012101 ( 2001 ) ; t. yu and j.h .eberly , , 193306 ( 2002 ) ; p. j. dodd and j. j. halliwell , , 052105 ( 2004 ) . s.j.d . phoenix , and p.l .knight , , 6023 ( 1991 ) .s. bose , i. fuentes - guridi , p. l. knight , and v. vedral , ,050401 ( 2001 ) . w. son , m.s .kim , j. 
lee , and d. ahn , , 1739 ( 2002 ) .m. yna , t. yu , and j.h .eberly , , s621 ( 2006 ) .m. yna , t. yu , and j.h .eberly , , s45 ( 2007 ) .v. subrahmanyam , , 034304 ( 2004 ) .qian , y. li , y. li , z. song , and c.p .sun , , 062329 ( 2005 ) .s. yang , z. song , and c. p. sun , , 022317 ( 2006 ) . t. s. cubitt , f. verstraete , and j. i. cirac , , 052308 ( 2005 ) .s. chan , m. d. reid , and z. ficek , arxiv : 0811.4466 ( 2008 ) .p. jordan and e. wigner , z. phys .* 47 * , 631 ( 1928 ) ; s. katsura , phys . rev . * 127 * , 1508 ( 1962 ) ; n. nagaosa , _ quantum field theory in strongly correlated electronic systems _( springer - verlag , berlin , 1999 ) .s. sachdev , _ quantum phase transitions _( cambridge university press , cambridge , uk , 2000 ) .a. osterloh , l. amico , g. falci , and r. fazio , , 608 ( 2002 ) .qian , t. shi , y. li , z. song , and c.p .sun , , 012333 ( 2005 ) .
|
we report an approach to quantum open system dynamics that leads to novel nonlinear constant relations governing information flow among the participants . our treatment is for mixed state systems entangled in a pure state fashion with an unspecified party that was involved in preparing the system for an experimental test , but no longer interacts after . evolution due to subsequent interaction with another party is treated as an amplitude flow channel and uses schmidt - type bipartite decomposition of the evolving state . we illustrate this with three examples , including both reversible and irreversible information flows , and give formulas for the new nonlinear constraints in each case .
|
discrete vortex methods ( dvm s ) are lagrangian methods for solving for rotational fluid flow in which the vorticity field is partitioned into a finite number of discrete vortex elements , and the evolution of the flow field is determined by following the motion of the vortex elements . one advantage of dvm s is that the computational effort is applied only in the regions of most interest . also , for flow past bodies , the far field conditions are automatically satisfied as the vorticity field will decay to zero away from the body , avoiding the problems that can occur when truncating the domain in grid based methods . for two - dimensional flow , there is a single component of vorticity , and for inviscid flow , the elements are convected with constant strength at the local velocity of the fluid . for viscous flow , some method of modelling the viscous diffusion must be added to the numerical scheme . a number of schemes have been developed . one of the first applications of a dvm for viscous flow used a random walk to model the viscous effects . a random walk is simple to apply but has relatively low resolution and produces noisy results . a more sophisticated method is the particle strength exchange ( pse ) method , introduced in , which models the viscous effects using integral operators . a related method is the vorticity redistribution method of shankar and van dommelen which , like pse , involves redistributing circulation between the elements , but does not require a regular mesh , introducing elements locally as required . a more recent redistribution , based on time dependent gaussian cores , is given by . a description of many aspects of vortex methods can be found in the book by cottet and koumoutsakos . also , a comparison of four viscous methods which do not require the introduction of a grid can be found in . many papers on vortex methods use the total force ( coefficients ) to assess the accuracy of the method but do not present details of the pointwise surface forces . these are frequently noisy due to the irregular distribution of vortex particles , as can be seen in , who use a random walk for the diffusion . in fact , the total force can also be noisy and require smoothing , as in . in one of the benchmark papers using vortex methods , koumoutsakos and leonard do present plots of the surface pressure and vorticity , although they observe high frequency oscillations in the surface vorticity . koumoutsakos and leonard use the pse method for diffusion . pse involves the introduction of a grid to the scheme with frequent remeshing in order to maintain the accuracy of the calculation . it appears that the order introduced into the method by the remeshing helps smooth out the spurious high frequency oscillations in the forces that are found in some other methods . hence , a possible way of reducing the noise would be to remesh every time step with a pse method .
however , an alternative is to combine the remeshing and diffusion operation in single step .this can be done by making the redistribution of the circulation satisfy certain constraints which have a physical meaning .this is a variation of the redistribution method of shankar and van dommelen , but rather than redistributing circulation between vortex elements , the circulation contained in each discrete vortex element is independently transferred onto a small set of neighbouring nodes .the solution for this redistribution problem can be written explicitly as a set of algebraic equations .the result is a simple , efficient scheme which maintains a regular distribution of vortex elements .the surface force can be calculated directly from local variables , showing very good agreement with test data .also , since this scheme works with the circulation of a vortex element , independent of the vorticity distribution in the element , there is no requirement for overlap of cores , as in the pse method .the paper is structured as follows ; first , an outline of the basic method is presented .this is followed by a description of the redistribution scheme .means of calculating the body forces are then discussed .a brief description of the finite - spectral code used to generate test data is given .test results are then presented for impulsively started flow past a cylinder and a square .finally , some conclusions are given .the two - dimensional incompressible navier - stokes equation in vorticity form is where are the velocity components in cartesian coordinates , is the time , the vorticity , the reynolds number , and the two - dimensional laplace operator .the velocity is normalised by the free stream velocity , the length scales by a characteristic length , and time by .the reynolds number is given by with and fluid density and its dynamic viscosity .consider an individual vortex element centred on where gives the coordinates in complex form , with a vorticity distribution where , where so that the distribution function has unit circulation . the vorticity fieldis represented by discrete vortex elements so that where is the strength of vortex .the velocity generated by the vortex elements is given by where a number of different functions can be used for the vorticity distribution . in general , point vortices given by a delta function are not used .instead , a smooth distribution which does not have the numerical problems associated with point vortices is adopted .a standard distribution , used here , is the gaussian vortex , where is a measure of the core size of a vortex .this gives \label{lvv}\ ] ] as usual , the boundary conditions at the surface of the body are satisfied by the use of a vortex panel method .both standard straight , constant strength , vortex panels and the higher order panels given in in equation 16 ) is from a definite integral and should be the difference between the two terms not the sum of them . ]were investigated .the latter method uses overlapping curved panels with a linear distribution of vorticity along the panels .this allows an accurate representation of a smooth body and produces a velocity distribution which is singularity free . 
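as a point of reference , the velocity induced by a set of gaussian - core elements of the type described above can be evaluated directly from their positions and circulations ; the sketch below assumes a standard lamb - oseen type core ( the exact core normalisation used in the paper is not reproduced ) , and the function name is purely illustrative .

```
import numpy as np

def induced_velocity(targets, centres, gamma, sigma):
    """velocity induced at the target points by gaussian-core vortex elements.
    targets: (m, 2), centres: (n, 2), gamma: (n,), sigma: scalar core size.
    assumes u_theta = gamma / (2 pi r) * (1 - exp(-r^2 / sigma^2))."""
    dx = targets[:, None, 0] - centres[None, :, 0]   # (m, n)
    dy = targets[:, None, 1] - centres[None, :, 1]
    r2 = dx * dx + dy * dy
    # u_theta / r, set to zero at an element's own centre where the
    # induced velocity vanishes (avoids a 0/0 at r = 0)
    r2_safe = np.where(r2 > 1e-14, r2, 1.0)
    factor = np.where(r2 > 1e-14,
                      (1.0 - np.exp(-r2 / sigma ** 2)) / (2.0 * np.pi * r2_safe),
                      0.0)
    u = np.sum(-dy * factor * gamma, axis=1)   # x velocity at each target
    v = np.sum(dx * factor * gamma, axis=1)    # y velocity at each target
    return u, v
```

this is only the element contribution to the velocity ; the free stream and the panel contributions discussed above would be added on top of it .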
however , there was little difference in the results for the different panels for flow past a circular cylinder if enough panels were used .the cylinder results ( section [ cylinder_results ] ) presented below use the high order panels , but , because of the sharp corners , constant strength , straight panels were used for flow past a square ( section [ square_results ] ) .the velocity now consists of three components , where is the fluid velocity , is the free stream velocity , is the velocity generated by the vortex elements , and generated by the panels used to satisfy the boundary conditions at the surface of the body .numerically , an operator splitting method is used , with inviscid and viscous sub steps , satisfying and respectively .the equation for the inviscid sub step represents the fact that for two - dimensional inviscid flow vorticity is convected by the flow .numerically , the element vortices are moved at the local fluid velocity , i.e. a second order runge - kutta method is used to move the vortices at each time step a number of methods exist for the viscous sub step ( [ sub_v ] ) .the method developed here is based on the vorticity redistribution scheme of shankar and van dommelen . in this method , at each time the circulation of vortex elements are updated through where represents the fraction of the circulation of vortex transferred to vortex by diffusion during time step .the summation is over a group of vortices local to , the position of vortex .the fractions are calculated to satisfy the following constraints where is the characteristic diffusion distance over time .stability requires . equations ( [ vr_2]-[vr_5 ] ) enforce conservation of vorticity , the centre of vorticity , and linear and angular momentum .the redistribution is performed over all vortices within a distance of vortex , i.e. such that the accuracy of the method depends on the value of and a minimum value of is required for a first or second order solution to exist . was used in .a solution may not always exist , for example if there are less than six vortices within the region ( [ vr_6 ] ) .if no solution is found , new vortices are introduced a distance from the centre ( ) until a solution exists .further details of the method and its theoretical basis can be found in . in this scheme ,all vortex elements satisfying ( [ vr_6 ] ) must be identified .the redistribution problem can then be solved using a linear programming method , for example the revised simplex scheme found in .the redistribution scheme above ( [ vr_1]-[vr_5 ] ) operates by transferring circulation between vortex elements , introducing new elements as required .however , an alternative is to transfer the circulation onto a set of nodes at known positions . consider a one - dimensional unsteady diffusion problem , with a uniform grid with grid step .suppose there is a vortex of strength placed at where , and .let then a solution of the redistribution equations is where circulation is transferred to the grid point .all other are zero .a second solution is given by with for the other , and . in principle either of these solutions could be used .however , consider the case when the element is midway between grid points , i.e. . on physical grounds, the redistribution would be expected to be symmetric , but both of the solutions are asymmetric . a simple way of producing a symmetric solution in this case is to use the average of the two solutions , i.e. 
, .this gives the same solution as would be obtained by using the four points to and assuming symmetry ( and ) .more generally , a linear combination of the two basic solutions , ( [ rd2]-[rd4 ] ) and ( [ rd5]-[rd7 ] ) , can be used this produces a symmetric three point solution if the element is at a grid point , and a symmetric four point solution if the element is midway between grid points , with a smooth change between these two extreme cases .it is the simplest solution which satisfies both the redistribution equations and symmetry .some test calculations have been performed for a circular cylinder case using using only three point formula ( ( [ rd2]-[rd4 ] ) for and ( [ rd5]-[rd7 ] ) for ) , but these produced undesirable short scale variations in the solution when a vortex element moved across a midpoint and the redistribution changed bias .this did not occur when using the combination ( [ rd8 ] ) .stability requires that the redistribution fractions are positive .the most restrictive case when using ( [ rd8 ] ) is when the element is at a grid point .the condition is then as simple test problem is that of one dimensional diffusion starting from a point distribution of vorticity at at .figure [ 1d_test ] shows the analytic and redistribution solutions for the vorticity for a test case at with , and total circulation of .the calculation was started from the analytic solution at .the grid step was and the time step was .two different grids were used in the calculation .there was an offset between the grids and they were used alternatively at successive time steps .a number of different offsets , ranging from to ) were tested , and the good agreement with the analytic solution shown in figure [ 1d_test ] is typical . with , and .the solid line is the analytical solution and the symbols are from the redistribution scheme . ]consider now the two dimensional case in with grid and and a vortex element located at where and .the one dimensional redistribution in the direction is given by where and the and are obtained as in ( [ rd2]-[rd7 ] ) .the two dimensional redistribution scheme is given by these weights satisfy all of the constraints in the original redistribution scheme ( [ vr_2]-[vr_5 ] ) .this solution of the redistribution problem satisfies shankar and van dommelen condition for the existence of a solution ; the simplest two - dimensional solution has a nine point stencil and the stability condition implies that the corner points of the stencil must be a distance of at least from the centre point .figure [ 2d_test ] shows typical constant vorticity contours for the two dimensional diffusion problem .again , two offset grids were used .the contours shown are from the numerical solution . at the scaleshown they are identical to the contours from the analytic solution .with , and . from the centre ,the contours are for 0.8 , 0.6 , 0.4 , 0.2 , 0.1 , and 0.025 . ]the standard case of flow past a circular cylinder will be used as a test for the method . the obvious grid to use in this case is one using polar coordinates . 
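before moving to the body fitted grids below , the cartesian node weights described above can be reconstructed by solving the moment conditions directly ; this is only a sketch , and in particular the blending factor between the two basic three point solutions is an assumption , since the explicit formulas are not reproduced in the extracted text .

```
import numpy as np

def three_point_weights(offsets, beta):
    """solve the three moment conditions (conservation of circulation, zero
    first moment about the element, second moment equal to beta) for
    fractions on nodes at the given offsets (in units of the grid step)."""
    A = np.vstack([np.ones(3), offsets, offsets ** 2])
    return np.linalg.solve(A, np.array([1.0, 0.0, beta]))

def grid_weights_1d(a, beta):
    """fractions on the four nodes j-1, j, j+1, j+2 for an element at
    fractional position a in [0, 1) past node j, with beta = 2*nu*dt/h**2.
    blending the two basic solutions with factor a gives a symmetric result
    at a = 0 and a = 1/2; the paper's own blending choice is not shown."""
    f_left = three_point_weights(np.array([-1.0, 0.0, 1.0]) - a, beta)   # nodes j-1, j, j+1
    f_right = three_point_weights(np.array([0.0, 1.0, 2.0]) - a, beta)   # nodes j, j+1, j+2
    w = np.zeros(4)
    w[0:3] += (1.0 - a) * f_left
    w[1:4] += a * f_right
    return w

def grid_weights_2d(ax, ay, beta):
    """tensor product of the one-dimensional weights: a 4 x 4 patch of
    fractions satisfying all of the two-dimensional moment conditions."""
    return np.outer(grid_weights_1d(ax, beta), grid_weights_1d(ay, beta))

# beta must be of order one for the fractions to stay positive (stability);
# e.g. with h = 0.02, nu = 0.01 and dt = 0.01, beta = 2*nu*dt/h**2 = 0.5
w = grid_weights_2d(0.3, 0.7, 0.5)
print(w.sum())   # 1.0: the element's circulation is conserved
```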
however, the original redistribution equations must be converted to the appropriate form .transforming between polar and cartesian coordinates in standard form , , write substituting ( [ polar_1 ] ) into ( [ vr_3]-[vr_5 ] ) and expanding , while retaining all terms ) or greater , produces the constraints equations ( [ polar_2]-[polar_5 ] ) are analogous to the original constants ( [ vr_3]-[vr_5 ] ) with the cartesian lengths replaced by the polar ones ( and ) apart from the linear radial constraint ( [ polar_2 ] ) which has a nonzero right side .this term arises from the in the polar laplace operator ; setting the right side of ( [ polar_2 ] ) to zero would produce a nonphysical result in which the diffusion operator is .the redistribution formula for the radial one dimensional problem are where the radial grid is given by , , and the vortex element is at where . the second redistribution solution involving the points , and follows in the obvious manner. the azimuthal redistribution solution is obtained from ( [ rd2]-[rd6 ] ) with the lengths replaced by . again , linear combinations of the redistribution solutions are formed , and the two dimensional redistribution scheme obtained as in ( [ rd11 ] ) . above the redistribution methodhas been presented for cartesian and polar coordinates system .the method extends to other , more general , systems .for example , for a transformation given by , the condition for first moment in becomes where and the other equations are obtained by replacing by .further , the redistribution problem can formulated for a general grid with fixed nodes by calculating the appropriate terms , a procedure equivalent to calculating the metric terms scaling the derivatives in the diffusion operator in a finite volume method .the full scheme is a fractional step algorithm consisting of the following steps 1 .redistribution onto a fixed grid over a time step of .convection of the vortex elements using the two - step runge - kutta scheme ( [ rk_step1],[rk_step2 ] ) .3 . redistribution onto a fixed grid over a time step of a method for diffusing vorticity from the surface where it is created must be incorporated into this scheme .a number of methods can be found in the literature .the simplest is to create new vortex elements a small distance off the boundary , as in .in contrast , solve a diffusion problem using the flux of vorticity from the surface as a boundary condition . in ,the redistribution method is used to diffuse vorticity from the vortex sheet on the boundary into the interior .the last approach is the one used here , in the simplest manner consistent with the algorithm .the circulation from the vortex panels created at the start of step 2 is placed on a set of points on the boundary ( the control points used to calculated the strengths of the vortex panels ) , and these are used as new vortex elements added to the redistribution of step 3 .all circulation distributed across the boundary during the redistribution steps is reflected back across the boundary , ensuring conservation of circulation and imposing a no flux boundary condition .the standard way to calculate the lift and drag with a dvm is to use the impulse ( see e.g. ) - where and are the drag and lift normalised by where is a reference length . 
the summation is performed over the entire domain for all circulation carrying elements .the surface pressure can be related to the strength of the vortex panels through where is the circulation carried by the relevant portion of the wall .the wall shear stress is obtained by using a finite difference formula with velocity components evaluated at fixed points near the surface . for the circular cylinder ,the redistribution grid has where is the radius of the cylinder .the shear is calculated using the velocity at midpoints through the one sided , second order formula where is the azimuthal velocity .there are a large number of papers which use flow past a circular cylinder as a test case .however , in most of these only the lift and drag ( coefficients ) are presented . to providedetailed data for comparison , a finite difference - spectral ( fds ) code was used .the streamfunction - vorticity formulation is used , with governing equations the vorticity transport equation ( [ ns ] ) , and the poisson equation for the streamfunction fourier modes were used in , and second order central difference formula in the radial direction .a one sided backwards difference formula , similar to ( [ ss_force ] ) , was used for the time derivative , except for the first time step where a backwards euler scheme was used .the code is fully implicit , iterating to obtain the solution at each time step .the radial grid was stretched to give a fine grid near the cylinder , and place the outer boundary of the computational domain a long way from the surface .the non - linear terms were handled in the usual pseudo - spectral manner . with an impulsive startthe boundary layer grows as .this scaling was used for the earlier part of the computation , with the calculation was switched to a fixed grid at , using the radial distribution from ( [ t_scale_1 ] ) at this time .this produces a relatively simple but efficient code in which high accuracy can be obtained by using a large number of fourier modes and radial grid points and a small time step .grids of up to 1024 complex fourier modes , 2000 radial points , and a time step of were used for the data presented below .grid independence was checked for all reynolds numbers . on the surface of the cylinder which can be used to calculate the surface pressure using a reference value of zero at the front of the cylinder .this equation is analogous to ( [ press_force ] ) for the dvm .both relate the flux of vorticity from the surface to the pressure gradient .the results produced by this code compare well with those found in other high resolution simulation ( e.g. 
) .also , they agree with the short time series solutions given by .as a test case for the effects of the numerical parameters and grid on the accuracy of the solution , the drag for the flow past an impulsively started cylinder for short time will be used .lengths are scaled or the diameter of the cylinder so that the surface of the cylinder is at , the vorticity is scaled by where is the free stream velocity , and the time is scaled by .the reynolds number is where is the kinematic viscosity .a body fitted polar grid is used in the region .this is embedded in a uniform cartesian mesh for .the inner grid is arranged so that the surface of the cylinder falls midway between radial grid points , with where is the radial grid step , so that the surface is at .azimuthally , the grid is placed at uniformly spaced points with grid step where is the number of vortex panels .the end of the vortex panels are at , with the control points for the evaluation of the boundary velocity at . the method does not explicitly allow for the behaviour for small time , so the solution can not be expected to be accurate over the first few time steps .however , with an appropriate choice of parameters , solutions which achieve high accuracy after a few time steps can be obtained .there are six numerical parameters which must be chosen .the grid steps in and for the inner grid , the grid step for the outer grid ( a square grid is used here although this is not required ) , the region for the inner grid ( ) , the time step and the core size .the maximum time step is fixed by the stability of the redistribution scheme .since the algorithm has two redistribution substeps over time , the stability condition becomes , or , where is the smallest of the three grid steps , and .the effects of the core size were investigated in , and they concluded that taking where is the ( average ) length of a vortex panel was a suitable choice .this works well here also , giving .the effect of the grid on the solution was investigated by fixing the number of vortex panels and varying .figure [ drag_re550_400 ] shows the drag calculated from the impulse ( [ imp_force ] ) for short time for an impulsive start with , 400 panels , , and , and .for the smallest value of , .apart from the first few steps , the middle value ( ) gives good agreement from the drag for the fds scheme , while for the smaller value ( ) the drag approaches that from the fds scheme from below .for , the drag is too large . for clarity , only every fourth point is shown for the solutions for and .all points are used for , and no filtering or smoothing has been applied for this figure . the smoothness is typical of the results obtained when using a body fitted redistribution mesh. cylinder at with 400 vortex panels , , and : dashes , ; , ; , . solid line , fds solution . ] figure [ impulse_re550_400 ] shows the streamwise component of the impulse , , for the same cases as in figure [ drag_re550_400 ] . for an impulsive start for flow past a cylinder, the initial condition at is that from potential flow , with a vortex sheet of ( nondimensional ) strength on the surface , giving . for , the impulse should decrease smoothly from the initial value . 
for , there is some ( expected ) irregular behaviour for the first few time steps , but the impulse is generally well behaved .in contrast , the smaller and larger values of produce a jump in the impulse , followed by a relatively fast decrease for and an increase for , consistent with the behaviour of the drag ( figure [ drag_re550_400 ] ) . with 400 vortex panels , and : , ; , ; , ]calculations were performed for with different time steps ( smaller for all three values of and larger for and ) , but the behaviour of the impulse was similar to that shown in figure [ impulse_re550_400 ] with an overshoot for and an overshoot for .the impulse was examined for a large number of other runs with reynolds numbers varying from 150 to 9500 and a range of time steps , and its behaviour for short time provides a useful diagnostic as to the quality as the grid with regard to the ratio of the grid steps .this test could be used for other problems in which there is no reliable solution available to compare with , e.g. for flow past a square ( section [ square_results ] below ) .for all the reynolds numbers studied , a ratio of approximately 2/5 for the radial to azimuthal grid ( ) with was found to give an accurate solution provided the time step was small enough .figure [ drag_re550_grid ] shows the drag for runs with , 200 panels and , 400 panels and , and 600 panels and , i.e. maintaining the same scaling as for 400 panels in figures [ drag_re550_400 ] and [ impulse_re550_400 ] .clearly , the grid is too coarse to provide a good match with the fds solution very early in the run , but does give a reasonable value for . as above, there is good match with 400 panels , and a very close match with 600 panels .cylinder at with ; solid line , fds solution ; , 200 panels ; , 400 panels ; , 600 panels . ]the outer grid step and were also varied to ensure they did not significantly affect the results shown in figures [ drag_re550_400]-[drag_re550_grid ] .tests were also performed with other reynolds numbers to ensure the results presented below are accurate .calculations were performed for impulsively started flow past a circular cylinder using reynolds numbers of 150 , 550 , 1000 , 3000 , and 9500 .these reynolds numbers were chosen as they are commonly used as test cases .a large of amount of test data was generated , showing excellent agreement with the results from the fds code in all cases ( and with data found in other studies ) .representative results are presented for three reynolds numbers ( , 1000 , and 9500 ) , covering three orders of magnitude .figure [ total_drag_re150 ] shows the total drag obtained from the dvm and fds methods for flow with .the numerical parameters are , , ( ) , and . for the dvm , both the drag from the impulse ( [ imp_force ] ) and that from the pressure and wall shear stress ( [ press_force]-[ss_force ] ) are shown .there is excellent agreement between all methods .figure [ drag_components_re150 ] shows the pressure and wall shear stress components of the drag obtained from the dvm and fds schemes .again there is excellent agreement . .line : fds solution .symbols : dvm solution , from the impulse([imp_force ] ) , from the surface forces ( [ press_force]-[ss_force ] ) . ] .upper , pressure component .lower , shear stress component .lines : fds solution .symbols : dvm solution . 
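for reference , the impulse diagnostic discussed above can be computed directly from the element positions and circulations ; the sketch below assumes the usual two - dimensional definition of the linear impulse and ignores the normalisation used for the force coefficients , which is not reproduced in the text .

```
import numpy as np

def linear_impulse(xy, gamma):
    """linear impulse of the vortex system, i = sum_i gamma_i * (y_i, -x_i)
    (the usual two-dimensional definition)."""
    return np.array([np.sum(gamma * xy[:, 1]), -np.sum(gamma * xy[:, 0])])

def force_from_impulse(impulse_history, dt, rho=1.0):
    """body force estimated as minus the rate of change of the impulse,
    f = -rho * d i / dt, differenced over the stored time history; the x
    component gives the drag and the y component the lift."""
    I = np.asarray(impulse_history)          # shape (n_steps, 2)
    dIdt = np.gradient(I, dt, axis=0)
    return -rho * dIdt
```

storing the impulse at every step and differencing it afterwards is what makes the short time behaviour of the impulse such a convenient check on the grid and time step .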
] the surface distribution of pressure and wall shear stress at a single point in time ( ) are shown in figures [ pressure_re150 ] and [ tau_re150 ] .there is very good agreement between the values from the two numerical schemes .figure [ cont_both_re150_t1 ] shows contours of the vorticity at , with the upper half of the plot showing the contours from the dvm method and the lower from the fds scheme .again , there is excellent agreement . against at for . is in radians measured from the rear of the cylinder .the reference value is zero at the front of the cylinder ( ) .symbols : dvm solution , line : finite difference solution . ] at for .symbols : dvm solution , line : finite difference solution . ] at : black , positive ; red , negative .going from the far field towards the cylinder , the contours are for =1,2,5,10,20 .the contours above the axis ( ) are from the dvm code , and those below the axis from the fds code . ]figures [ total_drag_re1000 ] and [ drag_components_re1000 ] show the total drag and drag components for the two methods for flow with .the numerical parameters are , , ( ) , and . as for , there is excellent agreement .also , very good agreement is obtained for the surface forces and contours of vorticity , as can be seen for in figures [ pressure_re1000 ] , [ tau_re1000 ] and [ cont_both_re1000_t3 ] . .line : fds solution .symbols : dvm solution , from the impulse([imp_force ] ) , from the surface forces ( [ press_force]-[ss_force ] ) . ] .upper , pressure component .lower , shear stress component .symbols : dvm solution .lines : finite difference solution . ] against at for . is in radians measured from the rear of the cylinder .the reference value is zero at the front of the cylinder ( ) .symbols : dvm solution , line : finite difference solution . ] at for .symbols : dvm solution , line : finite difference solution . ] at : black , positive ; red , negative .going from the far field towards the cylinder , the contours are for =2,5,10,30,40 .the contours above the axis ( ) are from the dvm code , and those below the axis from the fds code . ]flow with provides a much stiffer test of the method as the flow field is more complex , with short scale variations in the surface forces and a much more complicated vorticity pattern than that found with lower reynolds numbers .again , however , there is very good agreement between the solutions for the two numerical schemes .figures [ total_drag_re9500 ] and [ drag_components_re9500 ] show the total drag and drag components .the numerical parameters are , , ( ) , and .the surface forces at are shown in figures [ pressure_re9500 ] , [ tau_re9500 ] .there is a high level of agreement , in particular , in the wall shear stress on the rear part of the cylinder where the development of relatively small scale but strong structures in the flow lead to large peaks and high values of the gradient along the surface .the complex nature of the flow can also be seen in the vorticity contours ( figure [ cont_both_re9500_t2 ] ) .the values of the wall shear stress at the top and bottom shoulder of the cylinder ( and ) are slightly lower for the dvm method as compared with those for from the fds calculations .however , the radial grid used for the dvm calculation near the surface is coarse as compared to that for fds , and it was found that increasing the resolution gave a better match , but with an increase in computational effort . .line : fds solution . 
symbols : dvm solution , from the impulse([imp_force ] ) , from the surface forces ( [ press_force]-[ss_force ] ) . ] .symbols : dvm solution . lines : finite difference solution . ] against at for . is in radians measured from the rear of the cylinder .the reference value is zero at the front of the cylinder ( ) .symbols : fds solution , line : dvm solution . ] at for .symbols : fds solution , line : dvm solution . ] at : black , positive ; red , negative .going from the far field towards the cylinder , the contours are for =5,10,25,50,100,200 .the contours above the axis ( ) are from the dvm code , and those below the axis from the fds code . ]this run generated approximately vortex element by , a similar number to that used by koumoutsakos and leonard .for comparison , for ( figures [ total_drag_re150]-[cont_both_re150_t1 ] ) , there were approximately vortex elements at .there is relatively little data available for flow past a square as compared to that for a cylinder .however , plots of drag and vorticity contours are given in for an impulsive start with and the square at 15 angle of attack .a similar calculation was performed , using a uniform , body fitted , cartesian grid embedded in a uniform cartesian grid aligned the flow .eighty one constant length vortex panels were used on each side of the body , and a grid step of was used for both the inner outer grids .the change between the inner and outer grids occurred a distance 0.5 from the body .the time step was , giving . with this set of parameters ,the behaviour of the impulse early in the calculation was as expected .the flow at a right angled convex corner is singular , but both the pressure and the vorticity behaving as where is the distance from the corner .hence , there may be large errors in the surface pressure obtained by integrating the panel strengths , and in calculating the lift and drag from the surface forces .figure [ force_square_re100 ] shows the drag and lift obtained from both methods .there is reasonable agreement , given the potential for large errors .the drag is consistent with that given in .a calculation with 41 panels on each side of the square and produced a similar result to that shown in figure [ force_square_re100 ] , but with a larger difference between the drag and lift calculated from the impulse and the surface forces . andthe top two lines are the drag , with the upper one from impulse and the lower from the surface forces .the bottom two lines are the lift , with the lower from the impulse and the upper from the surface forces . ]vorticity contours for for are shown in figure [ cont_square_re100_t20 ] .this figure agrees well with that in , in particular , as regards the position and strength of the vortices downstream of the body . for impulsively started flow past a square at 15 and re=100a step of 0.5 is used with zero omitted . ] in , the corners of the square were rounded to avoid unspecified numerical problems .this was not required for the calculations performed in the current work .all of the calculations described above have used a boundary fitted redistribution mesh near the body and a regular cartesian mesh further away .care has been taken to ensure that the total circulation is conserved .an alternative approach , used in a number of previous studies ( e.g. 
) , is to delete any vorticity that crosses the boundary and rely on the creation process to regenerate the vorticity in an appropriate manner .there are several advantages to this approach .in particular , it allows simulation for flow past bodies of an arbitrary shape by embedding them into a regular grid , and simply deleting any vortex elements which are redistributed into the body .the major disadvantage is that the method will no longer produce high resolution values for the surface forces .in particular , since part of the circulation in the vortex sheet arises from the non conservative nature of the redistribution at the surface of the body , the surface pressure can not be estimated using ( [ press_force ] ) unless the deletion is accounted for .simulations were performed using a circular cylinder embedded in a uniform cartesian grid . following ,the vortex elements created each time step were placed a distance of above the surface so that the maximum velocity generated by a new element occurs at the surface , while all vortex elements within this distance or below the surface were deleted after the redistribution .the drag for a flow with , 400 vortex panels , a time step of and a redistribution mesh with is shown in figure [ total_drag_re150_single ] .also shown is the drag from the fds scheme , showing good agreement with the dvm values .the vorticity distribution for both methods at is shown in figure [ cont_both_re150_single_t1 ] .overall there is very close agreement , although some differences can be seen near the surface .however , and as expected , the distribution of vortex panel strengths and the wall shear stress showed large high frequency oscillations . .line : fds solution .symbols : dvm solution using a single cartesian redistribution grid with , and 400 vortex panels . ] at : black , positive ; red , negative .going from the far field towards the cylinder , the contours are for =1,2,5,10,20 .the contours above the axis ( ) are from the dvm code with a single cartesian redistribution grid with , and 400 vortex panels , and those below the axis from the fds code . ]a further calculation was performed for but with 200 panels and so that there were approximately quarter the number of vortex elements .the drag was almost the same as shown in figure [ total_drag_re150_single ] .the vorticity contours at for both the dvm and fds schemes are shown in figure [ cont_both_re150_single_t1_coarse ] . away from the bodythere is still good agreement between the two solutions but the lack of resolution near the surface with the dvm method is more apparent . at :black , positive ; red , negative .going from the far field towards the cylinder , the contours are for =1,2,5,10,20 .the contours above the axis ( ) are from the dvm code with a single cartesian redistribution grid with , and 200 vortex panels , and those below the axis from the fds code . ]calculations were also performed for flow with .figure [ drag_re9500_single ] shows the drag for the fds method and the dvm scheme with two different resolutions .the better resolved dvm calculation has , and , and the coarser calculation , and . in both cases ,the redistribution grid step was chosen to approximately match the vortex panel length .the drag from the better resolved dvm solution shows good agreement with that from the fds method , except , unsurprisingly , during the very early part of the run .the drag from the coarser calculation also agrees well up to , but not at later times . 
.line : fds solution .symbols : dvm solutions using a single cartesian redistribution grid : , 3000 panels , and ; , 1600 panels , and . ]figure [ cont_both_re9500_single ] shows vorticity contours from both the fds method and the dvm calculation with the finer grid .there is good agreement , but not as close as with the body fitted grid ( figure [ cont_both_re9500_t2 ] ) . a similar comparison with the lower resolution dvm solution showed a similar general structure ( e.g. the position of the large vortices sitting off the surface ) but significant differences at smaller scales , reflecting a lack of resolution . at : black , positive ; red , negative .going from the far field towards the cylinder , the contours are for =5,10,25,50,100,200 .the contours above the axis ( ) are from the dvm code with a single cartesian redistribution grid with 3000 vortex panels , , , and those below the axis from the fds code . ]a simple redistribution scheme for viscous flow has been presented . unlike other redistribution schemes, it operates by redistributing the circulation in a vortex element to a set of fixed nodes rather than transferring circulation between vortex elements . a new distribution of vortex elementscan then be constructed from the circulation on the nodes .a major advantage of the scheme is that the solution of the redistribution problem is given explicitly by a set of simple algebraic equations .a further advantage is that core overlap is not an issue for the viscous solution .the ability of the scheme to produce high resolution solutions has been demonstrated through a series of test problems .accurate estimates for both the total body and pointwise surface forces can be obtained when using a body fitted mesh near the surface of the body .accurate estimates of the total forces on the body can be obtained when using a non conservative scheme with the body embedded in a cartesian mesh .the solution presented for the redistribution problem is the simplest possible which satisfies the equations and has the required symmetry .it would be possible to obtain higher order solutions by extending the the computational stencil and setting higher moments to zero .there is no requirement to use the same mesh throughout the computational domain , and the use of local grid refinement is straightforward .also , the method extends naturally to three - dimensions .
|
a circulation redistribution scheme for viscous flow is presented . unlike other redistribution methods , it operates by transferring the circulation to a set of fixed nodes rather than neighbouring vortex elements . a new distribution of vortex elements can then be constructed from the circulation on the nodes . the solution to the redistribution problem can be written explicitly as a set of algebraic formulae , producing a method which is simple and efficient . the scheme works with the circulation contained in the vortex elements only , and does not require overlap of vortex cores . with a body fitted redistribution mesh , smooth and accurate estimates of the pointwise surface forces can be obtained . the ability of the scheme to produce high resolution solutions is demonstrated using a series of test problems . keywords : discrete vortex method , particle methods , lagrangian methods , viscous flow
|
estimation of covariance matrix and its inverse is an important problem in many areas of statistical analysis . among many interesting examplesare principal component analysis , linear / quadratic discriminant analysis , and graphical models .stable and accurate covariance estimation is becoming increasingly more important in the high dimensional setting where the dimension can be much larger than the sample size . in this setting classical methods and results based on fixed and large are no longer applicable .an additional challenge in the high dimensional setting is the computational costs .it is important that estimation procedures are computationally effective so that they can be used in high dimensional applications .let be a -variate random vector with covariance matrix and precision matrix .given an independent and identically distributed random sample from the distribution of , the most natural estimator of is perhaps where .however , is singular if , and thus is unstable for estimating , not to mention that one can not use its inverse to estimate the precision matrix . in order to estimate the covariance matrix consistently ,special structures are usually imposed and various estimators have been introduced under these assumptions . when the variables exhibit a certain ordering structure , which is often the case for timeseries data , bickel and levina ( 2008a ) proved that banding the sample covariance matrix leads to a consistent estimator .cai , zhang and zhou ( 2010 ) established the minimax rate of convergence and introduced a rate - optimal tapering estimator . el karoui ( 2008 ) and bickel and levina ( 2008b ) proposed thresholding of the sample covariance matrix for estimating a class of sparse covariance matrices and obtained rates of convergence for the thresholding estimators .estimation of the precision matrix is more involved due to the lack of a natural pivotal estimator like . assuming certain ordering structures , methods based on banding the cholesky factor of the inversehave been proposed and studied .see , e.g. , wu and pourahmadi ( 2003 ) , huang et al .( 2006 ) , bickel and levina ( 2008b ) . penalized likelihood methods have also been introduced for estimating sparse precision matrices . in particular , the penalized normal likelihood estimator and its variants , which shall be called -mle type estimators , were considered in several papers ; see , for example , yuan and lin ( 2007 ) , friedman et al.(2008 ) , daspremont et al .( 2008 ) , and rothman et al .convergence rate under the frobenius norm loss was given in rothman et al .( 2008 ) . yuan ( 2009 ) derived the rates of convergence for subgaussian distributions . under more restrictive conditions such as mutual incoherence or irrepresentable conditions , ravikumar et al .( 2008 ) obtained the rates of convergence in the elementwise norm and spectral norm .nonconvex penalties , usually computationally more demanding , have also been considered under the same normal likelihood model .for example , lam and fan ( 2009 ) and fan et al . ( 2009 ) considered penalizing the normal likelihood with the nonconvex scad penalty .the main goal is to ameliorate the bias problem due to penalization .a closely related problem is the recovery of the support of the precision matrix , which is strongly connected to the selection of graphical models . 
to be more specific ,let be a graph representing conditional independence relations between components of .the vertex set has components and the edge set consists of ordered pairs , where if there is an edge between and . the edge between and excluded from if and only if and are independent given . if , then the conditional independence between and given other variables is equivalent to , where we set .hence , for gaussian distributions , recovering the structure of the graph is equivalent to the estimation of the support of the precision matrix ( lauritzen ( 1996 ) ) . a recent paper by liu et al .( 2009 ) showed that for a class of non - gaussian distribution called nonparanormal distribution , the problem of estimating the graph can also be reduced to the estimation of the precision matrix . in an important paper , meinshausen and bhlmann ( 2006 ) demonstrated convincingly a neighborhood selection approach to recover the support of in a row by row fashion .yuan ( 2009 ) replaced the lasso selection by a dantzig type modification , where first the ratios between the off - diagonal elements and the corresponding diagonal element were estimated for each row and then the diagonal entries were obtained given the estimated ratios .convergence rates under the matrix norm and spectral norm losses were established . in the present paper ,we study estimation of the precision matrix for both sparse and non - sparse matrices , without restricting to a specific sparsity pattern .in addition , graphical model selection is also considered .a new method of constrained -minimization for inverse matrix estimation ( clime ) is introduced .rates of convergence in spectral norm as well as elementwise norm and frobenius norm are established under weaker assumptions , and are shown to be faster than those given for the -mle estimators when the population distribution has polynomial - type tails .a matrix is called -sparse if there are at most non - zero elements on each row .it is shown that when is -sparse and has either exponential - type or polynomial - type tails , the error between our estimator and satisfies and , where and are the spectral norm and elementwise norm respectively .properties of the clime estimator for estimating banded precision matrices are also discussed .the clime method can also be adopted for the selection of graphical models , with an additional thresholding step .the elementwise norm result is instrumental for graphical model selection .in addition to its desirable theoretical properties , the clime estimator is computationally very attractive for high dimensional data .it can be obtained one column at a time by solving a linear program , and the resulting matrix estimator is formed by combining the vector solutions ( after a simple symmetrization ) .no outer iterations are needed and the algorithm is easily scalable .an r package of our method has been developed and is publicly available on the web .numerical performance of the estimator is investigated using both simulated and real data .in particular , the procedure is applied to analyze a breast cancer dataset .results show that the procedure performs favorably in comparison to existing methods .the rest of the paper is organized as follows . 
in section [ sec :estimation ] , after basic notations and definitions are introduced , we present the clime estimator .theoretical properties including the rates of convergence are established in section [ sec : rate ] .graphical model selection is discussed in section [ sec : consistency ] .numerical performance of the clime estimator is considered in section [ sec : simu ] through simulation studies and a real data analysis .further discussions on the connections and differences of our results with other related work are given in section [ sec : conclusion ] .the proofs of the main results are given in section [ sec : proof ] .in compressed sensing and high dimensional linear regression literature , it is now well understood that constrained minimization provides an effective way for reconstructing a sparse signal .see , for example , donoho et al . ( 2006 ) and cands and tao ( 2007 ) . a particularly simple and elementary analysis of constrained minimization methods is given in cai , wang and xu ( 2010 ) .in this section , we introduce a method of constrained minimization for inverse covariance matrix estimation .we begin with basic notations and definitions . throughout , for a vector , define and . for a matrix , we define the elementwise norm , the spectral norm , the matrix norm , the frobenius norm , and the elementwise norm . denotes a identity matrix .for any two index sets and and matrix , we use to denote the matrix with rows and columns of indexed by and respectively .the notation means that is positive definite .we now define our clime estimator .let be the solution set of the following optimization problem : where is a tuning parameter . in ( [ c1 ] ), we do not impose the symmetry condition on and as a result the solution is not symmetric in general .the final clime estimator of is obtained by symmetrizing as follows .write .the clime estimator of is defined as in other words , between and , we take the one with smaller magnitude .it is clear that is a symmetric matrix .moreover , theorem [ thn-2 ] shows that it is positive definite with high probability . the convex program ( [ c1 ] )can be further decomposed into vector minimization problems .let be a standard unit vector in with in the -th coordinate and in all other coordinates .for , let be the solution of the following convex optimization problem where is a vector in .the following lemma shows that solving the optimization problem ( [ c1 ] ) is equivalent to solving the optimization problems ( [ o1 ] ) .that is , .this simple observation is useful both for implementation and technical analysis .[ le1 ] let be the solution set of ( [ c1 ] ) and let where are solutions to ( [ o1 ] ) for . then . to illustrate the motivation of ( [ c1 ] ) ,let us recall the method based on regularized log - determinant program ( cf .daspremont et al .( 2008 ) , friedman et al .( 2008 ) , banerjee et al .( 2008 ) ) as follows , which shall be called glasso after the algorithm that efficiently computes the solution , the solution satisfies where is an element of the subdifferential .this leads us to consider the optimization problem : however , the feasible set in ( [ c - c1 ] ) is very complicated . by multiplying the constraint with , such a relaxation of ( [ c - c1 ] ) leads to the convex optimization problem ( [ c1 ] ) , which can be easily solved . figure [ fig : feasible ] illustrates the solution for recovering a by precision matrix ] such that where , is the support of and .the above assumption is particularly strong . 
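a minimal sketch of the column - by - column estimator follows ; it is a plain linear programming reformulation of the constrained problem with a generic solver , not the authors' own implementation or r package , and the tie breaking used in the symmetrisation step is an assumption .

```
import numpy as np
from scipy.optimize import linprog

def clime(S, lam):
    """column-by-column clime: for each unit vector e_i solve
        min ||w||_1  subject to  ||S w - e_i||_inf <= lam
    with w = u - v, u, v >= 0, then symmetrise by keeping, for each pair
    (i, j), the entry of smaller magnitude."""
    p = S.shape[0]
    B = np.zeros((p, p))
    c = np.ones(2 * p)                        # objective: sum(u) + sum(v)
    A_ub = np.vstack([np.hstack([S, -S]),     #  S(u - v) <= e_i + lam
                      np.hstack([-S, S])])    # -S(u - v) <= lam - e_i
    for i in range(p):
        e = np.zeros(p)
        e[i] = 1.0
        b_ub = np.concatenate([e + lam, lam - e])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (2 * p), method="highs")
        if not res.success:
            raise ValueError("linear program infeasible: increase lam")
        B[:, i] = res.x[:p] - res.x[p:]
    keep = np.abs(B) <= np.abs(B.T)
    return np.where(keep, B, B.T)

# usage: S = np.cov(X, rowvar=False); omega_hat = clime(S, lam=0.2)
# (lam would normally be chosen by cross validation)
```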
under this assumption ,it was shown in ravikumar et al .( 2008 ) that estimates the zero elements of exactly by zero with high probability .in fact , a similar condition to for lasso with the covariance matrix taking the place of the matrix is sufficient and nearly necessary for recovering the support using the ordinary lasso ; see for example meinshausen and bhlmann ( 2006 ) .suppose that is -sparse and consider subgaussian random variables with the parameter . in addition to , ravikumar et al.(2008 ) assumed that the sample size satisfies the bound where .under the aforementioned conditions , they showed that with probability greater than , where {ss})^{-1}\|_{l_{1}} ] .the proof of theorem [ th2 ] ( ii ) is similar as that of theorem [ thn-3 ] .we would like to thank the associate editor and two referees for their very helpful comments which have led to a better presentation of the paper .hess , k. r. , anderson , k. , symmans , w. f. , valero , v. , ibrahim , n. , mejia , j. a. , booser , d. , theriault , r. l. , buzdar , a. u. , dempsey , p. j. , rouzier , r. , sneige , n. , ross , j. s. , vidaurre , t. , gmez , h. l. , hortobagyi , g. n. and pusztai , l. ( 2006 ) .pharmacogenomic predictor of sensitivity to preoperative chemotherapy with paclitaxel and fluorouracil , doxorubicin , and cyclophosphamide in breast cancer ._ journal of clinical oncology _ 24 : 4236 - 44 .ravikumar , p. , wainwright , m. , raskutti , g. and yu , b. ( 2008 ) . high - dimensional covariance estimation by minimizing -penalized log - determinant divergence .technical report 797 , uc berkeley , statistics department , nov . 2008 .( submitted ) .zhou , s. , lafferty , j. and wasserman , l. ( 2008 ) .time varying undirected graphs . to appear in machine learning journal ( invited ) , special issue for the 21st annual conference on learning theory ( colt 2008 ) .
|
a constrained minimization method is proposed for estimating a sparse inverse covariance matrix based on a sample of iid -variate random variables . the resulting estimator is shown to enjoy a number of desirable properties . in particular , it is shown that the rate of convergence between the estimator and the true -sparse precision matrix under the spectral norm is when the population distribution has either exponential - type tails or polynomial - type tails . convergence rates under the elementwise norm and frobenius norm are also presented . in addition , graphical model selection is considered . the procedure is easily implementable by linear programming . numerical performance of the estimator is investigated using both simulated and real data . in particular , the procedure is applied to analyze a breast cancer dataset . the procedure performs favorably in comparison to existing methods . * keywords : * constrained minimization , covariance matrix , frobenius norm , gaussian graphical model , rate of convergence , precision matrix , spectral norm .
|
before the financial crisis started in july 2007 with bear stearns default , interest rate desks would essentially use a unique interest rate curve for each currency to price and hedge all derivative products . the crisis showed that big investment banks could default and therefore lending and borrowing activities became severely restricted , no longer allowing for certain hedging strategies . ever since , basis spreads of tenor swaps were no longer negligible . after some debate , today there exists a broad consensus about how this issue must be handled for a single currency setting . regarding different tenors , i.e. 3 and 6 month fra ( forward rate agreement ) , as independent entities , leaves enough room for new variables to fit the arbitrage free equations that traded products must satisfy . in the single currency setting ( see ) , the discount curve for each currency is built out of overnight indexed swaps ( ois ) through a bootstrapping algorithm . forward libor estimation curves for each different tenor frequency are thereafter calculated out of par and tenor swap quotes by another bootstrapping algorithm given the previously calculated discount curve . some theoretical models have been proposed for the dynamics of these curves ( e.g. see , and ) . in the cross currency swap market a similar situation exists . the price of traded foreign exchange ( fx ) forwards and cross currency swaps ( ccs ) can not be exactly derived from arbitrage free models calibrated exclusively to the swap markets on each currency and the fx spot price ( see for a heuristic analytic formula to predict basis tenor and cross currency swaps ) . this is so because fx forward and ccs prices are , among other things , driven by trading flows and , as long as the differences between the `` theoretical price '' and the market price remain low , the expected return will not be enough to compensate the bank capital expenses . unfortunately , for the multi - currency setting , it is not easy to introduce new variables in a similar way as it was done for the single currency setting ( see ) . the paper proposes a generalized framework to price and hedge cross currency swaps . this framework makes it possible to map the entity funding structure while still matching liquidly traded products , i.e. it allows centralizing funding or simply funding each leg in its own currency .
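as an illustration of the single currency building block mentioned above , the ois discount curve bootstrapping can be sketched as follows ; the schedule , day counts and instrument set are deliberately simplified and the function is only indicative , not a production curve builder .

```
def bootstrap_ois_discounts(par_rates, year_fractions):
    """bootstrap ois discount factors from par ois swap rates, assuming one
    quote per pillar and a shared annual fixed schedule.
    par condition at pillar n:  s_n * sum_{k<=n} tau_k * d_k = 1 - d_n."""
    discounts = []
    annuity = 0.0
    for s, tau in zip(par_rates, year_fractions):
        d = (1.0 - s * annuity) / (1.0 + s * tau)
        discounts.append(d)
        annuity += tau * d
    return discounts

# example: flat 2% annual ois quotes for 1 to 5 years
dfs = bootstrap_ois_discounts([0.02] * 5, [1.0] * 5)
```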
starting from the single currency setting in each currency , it is based on the decomposition of customized cross currency swaps as a combination of market quoted currency swaps plus a minor amount of additional cash flows which embed the pricing and hedging inaccuracies which are model - dependent .these minor additional cash - flows can then be ois - discounted ( or funded ) in their own currency or converted to another currency through foreign exchange forwards and thereafter ois - discounted or funded in that currency .this naturally allows choosing the currency perspective from which funding is managed .when cross currency basis spreads tend to zero , this multi - currency pricing converges to the valuation of cash flows in each currency according to the single currency setting .valuation of complex multi - currency exotic products such as callable swaps are beyond the scope of this framework .the paper assumes full collateralization of trades ( see ) and a funding structure between front office desks and the balance sheet of the bank or alco ( asset and liability committee ) at the overnight index - rate at least on one currency , provided that desks do not have a consistent liquidity imbalance between borrowing and lending ( see , and for more information about discounting and collateralized derivative pricing ) .when dealing with uncollateralized pricing and even if these hypotheses are not fulfilled , the collateralized framework will always provide a reference `` risk - free '' pricing on top of which funding and credit value adjustments can be applied ( see , , , , for an overview of these adjustments ) . the paper starts with section [ sec : productdef ] which defines the products involved in the discussion .section [ sec : study ] presents a study of foreign exchange and cross currency markets to illustrate their driving forces and motivate the proposed methodology .section [ sec : perspective ] shows how the funding currency is chosen for each currency .the proposed decomposition method to price cross currency swaps and foreign exchange forwards is presented in sections [ sec : pricefwdstccs ] to [ sec : pricefxfwd ] .section [ sec : hedging ] compares the proposed method with a benchmark using a worked example . finally , section [ sec : conclusions ] concludes .this section describes and defines the products and the notation which will be used across the paper .figure [ fig : frn ] presents the structure of payments of a floating rate note .the issuer of the note receives the notional ( represented by 1 ) on the start date and pays floating interest , ( e.g. 
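the funding choice for these residual cash flows can be made explicit in a small valuation helper ; the function and curve inputs below are hypothetical and only illustrate the two alternatives ( discounting in the cash flow's own currency , or converting through a foreign exchange forward and then discounting in the funding currency ) .

```
def value_cashflow(amount, ccy, pay_date, funding_ccy, ois_df, fx_fwd):
    """present value, expressed in funding_ccy, of 'amount' units of 'ccy'
    paid at pay_date, when the flow is funded in funding_ccy.
    ois_df(c, t): ois discount factor of currency c to time t.
    fx_fwd(c1, c2, t): forward rate to change one unit of c1 into c2 at t."""
    if ccy == funding_ccy:
        # flow funded (ois-discounted) in its own currency
        return amount * ois_df(funding_ccy, pay_date)
    # convert the flow forward into the funding currency, then discount there
    return amount * fx_fwd(ccy, funding_ccy, pay_date) * ois_df(funding_ccy, pay_date)

# toy usage with flat curves and a single forward point (illustrative only)
ois_df = lambda c, t: {"eur": 0.99, "usd": 0.985}[c] ** t
fx_fwd = lambda c1, c2, t: 1.08 if (c1, c2) == ("eur", "usd") else 1.0 / 1.08
pv_usd = value_cashflow(1_000_000, "eur", 1.0, "usd", ois_df, fx_fwd)
```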
libor or euribor index ) , for each period from to , according to the frequency of the note .the fixing of the libor - index on , , is multiplied by the year fraction , , of the period from to .on expiry date , , the holder of the note pays the floating interest of the last period and returns the notional .the floating indices fix their value usually two business days before the start of each period and are paid two business days after the end of the period depending on the conventions .the notation represents the remaining cashflows of a frn that occur beyond and ending on , is a fixed spread added to the floating leg and is the currency in which fixed and floating payments are denominated ( corresponds to usd in figure [ fig : frn ] ) .figure [ fig : ncs ] presents the structure of a non - resettable cross currency swap ( ncs ) .the main purpose of this structure is to transfer funding from one currency to another ( e.g. giving a loan in domestic currency and financing it by borrowing money in a foreign currency plus entering into a cross currency swap ) .this structure will be denoted , where is the domestic currency ( eur in figure [ fig : ncs ] ) , is the foreign currency ( usd in figure [ fig : ncs ] ) , is the exchange rate fixing on date in units of foreign currency per unit of domestic currency , is the fixed spread added to the foreign floating leg and is the spread added to the domestic floating leg . to reduce counterparty exposure due to foreign exchange risk , banks usually trade resettable currency swaps ( ccs ) of figure [ fig : ccs ] instead of the non - resettable structure of figure [ fig : ncs ] .the foreign notional is reset at the end of each period according to the foreign exchange rate at that moment.the upper dotted arrows of 1 eur correspond to the notional returned at the end of each period and the 1 eur lower dotted arrows correspond to the notional received at the beginning of each period starting on that date .both lines are dotted because they cancel each other .this structure will be denoted by . equation ( [ eq : ncs ] ) presents the decomposition of a non - resettable ncs into a sum of two floating rate notes and equation ( [ eq : ncs_ccs ] ) shows how a ncs can be decomposed into a resettable ccs plus a sum of floating rate notes , where denotes the foreign exchange fixing on .under the assumption of no arbitrage , a foreign exchange forward should be priced using a simple cash and carry reasoning according to equation ( [ eq : fxfwd ] ) , where denotes the forward foreign exchange rate from present time , , to expiry , . considering that fixed income desks fund collateralized derivatives at the overnight - index rate , the discount factors of equation ( [ eq : fxfwd ] ) should be apparently calculated according to ois curves . however , foreign exchange markets are also shared by other participants such as money market desks , which will apply the same cash and carry reasoning but with a completely different set of market instruments such as deposit rates . in addition ,resettable currency swap ( ccs ) structures of different maturities may also imply foreign exchange forwards . 
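as a rough numerical companion to equation ( [ eq : fxfwd ] ) and to the comparison reported in figure [ fig : sprfxfwd ] below, the python sketch that follows implies a forward exchange rate from the spot rate and a pair of discount factors via cash and carry, with the exchange rate quoted, as in the text, in units of foreign currency per unit of domestic currency, and then expresses the gap to a quoted market forward in basis points of the spot. all rates, discount factors and quotes below are hypothetical placeholders, not data from the paper.

```python
# cash-and-carry fx forward and a schematic version of the comparison behind
# figure [fig:sprfxfwd]; every number below is a hypothetical placeholder

def fx_forward(spot, df_dom, df_for):
    """forward fx rate in foreign units per domestic unit: investing one domestic
    unit gives 1/df_dom at expiry, while converting at spot and investing abroad
    gives spot/df_for, so no-arbitrage requires forward = spot * df_dom / df_for."""
    return spot * df_dom / df_for

def df_from_deposit(rate, tau):
    # simply compounded deposit discount factor, df = 1 / (1 + r * tau)
    return 1.0 / (1.0 + rate * tau)

spot, tau = 1.3533, 0.5            # usd per eur and a 6 month maturity
fwd_market = 1.3540                # quoted 6m forward (placeholder)
eur_depo, usd_depo = 0.0030, 0.0025
df_eur_ois, df_usd_ois = 0.99880, 0.99900

for label, f in [
    ("deposit-implied", fx_forward(spot, df_from_deposit(eur_depo, tau),
                                   df_from_deposit(usd_depo, tau))),
    ("ois-implied", fx_forward(spot, df_eur_ois, df_usd_ois)),
]:
    print(label, "vs market [bp of spot]:", 1e4 * (f - fwd_market) / spot)
```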
before the crisis , the difference among these three methodologies was negligible .however , at this moment it is not .this means that a careful analysis is needed to know where real prices come from and why .figure [ fig : sprfxfwd ] compares the historical difference in basis points of the foreign exchange forward quoted in the market and those implied from deposit rates , ois curves and ccs for maturities of 3 , 6 and 9 months and 1 year .the first two methods ( deposit and ois ) use equation ( [ eq : fxfwd ] ) with different discount factors coming from either deposit rates ( , where is the deposit rate with maturity ) and ois curves bootstrapped from ois swaps .the third method ( ccs ) estimates forward foreign exchange rates out of a bootstrapping method which calculates foreign exchange forwards implied from forcing zero valuation of market currency swaps .it is clear from figure [ fig : sprfxfwd ] , that for the four maturities , foreign exchange forwards estimated from ois curves show the highest difference with respect to actual traded forwards , followed by those estimated by deposit rates .foreign exchange forwards estimated from ccs are very aligned with market .these plots suggest that the foreign exchange market is shared by participants financed with deposit ( money market ) and overnight rates ( derivatives ) , having those financed with deposits a bigger weight .the reason why ccs and foreign exchange forwards are aligned is because both are zero cost contracts which are collateralized ( this does not happen with deposits ) .the misalignment of the foreign exchange forward rates does not come out of wrong discounting .theoretically , the foreign exchange forward could be arbitraged with the cash and hold argument ( foreign exchange forwards are normally collateralized ) .however , this would imply altering the bank cash balance forcing borrowing huge amounts of one currency and lending them into another .this situation is not sustainable as the alco would not provide funding at the overnight index rate for big amounts of cash consistently in the same direction .therefore , the challenge is to build pricing and hedging models in this situation of misaligned markets .most of the foreign exchange risk from multi - currency fixed income desks comes out of cross currency swaps .sections [ sec : pricefwdstccs ] and [ sec : customccs ] show how customized cross currency swaps can be decomposed in terms of market quoted currency swaps , whose value will be zero , plus a marginal set of additional cash flows which have to be properly valued too .consider the evaluation operator , ] is introduced .it represents the value in currency at time of the cash flow , funded in currency .equation ( [ eq : eindicator2c ] ) shows how this expectation is calculated depending on the funding currency , where is the ois discount factor between and of currency and is the forward exchange rate at time to change one unit of currency to at time ( these forward exchange rates will be calculated according to section [ sec : pricefxfwd ] ) . = df_{t , t}^{c } } & { } & { { \bf e}_t^{c_1 } \left [ { { \bf 1}_{\left\ { {t = t } \right\}}^{c_2 } } \right ] = x_{t , t}^ { - 1 } df_{t , t}^{c_1 } } \\ \end{array } \label{eq : eindicator2c}\ ] ] since current regulation enforces inter - bank trades to be liquidated through clearing counter parties ( ccp ) e.g. london clearing house and collateral interest payments are indexed to overnight rates i.e eonia , fed funds , sonia ... 
the authors suggest valuing these remaining cashflows applying the corresponding overnight discount curve on each currency according to equation ( [ eq : eindicator_f ] ) , where is the spot exchange rate to change one unit of into . = x_t^ { - 1}{\bf e}_t^{c_2 } \left [ { { \bf 1}_{\left\ { { t = t } \right\}}^{c_2 } } \right ] = x_t^ { - 1 } df_{t , t}^{c_2 } } \\ \end{array } \label{eq : eindicator_f}\ ] ] however , some institutions might decide to fund them in another currency .in this situation , cash flows can be exchanged to that currency with foreign exchange forwards and discounted with the ois curve of that currency according to equation ( [ eq : eindicator_d ] ) .this way , the cross currency basis risk is well taken into account for pricing and hedging and the funding currency can be easily chosen . = { \bf e}_t^{c_1 } \left [ { { \bf 1}_{\left\ { { t = t } \right\}}^{c_2 } } \right ] = x_{t , t}^ { - 1 } df_{t , t}^{c_1 } } \\ \end{array } \label{eq : eindicator_d}\ ] ] \approx \frac{df_{t , t_0}^{c_d}}{x_{t , t_0 } } - \sum\limits_{i = 0}^{n - 1 } { \left ( { l_{t , t_i}^{c_f } + s^{c_f } } \right)\tau _ i^{c_f } \frac{df_{t , t_{i + 1}}^{c_d}}{x_{t , t_{i+1 } } } } - \frac{df_{t , t_n}^{c_d}}{x_{t , t_n } } \label{eq : frnf_dom}\ ] ] equation ( [ eq : frnf_dom ] ) shows the valuation of an denominated in foreign currency , , from the domestic perspective ( valued and funded in domestic currency , ) , where is the forward libor index rate of currency at time of the period from to , will represent from now on the exchange rate from domestic to foreign currencies and is the year fraction from to according to the conventions of currency .see that the result of equation ( [ eq : frnf_dom ] ) will not be equal to x_t^{-1} ] .this approximation is not a major problem since is only calculated for interpolation purposes . ] .after the valuation under the single currency setting , it is converted to domestic currency through the spot exchange rate .the resettable structure ( second expectation ) is expressed as a sum of forward starting single period floating rate notes whose notional is dependent on the forward exchange rate calculated according to section [ sec : pricefxfwd ] . equation ( [ eq : suv_scs_cd ] ) only prices the forward starting currency swap correctly when the cross currency basis spread is zero . equation ( [ eq : euvupdown ] ) shows the spread difference between the market and single currency setting ( scs ) .this is the error made by the scs model .however , this difference is not calculated for the actual forward starting ccs from to , but for two similar forward ccs with the same maturity but starting on , the previous date to out of the market schedule , and , the following date to from this schedule .it is clear that the estimation of the error would be very accurate as obtained from ( [ eq : suvmkt_cd ] ) exactly corresponds to a forward ccs derived from direct market quoted swaps .the same happens for . equation ( [ eq : euv ] ) obtains the pricing error of the scs model for the customized forward ccs as a linear interpolation between and .this interpolation has good accuracy as it involves only the pricing error of the scs model in between two well - calculated market points . on the other hand, the scs model calculates the correct spread when the cross currency basis is zero .therefore , the accuracy is not significantly lost by the error interpolation because it is indeed small . 
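the interpolation just described can be written down in a few lines. the sketch below is a schematic reading of equations ( [ eq : euvupdown ] ) and ( [ eq : euv ] ): the scs error is computed at the two bracketing market dates as the difference between the market spread and the scs spread, interpolated linearly in time, and added back to the scs spread of the customized forward ccs. the sign convention and the numbers (in basis points) are assumptions made for illustration.

```python
def scs_error(s_market, s_scs):
    # pricing error of the single currency setting at a market schedule date
    return s_market - s_scs

def corrected_forward_ccs_spread(s_scs_custom, t, t_lo, t_hi,
                                 s_mkt_lo, s_scs_lo, s_mkt_hi, s_scs_hi):
    """scs spread of the customized forward ccs plus the linearly interpolated
    scs error between the bracketing market dates t_lo <= t <= t_hi."""
    w = (t - t_lo) / (t_hi - t_lo)
    err = (1.0 - w) * scs_error(s_mkt_lo, s_scs_lo) + w * scs_error(s_mkt_hi, s_scs_hi)
    return s_scs_custom + err

# hypothetical spreads in basis points, dates in year fractions
print(corrected_forward_ccs_spread(s_scs_custom=-5.3, t=2.3, t_lo=2.0, t_hi=3.0,
                                   s_mkt_lo=-5.5, s_scs_lo=-5.1,
                                   s_mkt_hi=-6.25, s_scs_hi=-5.7))
```

when the cross currency basis spread tends to zero both errors vanish and the corrected spread reduces to the scs spread, consistent with the convergence to the single currency setting discussed below.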
equation ( [ eq : suvmkt ] ) finally obtains the forward ccs spread from the single currency setting out of equation ( [ eq : suv_scs_cd ] ) , corrected by an estimation of the error , , which is obtained through linear interpolation between two accurately - estimated errors .this spread makes the net present value of the structure equal to zero .this way of interpolating allows sticking the valuation as much as possible to market .approximations are only carried out on a very small contribution given by the error of the single currency setting .this error would disappear as the cross currency basis spread approaches to zero .therefore , this pricing method naturally reduces to the single currency setting when the cross currency basis spread disappears .this second step prices a customized resettable currency swap through a decomposition procedure in which the currency swap is expressed as the sum of a forward starting market currency swap ( priced according to section [ sec : pricefwdstccs ] ) plus some additional small contributions priced according to section [ sec : perspective ] which allow flexibility to choose the funding and hedging perspective .pricing inaccuracies in these small contributions do not usually have a significant impact in pricing .consider the customized currency swap of figure [ fig : ccs ] where the domestic and foreign currencies are eur and usd and the start and end dates are and .the swap is valued in between two fixing dates at time , where is the date of the last floating rate fixing , is the following date in the product schedule and . according to equation ( [ eq : q ] ) , and . } \\\end{array } \label{eq : ccsdecom}\ ] ] equation ( [ eq : ccsdecom ] ) shows the currency swap decomposition where the left hand side is the customized swap with fixed spreads in both legs , and .the first term of the right hand side is a forward starting resettable currency swap whose spread , , is obtained according to equation ( [ eq : suvmkt ] ) .the second and third terms represent the swap structure of the already started hub period ( exchange of notionals and floating payments at the end of the period ) and the sum adds fixed payments to the domestic leg , equal to the difference between the customized and market spreads , and the fixed spread , , of the foreign curve . no valuation has been performed yet , only a payoff decomposition . at this point , the only multi - currency piece of the decomposition is the forward starting ccs whose valuation is zero ( is calculated to satisfy this condition ) and does not need any funding .the rest of the pieces involve single currency fixed cash flows .the first two , given by equation ( [ eq : frnvsfrn ] ) ) change signs with respect to equation ( [ eq : ccsdecom ] ) because the notation for a long position of frn was defined returning the notional and paying the floating rate at expiry . ] , must be priced assuming a common funding currency ( either domestic or foreign ) taking expectations , either ] , of the indicator functions according to equation ( [ eq : eindicator_f ] ) .this common funding avoids the inconsistency between the market forward foreign exchange rate , and the rate given by equation ( [ eq : xscs ] ) of ois discount factors , .equation ( [ eq : efrnvsfrn ] ) shows the joint valuation of these two cash flows expressed in units of domestic currency .foreign exchange forwards are interpolated according to section [ sec : pricefwdstccs ] . 
\\cf^{c_f } = x_{t_{\underline l}^{prd } } [ { 1 + ( { l_{t_{\underline l}^{prd } } ^{c_f } + s_{uv}^{c_f } } ) \tau _ { q ( { t_{\underline l}^{prd } } ) } ^{prd } } ] \\\end{array } \label{eq : frnvsfrn}\ ] ] = { \bf e}_t^{c_d } [ cf^{frn } ] = ( cf^{c_f } x_{t , t_{\bar l}^{prd } } ^ { - 1 } - cf^{c_d } ) df_{t , t_{\bar l}^{prd } } ^{c_d } \\ { \bf v}_t^{c_d } [ cf^{frn } ] = \frac { { \bf e}_t^{c_f } [ cf^{frn } ] } { x_t } = x_t^{-1 } ( cf^{c_f } - cf^{c_d } x_{t , t_{\bar l}^{prd } } ) df_{t , t_{\bar l}^{prd } } ^{c_f } \\ \end{array } \label{eq : efrnvsfrn}\ ] ] similarly , the rest of cash flows of equation ( [ eq : ccsdecom ] ) should also be jointly funded in a chosen centralized currency . however , as these payments are very small ( just some basis points ) and are spread out along many maturities , they can be priced assuming funding in the currency in which they are denominated ( instead of a centralized currency ) or any other currency , calculating the expectation of the indicator functions using equation ( [ eq : eindicator2c ] ) . \approx x_{t , t_j } { \bf e}_t^{c_d } \left [ { frn_{t_i , t_n } ^{s^{c_f } } } \right ] \label{eq : pricexmxfrn}\ ] ] non - resettable currency swaps can be priced according to the decomposition of equation ( [ eq : ncs_ccs ] ) .the first term of the right hand side is a customized resettable cross currency swap which can already be priced .the second term is a sum of expectations of floating rate notes .if these terms are chosen to be funded in a different currency from which they are denominated ( e.g. the domestic currency , , or the collateral currency ) , each of these expectations is approximated according to equation ( [ eq : pricexmxfrn ] ) , assuming that the evolution of foreign exchange and swap rates are independent of each other .in order to interpolate forward foreign exchange rates on a particular date as needed for equations ( [ eq : eindicator2c ] ) to ( [ eq : frnf_dom ] ) and equation ( [ eq : efrnvsfrn ] ) , a similar method is used as section [ sec : pricefwdstccs ] . consider that are the market quoted foreign exchange rates from the swap point quotes . equation ( [ eq : fxemkt ] ) shows the errors of the forward foreign exchange rates on dates and when the single currency setting ( scs ) is used .the forward foreign exchange rates , , are calculated according to equation ( [ eq : xscs ] ) , where the discount factors are taken from ois curves .the dates from the market schedule , and , are in between date of the product schedule .the exchange rate error is well known on dates and as they belong to the market schedule . to estimate the error on ,a linear interpolation is carried out as shown by equation ( [ eq : fxeprd ] ) . 
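a completely analogous sketch applies to the forward exchange rates themselves: compute the scs forward from ois discount factors as in equation ( [ eq : xscs ] ), measure its error against the quoted forwards on the two neighbouring market dates, interpolate the error linearly and add it to the scs forward on the product date, as in equations ( [ eq : fxemkt ] ) and ( [ eq : fxeprd ] ). the toy curves and quotes below are placeholders only.

```python
def scs_fx_forward(spot, df_dom, df_for):
    # single-currency-setting forward from ois discount factors (schematic reading of eq. xscs)
    return spot * df_dom / df_for

def corrected_fx_forward(spot, t, dfs_dom, dfs_for, t_lo, t_hi, x_mkt_lo, x_mkt_hi):
    """scs forward on the product date t, corrected by the linear interpolation of the
    scs error measured on the bracketing market dates t_lo and t_hi. dfs_dom / dfs_for
    map a maturity to its ois discount factor (placeholder curves)."""
    err_lo = x_mkt_lo - scs_fx_forward(spot, dfs_dom(t_lo), dfs_for(t_lo))
    err_hi = x_mkt_hi - scs_fx_forward(spot, dfs_dom(t_hi), dfs_for(t_hi))
    w = (t - t_lo) / (t_hi - t_lo)
    return scs_fx_forward(spot, dfs_dom(t), dfs_for(t)) + (1 - w) * err_lo + w * err_hi

# toy flat ois curves and market forwards, for illustration only
df_eur = lambda t: (1.0 - 0.0010) ** t
df_usd = lambda t: (1.0 - 0.0015) ** t
print(corrected_fx_forward(1.3533, t=1.4, dfs_dom=df_eur, dfs_for=df_usd,
                           t_lo=1.0, t_hi=2.0, x_mkt_lo=1.3543, x_mkt_hi=1.3610))
```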
equation ( [ eq : xtlprd ] ) shows how the exchange rate on is finally priced with the single currency setting and corrected with the interpolated error .see that when the cross currency basis spread disappears , the error is equal to zero and this pricing methodology smoothly converges to the single currency setting .this section presents a worked example comparing two methods with different financing schemes : all remaining foreign cash flows are funded in domestic currency ( four - curve method ) , or each remaining cash flow is discounted in its own currency ( multi - funding using proposed method ) .this will be done for the resettable ( ccs ) and non - resetable ( ncs ) currency swaps of figures [ fig : ncs ] and [ fig : ccs ] , with legs denominated in usd and eur and notional amount of 100 million eur .market data has been taken on january 29th 2014 , with spot and forward foreign exchange rates in usd per eur of \{spot : 1.3533 , 1y : 1.3543 , 2y : 1.3610 , 3y : 1.3741 , 4y : 1.3928 , 5y : 1.4143 , 7y : 1.4589 , 10y : 1.5145 } , and a cross currency basis spread curve in basis points added to the eur floating leg of \{1y : -4 , 2y : -5.5 , 3y : -6.25 , 4y : -7 , 5y : -7 , 7y : -6.75 , 10y : -5.75 , 15y : -4.75 , 20y : -4.5 } with zero spread on the usd floating leg .the four - curve method assumes centralized funding in eur and the proposed method , instead of considering centralized funding in either eur or usd , it will consider that each leg is funded in its own currency .each case considers five curves : eonia ( `` eo '' ) , 3 month euribor ( `` e3 m '' ) , federal funds ( `` ff '' ) , 3 month us libor ( `` u3 m '' ) and cross currency basis spread ( `` ccb '' ) . f. ametrano and m. bianchetti , `` bootstrapping the illiquidity : multiple yield curves construction for market coherent forward rates estimation '' , modeling interest rates : last advances for derivatives pricing , ed .f. mercurio , risk books , 2009 .bianchetti m. , `` two curves , one price : pricing and hedging interest rate derivatives decoupling forwarding and discounting yield curves '' , working paper , august 2012 , url : `` http://arxiv.org/abs/0905.2770 '' . fujii m. , shimada y. , takahashi a. , `` a market model of interest rates with dynamic basis spreads in the presence of collateral and multiple currencies '' , working paper , december 2009 , url : `` http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1520618 '' .mercurio f. , `` interest rates and the credit crunch : new formulas and market models '' , bloomberg portfolio research paper , february 2005 , url : `` http://papers.ssrn.com/ sol3/papers.cfm?abstract_id=1332205 '' .
|
it is well known that traded foreign exchange forwards and cross currency swaps ( ccs ) can not be priced applying cash and carry arguments . this paper proposes a generalized multi - currency pricing and hedging framework that allows the flexibility of choosing the perspective from which funding is managed for each currency . when cross currency basis spreads collapse to zero , this method converges to the well established single currency setting in which each leg is funded in its own currency . a worked example tests the quality of the method .
|
studying human activity patterns is of central interest due to its wide practical use. understanding the dynamics underlying the timing of various human activities, such as starting a phone call, sending an e-mail or visiting a web site, is crucial to modelling the spreading of information or viruses. modelling human dynamics is also important in resource allocation. it has been shown that for many human activities the interevent time distribution follows a power law with exponents usually varying between and . processes with a power-law decaying interevent time distribution look very different from the poisson process, which had previously been used to model the timing of human activities. while time series from the latter look rather homogeneous, the former produce bursts of rapidly occurring events separated by long inactive periods. some examples where a power-law decaying interevent time distribution has been observed are e-mail communication (with exponent , ), surface mail communication ( , ), web browsing ( , ) and library loans ( , ). in some other cases a monotonic relation has been reported between user activity and the interevent time exponent, for example in short-message communication ( , ) or in watching movies ( , ). a recent paper reports a distribution of exponents over various channels of communication. these observations make it necessary to find a model in which the interevent time exponent is tunable. similar bursty behaviour has been observed in other natural phenomena, for example in neuron-firing sequences or in the timing of earthquakes. the interevent time distribution does not give any information about the dependency among consecutive actions. correlation between events is usually characterised by the autocorrelation function of the timings of detected events. bursty behaviour is often accompanied by a power-law decaying autocorrelation function, which is usually thought to indicate long-range dependency, see e.g. . however, even time series with independent power-law distributed interevent times show long-range correlations.
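the contrast between poissonian and bursty timing can be made concrete in a few lines of code: the sketch below draws interevent times from an exponential distribution and from a pareto-type power-law distribution and compares how strongly the largest gaps dominate the series. the chosen distributions and parameters are purely illustrative and not taken from any of the cited data sets.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

tau_poisson = rng.exponential(scale=1.0, size=n)   # homogeneous, poisson-like timing
tau_bursty = rng.pareto(a=1.5, size=n) + 1.0       # power-law tail, density exponent 2.5

for name, tau in [("exponential", tau_poisson), ("power-law", tau_bursty)]:
    print(name,
          "mean:", round(tau.mean(), 2),
          "std/mean:", round(tau.std() / tau.mean(), 2),
          "largest gap / mean:", round(tau.max() / tau.mean(), 1))
```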
this paper is organized as follows .we start with introducing a task - queuing model , which has an advantage compared to the barabsi - model , namely that the observable is not the waiting time of an action ( from adding the task to the list until executing it ) but the interevent time between similar activities .we determine the asymptotic decay of the interevent time distribution in a simple limit of the model .we give a simple proof of the scaling law between the exponents of the interevent time distribution and the autocorrelation function based on tauberian theorems .finally , we demonstrate that the scaling law can be violated if the interevent times are long - range dependent .we assume that people have a task list of size and they choose their forthcoming activity from this list .the list is ordered in the sense that the probability of choosing the activity from the list is decreasing as a function of position .the task at the chosen position jumps to the front ( triggering the corresponding activity ) and pushes the tasks that preceded it to the right ( figure [ fig : listdyn ] ) .this mechanism is responsible for producing the bursty behaviour because once a person starts to do an activity , that is going to have high priority for a while .the model is capable of covering many types of activities but now we only concentrate on one type .the tasks of this type are marked with in figure [ fig : listdyn ] ( e.g. watching movies ) .for the sake of simplicity we assume that the list contains only one single item of type , all the others are activities of type .it turns out that this restriction is important for the interevent time distribution but irrelevant for the autocorrelation function .we show in our paper that this model produces power - law decaying interevent time distribution for _ a wide variety of the distributions_. first we analyse the model with power - law decaying priority distribution , second we analyse the effect of exponentially decaying priority distribution , and finally we discuss the stretched exponential case .if the list is finite , the power - law regime is followed by an exponential cutoff .the cutoff is the consequence of reaching the end of the list , from which a geometrically distributed waiting time follows : .a marginal result of section [ sec : autocorr ] shows that the expectation value of the interevent time is equal to the length of the list independently on the priority distribution ( as long as ergodicity is maintained ) . 
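a direct simulation of this list dynamics is straightforward and already illustrates the two claims above, namely the bursty recurrences of the tagged activity and a mean interevent time equal to the list length. the sketch below uses a power-law priority distribution p(i) proportional to i**(-gamma) as one admissible choice; the exponent value, the list size and the run length are arbitrary.

```python
import numpy as np

def simulate_interevents(n=100, steps=200_000, gamma=1.5, seed=0):
    """finite ordered list of n tasks; at every step a position is drawn from
    p(i) ~ i**(-gamma), the task found there jumps to the front and the tasks
    that preceded it are pushed back by one. returns the interevent times of a
    single tagged task (the activity marked 'a' in the text)."""
    rng = np.random.default_rng(seed)
    p = np.arange(1, n + 1, dtype=float) ** (-gamma)
    p /= p.sum()
    positions = rng.choice(n, size=steps, p=p)   # 0-based chosen positions
    tasks = list(range(n))                        # task 0 is the tagged activity
    last, interevents = None, []
    for t, i in enumerate(positions):
        chosen = tasks.pop(i)
        tasks.insert(0, chosen)
        if chosen == 0:
            if last is not None:
                interevents.append(t - last)
            last = t
    return np.array(interevents)

taus = simulate_interevents()
print("mean interevent time:", taus.mean())   # close to the list size n = 100
```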
for the sake of simplicitywe determine the exponent of power - law decaying region in the case of an infinitely long list where the exponential cutoff does not appear .the interevent time is equal to the recurrence time of the first element of the list .let denote the probability of finding the observed element at position after timesteps without any recurrences up to time .we emphasise here that is not a conditional probability and that gives the survival function of the interevent time distribution .the initial condition is set as , that is , at the first step the observed element moves from the front of the list to the second position with probability ( otherwise a recurrence would occur ) .the restriction not to recur is important because this makes large jumps to the front of the list _ forbidden _ for the observed element .time evolution is given by where is the survival function of the priority distribution .equation expresses that the observed element can be found at position either if it was there in the previous time step and we choose a position smaller than or it was at position and we choose an activity at position .our aim is to determine the asymptotic behaviour of .the main trick we use in analysing the recursion above is to consider the levels first instead of the levels which intuitively one would do . at every stepthe time coordinate gets increased while the position of the observed element might remain unchanged or get increased by one as well ( figure [ fig : q_nt_exp ] ). plane . some typical sections in the path corresponding to the random variableare emphasized by a rectangle.,width=340 ] the path of the element in the plane can be divided into sections that start with a step on the bias and are followed by some steps upwards ( which can be zero as well ) .these sections can be characterised by their height - difference , which can take values from .these height differences are independent and ( optimistic ) geometrically distributed with parameter depending on the position .let be independent geometrically distributed random variables with parameter , i.e. . with fixed is the probability that we find the element at the position at time ( without any recurrences ) .this corresponds to paths with steps to the right ( started from position at time ) and is the distribution mass function of the sum of the random heights . we could go so far without specifying the priority distribution , now we have to specify to continue . herewe analyse the model in which the survival function of the priority distribution is power - law decaying , i.e. . means that that term is asymptotically at most of the order of . in this case is also power - law decaying with exponent . though the random variables are not identically distributed by checking the lyapunov condition one can show that the central limit theorem holds for this situation ( see [ sec : appcltpower ] ) . in this approximation is gaussian in variable with mean and variance , where and . using integral approximation to evaluate the sums and keeping only the highest order terms in yields : this formula shows that the probability of finding an element at position at time is centered on the curve which is in agreement with the numerical results ( figure [ fig : q_nt_cont ] ) . 
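the recursion above can be cross-checked by simulating the position of the observed element directly: on an effectively infinite list, the element at position m recurs when position m is drawn, moves one step deeper when a larger position is drawn, and stays put otherwise. the sketch below does this for a priority distribution with a power-law survival function, sampled here with a zipf law as an illustrative assumption, and censors excursions longer than t_max, so the estimated survival function of the interevent time is only meaningful below that cutoff.

```python
import numpy as np

def interevent_survival(alpha=1.5, n_runs=1000, t_max=10_000, seed=1):
    """monte carlo estimate of P(tau > t) for the recurrence time of the tagged
    task on an (effectively) infinite list. positions are drawn from a zipf law
    whose survival function decays like n**(-alpha); runs longer than t_max are
    censored, which biases the estimate near the cutoff."""
    rng = np.random.default_rng(seed)
    taus = np.full(n_runs, t_max + 1)
    for r in range(n_runs):
        m = 1                                  # tagged task starts at the front
        for t in range(1, t_max + 1):
            j = rng.zipf(1.0 + alpha)          # chosen position this step
            if j == m:                         # recurrence: tagged task executed
                taus[r] = t
                break
            if j > m:                          # pushed one position deeper
                m += 1
    ts = np.unique(np.logspace(0, np.log10(t_max), 15).astype(int))
    return ts, np.array([(taus > t).mean() for t in ts])

ts, surv = interevent_survival()
for t, s in zip(ts, surv):
    print(t, s)    # roughly a straight line on log-log axes (power-law tail)
```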
.the image on the left is a numerical result calculated directly form the recursion equations ( [ eq : q1]-[eq : qn ] ) , the right plot shows the clt approximation .the dashed curve is ( ).,title="fig:",width=226 ] .the image on the left is a numerical result calculated directly form the recursion equations ( [ eq : q1]-[eq : qn ] ) , the right plot shows the clt approximation .the dashed curve is ( ).,title="fig:",width=226 ] the sum of in variable is the probability of the observed element has not recurred up to time .we approximate this sum by integral and we apply the following substitution ( with new variable ) : for ( here may refer to : ) : where means that that term is asymptotically of strictly smaller order than . from equations ( [ eq : q_nt_b]-[eq : ngamma ] ) it follows that differentiating this equation gives the first main result of our paper , with .now we study the model with priority survival function .in contrast to the previous case the central limit theorem can not be used , because the last term in is comparable with the complete sum .however , we can construct a limit theorem even for this situation .first of all , we approximate the geometrically distributed random variables by exponential ones : .we apply the scaling property of the exponential distribution to express by i.i.d .exponential random variables : , where . in equation we need the distribution of sums of the first : has a well defined limit as , which we denote by . the probability density function of this ( limit ) random variable is denoted by ( and is given explicitly in [ sec : appltexp ] ) .the finite sum in equation can be approximated by , yielding and since tends to 1 as . by differentiatingwe get independently on parameter .the last investigated priority distribution is the stretched exponential , i.e. . in this case is totally dominated by the last term in the sum , hence we approximate the sum with this term . this means that the exponent holds for this case but a logarithmic correction appears . in this casethe increment of is just small enough for the central limit theorem to be still applicable ( see appendix ) .we approximate the mean and variance of by integral and we use the property . with thesewe get the asymptotic behaviour of can be determined by introducing a proper new variable , keeping the leading order term only , one finds that the survival function of the interevent time distribution is for more details of the calculations see [ sec : appcltstrexp ] .we have got with logarithmic correction again .while for the correction makes the decay of the distribution faster than the pure power law , for it is slightly slowed down .another characteristic property of the time series is autocorrelation function .let denote the indicator variable of the observed activity : if the observed activity is at the first position of the list at time ( i.e. 
an event happened ) and otherwise .we define the autocorrelation function as usual , \right\rangle_s-\left\langle \mathbb{e}\left[x(s)\right ] \right\rangle_s^2}{\left\langle \mathbb{e}\left[x(s)\right ] \right\rangle_s-\left\langle \mathbb{e}\left[x(s)\right ] \right\rangle_s^2}.\ ] ] the model defines a markov - chain and the state space is the space of permutations of the list .the transition matrix for a list of length is given by where .this matrix is doubly stochastic , hence \right\rangle_t=\frac{1}{n} ] , the probability of finding the activity at the first position can be calculated numerically by successive application of the markov - chain transition matrix . for power - law decaying priority distributionsnumerical computations show that the autocorrelation function is power - law decaying with an exponential cutoff ( figure [ fig : auto_skala ] ) . given ,the autocorrelation functions for various list sizes can be rescaled to collapse into a single curve ( figure [ fig : auto_skala ] , inset ) . s collapse into a single curve if is plotted as a function of .,width=377 ]this property can be written in a mathematical way : the exponents used for rescaling the autocorrelation functions are listed in table [ tab : gammadelta ] . 4 & 5 + & 1.07 & 1.15 & 1.17 & 1.57 & 1.8 & 2 & 3 & 4 & 5 + & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 + indicating that in the limit the autocorrelation function is power - law decaying with exponent for .with the exact result this can be combined into a scaling law : .if the priority distribution decays faster than power - law , then and .this means that the autocorrelation function decays slower than any power .this is the case when the priority distribution decays like a ( stretched ) exponential function .the essential properties of the model for the scaling relation are that the interevent times are independent and power - law decaying .let denote the set of recurrence times and let be an interevent time .the autocorrelation function can be written in the following form : }}{1-\frac{1}{\mathbb{e}\left[\tau\right ] } } \,,\ ] ] which simplifies to if . the laplace transform of the autocorrelation function can be expressed by the laplace transform of the interevent time distribution : \right)^{-1 } , \ ] ] where we used the property that the interevent times are independent and identically distributed .tauberian and abelian theorems connect the asymptotics of a function with the asymptotics of its laplace transform .applying abelian theorem to the right side of the last equation results in . then applying the tauberian theorem yields or to be precise we had to use an extended version of the abelian theorem , which can be derived from the original theorem using integration by parts . with similar trainof thought the scaling law can be extended to the region where holds .these results are in agreement with .independence of the interevent times was important in the proof of the scaling law .the violation of the scaling relation indicates dependency among the interevent times but we emphasize that the opposite direction does not necessarily hold . 
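for a finite list, the autocorrelation function defined above can also be estimated directly from a simulated indicator series x(t) instead of iterating the transition matrix over all permutations of the list; the sketch below does this for a power-law priority distribution with arbitrary parameters.

```python
import numpy as np

def indicator_series(n=50, steps=500_000, gamma=1.2, seed=0):
    """x(t) = 1 when the tagged task sits at the front of the list after step t."""
    rng = np.random.default_rng(seed)
    p = np.arange(1, n + 1, dtype=float) ** (-gamma)
    p /= p.sum()
    positions = rng.choice(n, size=steps, p=p)
    tasks = list(range(n))
    x = np.zeros(steps, dtype=np.int8)
    for t, i in enumerate(positions):
        tasks.insert(0, tasks.pop(i))
        x[t] = tasks[0] == 0
    return x

def autocorr(x, lags):
    # normalization as in the definition above: (E[x(s)x(s+L)] - mu^2) / (mu - mu^2)
    mu = x.mean()
    return np.array([((x[:-L] * x[L:]).mean() - mu**2) / (mu - mu**2) for L in lags])

x = indicator_series()
lags = np.unique(np.logspace(0, 3.5, 20).astype(int))
for L, a in zip(lags, autocorr(x, lags)):
    print(L, a)    # power-law decay followed by the finite-size cutoff
```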
in this sectionwe investigate two models that are characterised by dependent interevent times .one of them satisfies the scaling relation and the other does not .our first example is the two - state model of reference , because that model was introduced to capture deep correlations , nevertheless , it obeys the scaling law .we start with a brief introduction of the model .the system alternates between a normal state , in which the events are separated by longer time intervals , and an excited state , which performs events with higher frequency . after each event executed in state system may switch to state with probability or remain in state with probability .in contrast , the excited state has a memory .that is , if the system had executed more events in state since the last event in the normal state , the transition to state would be less probable .the probability of performing one more event in state is given by , where is the number of events in the current excited state .this memory function was introduced to model the empirically observed power - law decay in the bursty event number distribution ( for the definition see the reference ) .the dynamics is illustrated in figure [ fig : 2statemodel]a ) for a better understanding . in both statesthe interevent times are generated from a reinforcement process resulting independent random interevent times and with power - law decaying distributions .we denote the corresponding exponents by and .the parameters of the model were the following : , , , , and it was found that the model satisfies the scaling law with exponents and .\a ) and the lower sticks correspond to events performed in state .looking at the time series at a larger scale the ratio of periods shrinks .for the better visibility of the time series the model parameters were changed to and .,title="fig:",width=226 ] b ) and the lower sticks correspond to events performed in state .looking at the time series at a larger scale the ratio of periods shrinks .for the better visibility of the time series the model parameters were changed to and .,title="fig:",width=245 ] the number of events performed in a single period of state or are i.i.d .random variables or . is geometrically distributed with parameter .the distribution of can be easily obtained from , and it is power - law decaying with exponent . making use of these new random variables the dynamicssimplifies : the system executes events with interevent times distributed as , then switches to the excited state and executes events with interevent times , then switches back to the normal state , and so on .it is clear that the decay of the interevent time distribution of the whole process is determined by the smallest of the exponents and .since both of ] are finite , the periods of state have a characteristic length \mathbb{e}\left[n_b\right] ] . and parameter of the proposal density is varied .the dashed line shows the decay corresponding to the scaling law of independent processes.,width=377 ] the simulation results ( figure [ fig : metro ] ) show that the autocorrelation function decays slower than it would be assumed from the scaling law .this example shows that the scaling relation can be violated if there is long range dependency among the interevent times .a trivial extension of the model could be putting more items of the observed activity into the list .then this parameter would tune the frequency of doing the activity . 
in this casethe interevent times become dependent and the simulation results on finite lists show that the interevent time distribution decays faster than power - law but decays slower than an exponential function .it is easy to prove that in spite of this the autocorrelation function is independent of the number of the observed activity and remains power - law decaying .this is another example for models in which the scaling law does not hold .when the list is finite , we have much more freedom in the choice of the priority distribution .however , the method based on expressing as a probability density function of sum of independent geometrically distributed random variables is general , and can offer a good base for further calculations , even for finite lists .the model may be interesting not only for human dynamics but also for the mathematical theory of card shuffling .we can define the time reversed version of the model in which we choose a position from the same distribution as in the original model and we move the first element of the list to the random position .this model has similar properties to the original model , e.g. this model has the same interevent time distribution and autocorrelation function .if we think of the list as a _ deck of cards _ , then the time reversed model is a generalisation of the _ top - in - at - random shuffle _ method .the latter is a primitive model of shuffling cards : the top card is removed and inserted into the deck at a uniformly distributed random position .in a proper model one should take into consideration the dependency among the interevent times besides the interevent time distribution . in human dynamicsthe latter is slowly decaying and as a consequence of this the autocorrelation function of the interevent times is not well - defined ( i.e. the second moment does not exist ) .however , the autocorrelation function of the time series exists and it might be a good measure for long range dependency among the interevent times .time series of messages sent by an individual in an online community are reported to be not more correlated than an independent process with the same interevent time distribution .similarly , the exponents in the neuron - firing sequences approximately satisfy the scaling law ( , ) . for these systemsthe model we studied might be applicable .however , this is not always the case .for example , the autocorrelation function of the e - mail sequence decays slower ( ) than it should do estimated from the scaling law .this indicates long range dependency in the time series in addition to the power - law decaying interevent time distribution . in this casea dependent model should be applied , for example a model based on metropolis algorithm that is similar to the one we studied as a counterexample to the scaling law .effects of circadian patterns and inhomogeneity in an ensemble of individuals should also be considered . in real networks interactions between individualshave to be taken into account to reproduce some social phenomena , e.g. 
temporal motifs observed in a mobile call dataset .interactions could be incorporated in this model by allowing the actual activity of an individual to modify the priority list of some of his / her neighbour .if this effect is rare and can be considered as a perturbation , our results on the dynamics of the list could be a starting point to further calculations covering for example information flow in a network .the model discussed here was introduced by chaoming song and dashun wang .we are grateful to them , lszl barabsi and mrton karsai for discussions .this project was partially supported by fp7 ictecollective project ( no .238597 ) and by the hungarian scientific research fund ( otka , no .k100473 ) .in this section we determine the probability density function of the limit random variable . in the main text we defined and as follows where are i.i.d .exponential random variables . the probability density function of sums of independent exponential random variables with different parameters can be expressed .substituting the proper parameters into that formula one gets with some trivial algebraic manipulations this can be written in the following form the probability density function in the limit reads the products are usually cited as q - pochhammer symbols and are convergent in the limit .equation shows clearly that the asymptotic behaviour of the limit random variable is determined by the first term in the sum .the lyapunov condition for the central limit theorem reads : if this condition is fulfilled for any , then the central limit theorem can be used .we test this condition at with the following moments : substituting these and approximating the sums by integrals yield we test the lyapunov condition at similarly to the previous case .the moments are the following : we make integral approximation of the sums of the moments above : the last approximation comes from the property that . with thesewe have : if .now we give some more details in calculating the exponent .the mean and variance appearing in the central limit theorem are and the approximating probability density function is we introduce a new variable \,,\end{aligned}\ ] ] in the second equation . the dominant term in the jacobian when is . with thesethe integral reads ^{-1 } \ , dy = \\ % = \sqrt{\frac{\lambda \nu}{\pi}}\frac{1}{\lambda^{1 + 1/\nu } \nu^2 } \int_{-\infty}^{\infty } e^{-\frac{1}{\lambda \nu } y^2}(\frac{1}{t \log(c\lambda\nu t)^{1 - 1/\nu}}+\ordo(\frac{1}{t \log(c\lambda\nu t)^{1 - 1/\nu } } ) ) \ , dy \\% = \frac{1}{\lambda^{1/\nu}\nu } \frac{1}{t \log(c\lambda\nu t)^{1 - 1/\nu}}+\ordo(\frac{1}{t \log(c\lambda\nu t)^{1 - 1/\nu}})\end{aligned}\ ] ] we underlined the terms that determine the decay of the integral for large .99 a. vazquez , b. rcz , a. lukcs , a .-barabsi , phys .lett * 98 * , 158702 ( 2007 ) j. h. greene , _ production and inventory control handbook _ , macgraw - hill , new york ( 1997 ) p. reynolds , _ call center staffing _ , the call center school press , lebanon , tennessee ( 2003 ) j .- p .eckmann , e. moses , d. sergi , proc .usa * 101 * , 14333 ( 2004 ) .j. g. oliveira , a .-barabsi , nature ( london ) * 437 * , 1251 ( 2005 ) .z. dezs , e. almaas , a. lukcs , b. rcz , i. szakadt , a .-barabsi , phys .e * 73 * 066132 ( 2006 ) a. vzquez , j. g. oliveira , z. dezs , k .-goh , i. kondor , a .-barabsi , phys .e * 73 * , 036127 .( 2006 ) w. hong , x .-han , t. zhou , b .- h .wang , chin .* 26 * , no . 2 ( 2009 ) 028902 t. zhou , l .- l .jiang , r .- q .su , y. c. 
zhang , euro .* 81 * , 58004 .( 2008 ) c. song , d. wang , a .-barabsi , arxiv:1209.1411 ( 2012 ) t. kemuriyama , h. ohta , y. sato , s. maruyama , m. tandai - hiruma , k. kato , y. nishida , biosystems * 101 * , 144 - 147 .( 2010 ) .corral , phys .lett . * 92 * , 108501 ( 2004 ) m. karsai , k. kaski , a .-barabsi , j. kertsz , scientific reports ( nature ) * 2 * , 397 ( 2012 ) h. e. stanley , s. v. buldyrev , a. l. goldberger , z. d. goldberger , s. havlin , r. n. mantegna , s. m. ossadnik , c .- k .peng , and m. simons , physica a * 205 * , 214 - 253 ( 1994 ) p. allegrini , d. menicucci , r. bedini , l. fronzoni , a. gemignani , p. grigolini , b.j . west , p. paradisi , phys .e * 80 * , 061914 ( 2009 ) d. rybski , s. v. buldyrev , s. havlin , f. liljeros , h. a. makse , scientific reports ( nature ) * 2 * , 560 ( 2012 ) a .- l .barabsi , nature ( london ) * 435 * , 207 ( 2005 ) m. kac , bulletin of the american mathematical society * 53 * ( 1947 ) : 10021010 p. fiorini , _ modeling telecommunication systems with self - similar data traffic _ , thesis ( 1998 ) , + http://www.cse.uconn.edu/~lester/papers/thesis.pierre.pdf w. feller , _ an introduction to probability theory and its applications _2 . , wiley india pvt . ltd . ( 2008 ) .d. aldous , p. diaconis , the american mathematical monthly , vol .93 , no . 5 ( 1986 ) ,333 - 348 .j. kim , d. lee , b. kahng , plos one 8(3 ) : e58292 ( 2013 ) l. kovanen , m. karsai , k. kaski , j. kertsz , j. saramki , j. stat .mech . p11005 ( 2011 ) m. balazs , http://www.math.bme.hu/~balazs/sumexp.pdf
|
many human - related activities show power - law decaying interevent time distribution with exponents usually varying between and . we study a simple task - queuing model , which produces bursty time series due to the nontrivial dynamics of the task list . the model is characterised by a priority distribution as an input parameter , which describes the choice procedure from the list . we give exact results on the asymptotic behaviour of the model and we show that the interevent time distribution is power - law decaying for any kind of input distributions that remain normalisable in the infinite list limit , with exponents tunable between and . the model satisfies a scaling law between the exponents of interevent time distribution ( ) and autocorrelation function ( ) : . this law is general for renewal processes with power - law decaying interevent time distribution . we conclude that slowly decaying autocorrelation function indicates long - range dependency only if the scaling law is violated .
|
massive multiple - input multiple - output ( mimo ) is known to achieve high capacity performance with simplified transmit precoding / receive combining design - .most notably , simple linear precoding schemes , such as zero - forcing ( zf ) , are virtually optimal and comparable to nonlinear precoding like the capacity - achieving dirty paper coding ( dpc ) in massive mimo systems .however , to exploit multiple antennas , the convention is to modify the amplitudes and phases of the complex symbols at the baseband and then upcovert the processed signal to around the carrier frequency after passing through digital - to - analog ( d / a ) converters , mixers , and power amplifiers ( often referred to as the radio frequency ( rf ) chain ) .outputs of the rf chain are then coupled with the antenna elements . in other words ,each antenna element needs to be supported by a dedicated rf chain .this is in fact too expensive to be implemented in massive mimo systems due to the large number of antenna elements . on the other hand ,cost - effective variable phase shifters are readily available with current circuitry technology , making it possible to apply high dimensional phase - only rf or analog processing - .phase - only precoding is considered in , to achieve full diversity order and near - optimal beamforming performance through iterative algorithms .the limited baseband processing power can further be exploited to perform multi - stream signal processing as in , where both diversity and multiplexing transmissions of mimo communications are addressed with less rf chains than antennas . then takes into account more practical constraints such as only quantized phase control and finite - precision analog - to - digital ( a / d ) conversion .works in - , however , do not consider the multiuser scenario and are not aimed to maximize the capacity performance in the large array regime . 
in this paper , we consider the practical constraints of rf chains and propose to design the rf precoder by extracting the phases of the conjugate transpose of the aggregate downlink channel to harvest the large array gain in massive mimo systems , inspired by .low - dimensional baseband zf precoding is then performed based on the equivalent channel obtained from the product of the rf precoder and the actual channel matrix .this hybrid precoding scheme , termed pzf , is shown to approach the performance of the virtually optimal yet practically infeasible full - complexity zf precoding in a massive multiuser mimo scenario .furthermore , hybrid baseband and rf precoding has been considered for millimeter wave ( mmwave ) communications in works - .they share the idea of capturing dominant " paths of mmwave channels using rf phase control and the rf processing is constrained , more or less , to choose from array response vectors .we will also show in the simulation the desirable performance of our proposed pzf scheme in mmwave channels .we consider the downlink communication of a massive multiuser mimo system as shown in fig .[ fig : system ] , where the base station ( bs ) is equipped with transmit antennas , but driven by a far smaller number of rf chains , namely , .this chain limitation restricts the maximum number of transmitted streams to be and we assume scheduling exactly single - antenna users , each supporting single - stream transmission .as discussed , the downlink precoding is divided among baseband and rf processing , denoted by of dimension and of dimension , respectively .notably , both amplitude and phase modifications are feasible for the baseband precoder , but only phase changes can be made to the rf precoder with variable phase shifters and combiners .thus each entry of is normalized to satisfy where denotes the magnitude of the element of .we adopt a narrowband flat fading channel and obtain the sampled baseband signal received at the user where is the downlink channel from the bs to the user , and denotes the signal vector for a total of users , satisfying = \frac{p}{k } { \bf i}_k ] is the expectation operator . to meet the total transmit power constraint ,we further normalize to satisfy . denotes the additive noise , assumed to be circular symmetric gaussian with unit variance , i.e. , .then the received signal - to - interference - plus - noise - ratio ( sinr ) at the user is given by where denotes the column of .if gaussian inputs are used , the system can achieve a long - term average ( over the fading distribution ) spectral efficiency .\end{aligned}\ ] ]in massive mimo systems , zf precoding is known as a prominent linear precoding scheme to achieve virtually optimal capacity performance due to the asymptotic orthogonality of user channels in richly scattering environment .it is typically realized through baseband processing , requiring rf chains performing rf - baseband frequency translation and a / d conversion .this tremendous hardware requirement , however , restricts the array size from scaling large . 
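the sinr and spectral efficiency above translate directly into code. the sketch below evaluates the sum spectral efficiency for an arbitrary pair of rf and baseband precoders; it assumes equal power allocation across the streams, unit-variance noise and the normalization ||F_rf F_bb||_F^2 = K for the combined precoder, which is one common way to read the power constraint mentioned above.

```python
import numpy as np

def sum_spectral_efficiency(H, F_rf, F_bb, P):
    """H: K x Nt downlink channel, F_rf: Nt x N_rf, F_bb: N_rf x K, P: total power.
    returns sum_k log2(1 + sinr_k) with the sinr of the text and unit-variance noise."""
    K = H.shape[0]
    F = F_rf @ F_bb
    F = F * np.sqrt(K) / np.linalg.norm(F, 'fro')   # assumed power normalization
    G = H @ F                                       # K x K effective channel
    sig = (P / K) * np.abs(np.diag(G)) ** 2
    tot = (P / K) * (np.abs(G) ** 2).sum(axis=1)
    sinr = sig / (tot - sig + 1.0)
    return np.log2(1.0 + sinr).sum()

# quick check with full-complexity zf (one rf chain per antenna): interference-free
rng = np.random.default_rng(0)
K, Nt, P = 4, 64, 4.0
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
print(sum_spectral_efficiency(H, np.eye(Nt), np.linalg.pinv(H), P))
```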
to alleviate the hardware constraints while realizing full potentials of massive multiuser mimo systems , we propose to apply phase - only control to couple the rf chain outputs with transmit antennas , using cost - effective rf phase shifters .low - dimensional multi - stream processing is then performed at the baseband to manage inter - user interference .the proposed low - complexity hybrid precoding scheme , termed phased - zf ( pzf ) , can approach the performance of the full - complexity zf precoding , which is , as stated , practically infeasible due to the requirement of supporting each antenna with a dedicated rf chain . the spectral efficiency achieved by the proposed pzf schemeis then analyzed .the structure shown in fig .[ fig : system ] is exploited to perform the proposed hybrid baseband and rf joint processing , where the baseband precoder modifies both the amplitudes and phases of incoming complex symbols and the rf precoder controls phases of the upconverted rf signal .we propose to perform phase - only control at the rf domain by extracting phases of the conjugate transpose of the aggregate downlink channel from the bs to multiple users .this is to align the phases of channel elements and can thus harvest the large array gain provided by the excessive antennas in massive mimo systems . to clarify , denote as the element of and we perform the rf precoding according to where is the phase of the element of the conjugate transpose of the composite downlink channel , i.e. , ] is the composite downlink channel .hence multi - stream baseband precoding can be applied to , where simple low - dimensional zf precoding is performed as where is a diagonal matrix , introduced for column power normalization . with this pzf scheme , to support simultaneous transmission of streams , _ hardware complexity is substantially reduced , where only rf chains are needed , as compared to required by the full - complexity zf precoding_. * quantized rf phase control : * according to , each entry of the rf precoder differs only in phases which assume continuous values .however , in practical implementation , the phase of each entry tends to be heavily quantized due to practical constraints of variable phase shifters .therefore , we need to investigate the performance of our proposed pzf precoding scheme in this realistic scenario , i.e. , phases of the entries of are quantized up to bits of precision , each quantized to its nearest neighbor based on closest euclidean distance .the phase of each entry of can thus be written as where is chosen according to where is the unquantized phase obtained from .then the baseband precoder is computed by with the quantized . in this part, we analyze the spectral efficiency achieved by our proposed pzf and full - complexity zf precoding in the limit of large transmit antenna size assuming rayleigh fading .closed - form expressions are derived , revealing the roles different parameters play in affecting system capacity . denoting the column of by , we obtain { \bf w s } + n_k\end{aligned}\ ] ] based on . as described in section [ sec : scheme ] , is designed by extracting the phases of , we thus have the diagonal term where denotes the element of the vector . under the assumption that each element of is independent and identically distributed ( i.i.d . )complex gaussian random variable with unit variance and zero mean , i.e. 
, , we conclude that follows rayleigh distribution with mean and variance .when tends to infinity , the central limit theorem indicates for the off - diagonal term , i.e. , , we have , where gives the complex conjugation . its distribution is characterized in the lemma below .[ lemma ] in rayleigh fading channels , the off - diagonal term is distributed according to .the proof is achieved by analyzing the real and imaginary parts of separately , followed by proving their independence .the proof is straightforward by definitions , and hence details are left out due to space limit .based on lemma [ lemma ] , we derive that the magnitude of the off - diagonal term , i.e. , follows the rayleigh distribution with mean and variance .compared with the diagonal term given by , it is safe to say _ the off - diagonal terms are negligible when the transmit antenna number is fairly large_. this implies that the inter - user interference is essentially negligible even without baseband processing at large !however , we note when assumes some medium high values , the residual interference may still deteriorate the system performance .therefore we apply in our proposed scheme zf processing at the baseband to suppress it as in .we reason that even with zf processing at the baseband , the spectral efficiency achieved is still less than it would be if the off - diagonal terms s were precisely zero . in other words ,the spectral efficiency achieved by pzf is upper bounded by with ] . in the spectral efficiency analysis, we exploit the property that users channels are asymptotically orthogonal in massive multiuser mimo systems .it indicates full - complexity zf precoding converges to conjugate beamforming with _inter - user interference forced to zero _ , achieving then according to , we obtain the spectral efficiency of full - complexity zf precoding in the limit of large as \nonumber \\= & ke^{\frac{k}{p } } \log_2e \sum\limits_{n=1}^{n_t } e_n\left(\frac{k}{p } \right)\end{aligned}\ ] ] by acknowledging that follows chi - squared distribution with degrees of freedom and is the exponential integral of order .we numerically compare our proposed pzf precoding scheme in fig . [fig : maincomparescheme ] along with its quantized version against the full - complexity zf scheme , which is deemed virtually optimal in the large array regime but practically infeasible due to the requirement of costly rf chains .it is observed that the proposed pzf precoding performs measurably close to the full - complexity zf precoding , with less than db loss but substantially reduced complexity . as for the heavily quantized phase control , we find that with bits of precision , i.e. , phase control candidates of , the proposed scheme suffers negligible degradation , say less than db .the derived analytical spectral efficiency expressions and are also plotted in fig .[ fig : maincomparescheme ] .we observe that the derived closed - form expressions are quite accurate in characterizing spectral efficiencies achieved by the proposed pzf precoding and full - complexity zf precoding schemes throughout the whole signal - to - noise ( snr ) is the common average snr received at each antenna with noise variance normalized to unity.]range , thus providing useful guidelines in practical system designs .apart from ideal i.i.d .rayleigh fading channels , our proposed pzf scheme can also be applied to the mmwave communication which is known to have very limited multipath components . 
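to make the scheme of section [ sec : scheme ] concrete, the sketch below builds the pzf pair (F_rf, F_bb) from a channel realization: the rf entries take the phases of the conjugate transpose of the channel, optionally quantized to b bits, and the baseband precoder is a zero-forcing inverse of the K x K effective channel. the entry magnitude 1/sqrt(Nt), the nearest-neighbour phase quantizer and the global (rather than per-column) power normalization are simplifying assumptions of this sketch.

```python
import numpy as np

def pzf_precoders(H, bits=None):
    """phased-zf sketch for a K x Nt channel H: rf phases from the conjugate
    transpose of H (quantized to `bits` bits when given), baseband zf on the
    effective channel H @ F_rf, combined precoder scaled to ||F_rf F_bb||_F^2 = K."""
    K, Nt = H.shape
    phases = np.angle(H.conj().T)                      # Nt x K matrix of phases
    if bits is not None:
        step = 2.0 * np.pi / 2 ** bits
        phases = np.round(phases / step) * step        # nearest quantized phase
    F_rf = np.exp(1j * phases) / np.sqrt(Nt)
    F_bb = np.linalg.inv(H @ F_rf)                     # nulls inter-user interference
    F_bb *= np.sqrt(K) / np.linalg.norm(F_rf @ F_bb, 'fro')
    return F_rf, F_bb

rng = np.random.default_rng(1)
K, Nt = 4, 128
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
F_rf, F_bb = pzf_precoders(H, bits=2)
print(np.round(np.abs(H @ F_rf @ F_bb), 3))            # diagonal: no residual interference
```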
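the closed-form expression for the full-complexity zf spectral efficiency can be evaluated with the generalized exponential integral available in scipy and checked against a monte carlo average of log2(1 + (P/K)||h_k||^2), where ||h_k||^2 is gamma-distributed with shape Nt for an i.i.d. unit-variance complex gaussian channel; the parameter values below are arbitrary.

```python
import numpy as np
from scipy.special import expn

def zf_rate_closed_form(K, Nt, P):
    # R = K * exp(K/P) * log2(e) * sum_{n=1}^{Nt} E_n(K/P), the large-Nt expression above
    x = K / P
    return K * np.exp(x) * np.log2(np.e) * sum(expn(n, x) for n in range(1, Nt + 1))

def zf_rate_monte_carlo(K, Nt, P, trials=200_000, seed=0):
    # ||h_k||^2 ~ gamma(Nt, 1), i.e. chi-squared with 2*Nt degrees of freedom over 2
    g = np.random.default_rng(seed).gamma(shape=Nt, scale=1.0, size=trials)
    return K * np.log2(1.0 + (P / K) * g).mean()

K, Nt, P = 4, 64, 4.0
print(zf_rate_closed_form(K, Nt, P), zf_rate_monte_carlo(K, Nt, P))
```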
to capture this poor scattering nature , in the simulation, we adopt a geometric channel model - where each user is assumed to observe the same number of propagation paths , denoted by , the strength associated with the path seen by the user is represented by ( assuming ) , and is the random azimuth ( elevation ) angle of departure drawn independently from uniform distributions over $ ] . is the array response vector depending only on array structures .here we consider a uniform linear array ( ula ) whose array response vector admits a simple expression , given by ( * ? ? ?* eq . ( 6 ) ) where is the normalized antenna spacing .we compare in fig .[ fig : mmwavecomparescheme ] our proposed pzf scheme against the beamspace mimo ( b - mimo ) scheme proposed in , which essentially steers streams onto the approximate strongest paths ( using dft matrix columns ) at the rf domain and performs low - dimensional baseband zf precoding based on the equivalent channel . for fair comparison ,the bs is also assumed to have a total of chains .the b - mimo scheme achieves desirable performance in line - of - sight ( los ) channel but fails to capture sparse multipath components in non - los channels .in this paper , we have studied a large multiuser mimo system under practical rf hardware constraints .we have proposed to approach the desirable yet infeasible full - complexity zf precoding with low - complexity hybrid pzf scheme .the rf processing was designed to harvest the large power gain with reasonable complexity , and the baseband precoder was then introduced to facilitate multi - stream processing .its performance has been characterized in a closed form and further demonstrated in both rayleigh fading and poorly scattered mmwave channels through computer simulations .f. rusek , d. persson , b. k. lau , e. g. larsson , t. l. marzetta , o. edfors , and f. tufvesson , `` scaling up mimo : opportunities and challenges with very large arrays , '' _ ieee sig . process . mag ._ , vol . 30 , no . 1 ,4060 , jan . 2013 .v. venkateswaran and a. j. van der veen , `` analog beamforming in mimo communications with phase shift networks and online channel estimation , '' _ ieee trans .sig . process ._ , vol .58 , no . 8 , pp .41314143 , aug .w. roh _ et al ._ , `` millimeter - wave beamforming as an enabling technology for 5 g cellular communications : theoretical feasibility and prototype results , '' _ ieee commun . mag ._ , vol .52 , no . 2 , pp .106113 , feb .o. e. ayach , s. rajagopal , s. abu - surra , z. pi , and r. w. heath , jr .`` spatially sparse precoding in millimeter wave mimo systems , '' _ ieee trans .wireless commun ._ , vol .13 , no . 3 , pp . 14991513 , mar .2014 .j. choi , v. raghavan , and d. j. love , `` limited feedback design for the spatially correlated multi - antenna broadcast channel , '' in _ proc .ieee global telecommun .( globecom ) _ , dec .2013 , pp .34813486 .alouini and a. j. goldsmith , `` capacity of rayleigh fading channels under different adaptive transmission and diversity - combining techniques , '' _ ieee trans . veh ._ , vol .48 , no . 4 , pp .11651181 , july 1999 .
|
massive multiple - input multiple - output ( mimo ) is envisioned to offer considerable capacity improvement , but at the cost of high complexity of the hardware . in this paper , we propose a low - complexity hybrid precoding scheme to approach the performance of the traditional baseband zero - forcing ( zf ) precoding ( referred to as full - complexity zf ) , which is considered a virtually optimal linear precoding scheme in massive mimo systems . the proposed hybrid precoding scheme , named phased - zf ( pzf ) , essentially applies phase - only control at the rf domain and then performs a low - dimensional baseband zf precoding based on the effective channel seen from baseband . heavily quantized rf phase control up to bits of precision is also considered and shown to incur very limited degradation . the proposed scheme is simulated in both ideal rayleigh fading channels and sparsely scattered millimeter wave ( mmwave ) channels , both achieving highly desirable performance . massive mimo , hybrid precoding , millimeter wave ( mmwave ) mimo , rf chain limitations .
|
contrastive divergence ( cd ) algorithm has been widely used for parameter inference of markov random fields .this first example of application is given by hinton to train restricted boltzmann machines , the essential building blocks for deep belief networks .the key idea behind cd is to approximate the computationally intractable term in the likelihood gradient by running a small number ( ) of steps of a markov chain monte carlo ( mcmc ) run .thus it is much faster than the conventional mcmc methods that run a large number to reach equilibrium distributions . despite of cd s empirical success , theoretical understanding of its behavior is far less satisfactory .both computer simulation and theoretical analysis show that cd may fail to converge to the correct solution . studies on theoretical convergence properties have thus been motivated .yuille relates the algorithm to the stochastic approximation literature and gives very restrictive convergence conditions .others show that for restricted boltzmann machines the cd update is not the gradient of any function , but that for full - visible boltzmann machines the cd update can be viewed as the gradient of pseudo - likelihood function if adopting a simple scheme of gibbs sampling . in any case , the fundamental question of why cd with finite can work asymptotically in the limit of has not been answered .this paper studies the convergence properties of cd algorithm in exponential families and gives the convergence conditions involving the number of steps of markov kernel transitions , spectral gap of markov kernels , concavity of the log - likelihood function and learning rate of cd updates ( assumed fixed in our analyses ) .this enables us to establish the convergence of cd with a fixed to the true parameter as the sample size increases .section 2 describes the cd algorithm for exponential family with parameter and data .section 3 states our main result : denoting the parameter sequence generated by cd algorithm from an i.i.d .data sample , a sufficiently large can guarantee under mild conditions .section 4 shows that is a markov chain under , the conditional probability measure given any realization of data sample , and impose three constraints on , which hold asymptotically with probability .thereafter sections 5 - 8 studies under in the framework of markov chain theory , and show that the chain is _ positive harris recurrent _ and thus processes a unique invariant distribution .the invariant distribution concentrates around the mle at a speed arbitrarily slower than , and only affects the coefficient factor of the concentration rate .section 9 completes the proof of the main result . for convenience of the reader ,we assume throughout sections 3 - 9 that the exponential family under study is a set of continuous probability distributions and show in section 10 how to get a similar conclusion for the case of discrete probability distribution .we also provide two numerical experiments to illustrate the theories in section 11 .consider an exponential family over with parameter where is the carrier measure , is the sufficient statistic and is the cumulant generating function we assume is bounded , then the natural parameter domain ( if it is not empty ) . 
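the update being analysed can be made concrete on a toy example . the sketch below implements cd with m mcmc steps for a one - dimensional exponential family on [ 0 , 1 ] with sufficient statistic phi(x ) = x , using a metropolis kernel in place of the generic markov kernel ; the family , the proposal , the step size and all numerical values are illustrative assumptions , not taken from the paper .

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):                                   # sufficient statistic (illustrative)
    return x

def mh_step(x, theta, scale=0.2):
    """one metropolis step per chain, targeting p_theta(x) ~ exp(theta*phi(x))
    on [0, 1]; boundary handling is kept deliberately crude for brevity."""
    prop = np.clip(x + scale * rng.standard_normal(x.shape), 0.0, 1.0)
    accept = np.log(rng.uniform(size=x.shape)) < theta * (phi(prop) - phi(x))
    return np.where(accept, prop, x)

def cd_m(data, m=1, eta=0.05, iters=2000):
    """contrastive divergence with a fixed learning rate: the intractable
    model expectation is replaced by the sample average of phi after m
    mcmc steps started from the observed data."""
    theta, data_stat = 0.0, phi(data).mean()
    for _ in range(iters):
        x = data.copy()
        for _ in range(m):
            x = mh_step(x, theta)
        theta += eta * (data_stat - phi(x).mean())
    return theta

# synthetic data roughly following theta* = 2, obtained by a long mcmc run
x = rng.uniform(size=5000)
for _ in range(500):
    x = mh_step(x, 2.0)
print("cd-1 estimate (noisy):", cd_m(x, m=1))
```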
is convex and differentiable at any interior point of the natural parameter domain , and both the gradient and hessian of cumulant generating function exist \\ \sigma(\theta ) & \triangleq \nabla^2 \lambda(\theta ) = \mathbb{c}\text{ov}_\theta [ \phi(x)]\end{aligned}\ ] ] given an i.i.d .sample generated from a certain underlying distribution , the log - likelihood function is and the gradient assuming the positive definiteness of , the maximum likelihood estimate ( mle ) uniquely exists and satisfies or equivalently maximum likelihood learning can be done by gradient ascent \ ] ] where learning rate . when computing the gradient , the first term is easy to compute .but it is usually difficult to compute the second term , which involves a complicated integral over .markov chain monte carlo ( mcmc ) methods may generate a random sample from a markov chain with the equilibrium distribution and approximate by the sample average . however , markov chains take a large number of steps to reach the equilibrium distributions . to address this problem ,hinton proposed the contrastive divergence ( cd ) method .the idea of cd is to replace and with respectively , where is obtained by a small number ( ) of steps of an mcmc run starting from the observed sample .formally , denote by the markov transition kernel with as equilibrium distribution .cd first run markov chains for steps and makes update .\ ] ] denote by the markov operator associated with , i.e. and by the second largest absolute eigenvalue of .markov kernel is said to have -spectral gap if . the convergence rate of mcmc depends on -spectral gap . throughout the paper , denotes the -step transition kernel of , denotes the distribution of markov chain after -step transition starting from initial distribution , and denotes the -step markov operator of .we also let denote the -norm .we base the convergence properties of cd algorithms for for exponential family of continuous distributions on the assumptions [ assumption : x ] , [ assumption : theta ] , [ assumption : concavity ] , [ assumption : operator continuity ] , [ assumption : spectral gap ] , [ assumption : kernel positiveness ] .theorem [ theorem : main ] states our main result , whose proof is presented in sections 4 - 9 .we later show in section 10 a similar conclusion for the case of discrete distribution . 1 .[ assumption : x ] is bounded , i.e. there exists some constant such that ^d ] .in particular , the drift condition holds outside any closed ball centering at mle of radius with , i.e. \le -\delta < 0\ ] ] with \\ & \ge \eta ( \beta^2 - 1 ) ( a r_n^2 - b r_n)\\ & = \eta ( \beta^2 - 1)c_n\end{aligned}\ ] ] * remark . * a function called _ supharmonic _ for a transition probability at if . 
and it is called _ strong supharmonic _ if for some positive .we actually prove in lemma [ lemma : drift condition ] that is strong supharmonic at .we later see in theorem [ theorem : tweedie ] a nice connection between strong supharmonic functions , positive recurrence of markov chains , and supmartingales .tweedie connected the _ drift condition _ in definition [ definition : drift condition ] to positive recurrence of sets in the state space by a markov chain .we restate this result in theorem [ theorem : tweedie ] and provide a proof based on sup - martingales and sup - harmonic functions in appendix .next , corollary [ corollary : positive recurrent balls ] combines lemma [ lemma : drift condition ] with theorem [ theorem : tweedie ] and concludes that the closed balls centering at the mle of radius are positive recurrent by the chain .[ theorem : tweedie ] ( theorem 6.1 in ) suppose a markov chain has a non - negative function on the state space satisfying the drift condition in definition [ definition : drift condition ] with some and set .let be the first hitting time of if starting from or the first returning time otherwise , then where is the transition probability of the chain .thus , if also holds , then the set is positive recurrent ._ lyapunov function _ is widely used in stochastic stability or optimal control study . as we have seen in theorem [ theorem : tweedie ] , a suitably designed _ lyapunov function _ can determine the positive recurrence of sets of a markov chain . we proceed to apply theorem [ theorem :tweedie ] to the markov chain , for which satisfies the _ drift condition _ outside of any closed ball in lemma [ lemma : drift condition ] , and conclude in corollary [ corollary : positive recurrent balls ] that closed balls centering at mle are positive recurrent by the chain . [ corollary : positive recurrent balls ] following theorem [ theorem : tweedie ] and lemma [ lemma : drift condition ] , for each are positive recurrent by the chain .let be the first hitting or returning time of by the chain .lemma [ lemma : drift condition ] establishes the drift condition for the likelihood gap function outside of , i.e. the compactness of and continuity of follow the boundedness of , implying both conditions of theorem [ theorem : tweedie ] are satisfied , thus is positive recurrent .next we prove the positive harris recurrence of the chain , which further implies the distribution convergence of markov chain in total variation .[ definition : small set ] an accessible set is called a small set of a markov chain if for some positive and probability measure over the state space . [ definition : positive harris recurrence ] a markov chain is called harris recurrent if there exists a set s.t . 1 . is recurrent . is a small set .if is positive recurrent in addition , then the chain is called positive harris recurrent .[ lemma : positive harris recurrence ] assume [ assumption : x ] , [ assumption : theta ] , [ assumption : concavity ] , [ assumption : operator continuity ] , [ assumption : spectral gap ] , [ assumption : kernel positiveness ] . for sufficiently large and for any and learning rate satisfying data sample satisfying ( [ eqn : sample mean ] ) , ( [ eqn : mle ] ) and ( [ eqn : empirical process ] ) , the chain generated by cd updates is positive harris recurrent . since corollary [ corollary : positive recurrent balls ] ensures the positive recurrence of , it suffices to show is a small set by checking definition [ definition : small set ] . 
since is an interior point of and a continuous mapping, is an interior point of . denoting by the boundary of , ( [ eqn : sample mean ] ) holds , for sufficiently large .then for any .\ ] ] assumption [ assumption : kernel positiveness ] implies has positive density over , which is strictly bounded away from for any , so does \ ] ] over ] .if where are constants , is a probability measure on , and then for every \right| > t \right ) \le \left(\frac{c_2 t}{\sqrt{s}}\right)^s e^{-2t^2},\ ] ] where constant only depends on .we proceed to bound the tail by using theorem [ theorem : vaart ] .assume [ assumption : x ] , [ assumption : theta ] and [ assumption : operator continuity ] and .then as , for any .let , then for , let \\ & = n^{-1/2}\sum_{i=1}^n \left [ \int \phi_j(y)k_\theta^m(x_i , y)dy - \int \phi_j(y)k_\theta^m p_{{\theta^*}}(y ) dy\right].\end{aligned}\ ] ] and view as a stochastic process indexed by . \\ & = \sum_{i=0}^{m-1 } \int_\mathcal{x } \left [ k_\theta k_\theta^{m - i-1}\phi_j - k_{\theta ' } k_\theta^{m - i-1}\phi_j\right](y ) k_{\theta'}^i(x , y ) dy\end{aligned}\ ] ] implying where is the metric and is the lipchitz constant introduced by assumption [ assumption : operator continuity ] .it concludes that denoting by the function class of , it follows that applying theorem [ theorem : vaart ] to function class yields as .further , as , completing the proof .we first show that , if , is a super - martingale adapted to s canonical filtration .the adaptedness of to follows being a -stopping time .it suffices to show \le 0 $ ] , then we also have integrability of non - negative by induction . indeed , \mathbb{i}(t \le t)\\ & = 0\\ ( m_{t+1 } - m_t ) \mathbb{i}(t \ge t+1 ) & = \left[\left(v(z_{t+1 } ) + ( t+1)\delta_1\right ) - \left(v(z_t ) + t\delta_1\right)\right]\mathbb{i}(t \ge t+1)\\ & = \left [ v(z_{t+1 } ) - v(z_t ) + \delta \right ]\mathbb{i}(t \ge t+1)\end{aligned}\ ] ] implying for & = \mathbb{e}_z \left [ ( m_{t+1 } - m_t ) \mathbb{i}(t \le t ) |\mathcal{g}_t \right ] + \mathbb{e}_z \left[(m_{t+1}-m_t ) \mathbb{i}(t \ge t+1 ) |\mathcal{g}_t \right]\\ & = \mathbb{e}_z \left [ \left(v(z_{t+1 } ) - v(z_t ) + \delta\right ) \mathbb{i}(t\ge t+1)| \mathcal{g}_t \right]\\ & \overset{(i)}{= } \mathbb{e}_z \left [ \left(v(z_{t+1 } ) - v(z_t ) + \delta\right ) | \mathcal{g}_t \right ] \mathbb{i}(t \ge t+1)\\ & \overset{(ii)}{= } \mathbb{e}_z \left [ \left(v(z_{t+1 } ) - v(z_t ) + \delta\right ) | z_t \right ] \mathbb{i}(t \ge t+1)\\ &\overset{(iii)}{\le } \left[-\delta + \delta\right ] \mathbb{i}(t \ge t+1)\\ & = 0\end{aligned}\ ] ] where ( i ) follows is a -stopping time , and thus , ( ii ) is due to the markov property of and ( iii ) follows ( given and ) and the drift condition in definition [ drift condition ] .consequently , for .that is implying with non - negativeness of taking , the monotone convergence theorem yields furthermore , one step analysis gives for , completing the proof .this work is supported by nsf of us under grant dms-1407557 .the authors would like to thank prof .lester mackey , dr .rachel wang and weijie su for valuable advice .hinton , g. e. ( 2002 ) .training products of experts by minimizing contrastive divergence ._ neural computation _ * 14(8 ) * 17711800 .hinton , g. , s. osindero and y. teh ( 2006 ) . a fast learning algorithm for deep belief nets . _ neural computation _* 18(7 ) * 15271554 .mackay , d. ( 2001 ) .failures of the one - step learning algorithm . 
in [ available electronically at http://www.inference.phy.cam.ac.uk/mackay/abstracts/gbm.html ] . sutskever , i. and tieleman , t. ( 2010 ) . on the convergence properties of contrastive divergence . in _ international conference on artificial intelligence and statistics _ 789 - 795 . hyvärinen , a. ( 2006 ) . consistency of pseudolikelihood estimation of fully visible boltzmann machines . _ neural computation _ * 18(10 ) * 2283 - 2292 .
|
this paper studies the convergence properties of the contrastive divergence algorithm for parameter inference in exponential families , by relating it to markov chain theory and the stochastic stability literature . we prove that , under mild conditions and given a finite i.i.d . data sample , in an event with probability approaching 1 the sequence generated by the cd algorithm is a positive harris recurrent chain , and thus possesses a unique invariant distribution . the invariant distribution concentrates around the maximum likelihood estimate at a speed arbitrarily slower than , and the number of steps in the markov chain monte carlo run only affects the coefficient factor of the concentration rate . finally , we conclude that the cd parameter sequence converges to the true parameter as the sample size increases .
|
the inferential building block of conventional statistics , data mining and machine learning rests on the assumption that data is identically and independently distributed ( iid ) . for example , the central limit theorem that the average of random variables ( with finite variance ) approaches a normal distribution assumes the random variables are iid .similarly standard linear regression postulates a relationship between a dependent and independent variables as where is the dependent variable , is the matrix in which each column represents an independent variable , and is the error vector , assumes that the errors ( ) are iid .the regression parameters when estimated then describe the dependent variable as a linear combination of the independent variables . a recurring problem in spatial statistics and data mining has been to carry out a spatial inferential task without invoking the iid assumption .indeed spatially indexed random variables neither tend to be independent nor identical .in fact tobler s first law of geography that _ everything is related to everything else , but things that are nearby are more related than distant things _ is clearly a statement against making an iid assumption in the analysis of spatial data .a common approach has been to extend the inferential process by using a spatially autoregressive step here , if on performing standard linear regression the residual error term shows spatial dependence , then the spatial contiguity or adjacency matrix is introduced to capture all the spatial dependency present in the data .the value of the parameter is estimated ( between 0 and 1 ) along with the non - spatial parameters , and if it is large ( towards 1 ) , then this is taken as evidence of spatial effects in the data .a large body of work has developed to exploit the above model for spatial regression tasks .however , two specific problems are identified in the literature .first , , which could take many different forms based on local connectivity assumptions , is always assumed a priori and imposed upon the problem . spatial dependence of the non - spatial independent or dependent variables , as we will see , could take complex forms , and an a priori assumption of could at best hide latent dependences ( by not modeling long range spatial dependences since standard forms of model only local spatial dependence , for example ) , or at worst introduce artifact dependences ( by positing that two points close in space are similar even if they have very different behaviors , for example ) . 
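one standard way to check whether the ols residuals "show spatial dependence" , as described above , is a spatial autocorrelation diagnostic such as moran's i . that particular statistic is not named in the text and is used here only as an illustration ; the weight matrix w and the regression inputs in the commented usage are assumed to exist already .

```python
import numpy as np

def morans_i(e, W):
    """moran's i of a residual vector e under a spatial weight matrix W;
    values well above the null expectation of -1/(n-1) suggest positive
    spatial autocorrelation in the ols residuals."""
    e = e - e.mean()
    n = len(e)
    return (n / W.sum()) * (e @ W @ e) / (e @ e)

# hypothetical usage: X, y and a contiguity matrix W are assumed to exist
# beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
# print(morans_i(y - X @ beta_ols, W))
```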
the role of this has been a source of much debate in the econometrics community , and its modeling is acknowledged as an open problem .second , even when the spatial parameter is estimated , it is unclear how itself could be helpful in making predictions on the nature of the non - spatial dependent or independent variables , beyond demonstrating that a particular problem has strong spatial effects .if however , the were estimated from the data along with the other regression parameters , it could be used to understand the spatial dependence inherent in the problem and then used to predict dependences and clustering properties of either the dependent or the independent variables .it could also implicitly explain the spatial effects that come into play through omitted variables that have not been modeled .in addressing the above issues , in this work we propose a convex optimization formulation of the problem where we infer both the contiguity matrix and the regression parameters .inferring both and has several advantages in capturing the specifics of the problem that are not captured in a standard assumption of : 1 . by inferring from the data we are directly able to infer both short distance connections and long distance connections .for example , houses along a major transport link ( rail or highway ) in a city may show correlations in prices over both short and long distances .usually , the a priori assumption of captures only short range connectivity and misses out on the long range connectivity properties that may exist in the data .2 . by inferring from the data we are directly able to capture clustering properties .usually , the a priori assumption of captures only local contiguity and misses out on clustered behavior that may exist in the data .this allows us to capture two different clustering effects : ( i ) a cluster that is discontinuous in space , ( e.g. , two housing submarkets in a city sharing dwelling stock and socio - economic characteristics may not be phyiscally adjacent but their house prices maybe directly correlated ) , and ( ii ) different clusters that overlap in space , ( e.g. low priced and high priced distinct submarket clusters may be superposed at the same point in space . ) however their are certain unique challenges that emerge when dealing with the above problem : 1 .the number of data items will always be less than the number of parameters . for locations in space and explanatory variables , the task is to estimate parameters .we address this by a sparsity regularization condition on , in which we choose an optimum with minimum norm .such a sparse solution also causes the picking out " of the most important spatial dependence relations . at the same time, it ensures that we choose an optimum that satisfies a given condition out of the infinite possible optima .2 . how should one carry out a validation of ; i.e. , once we estimate a , how do we validate whether its learnt form is actually giving us meaningful information ? we address this by posing the optimization formulation in a form that ensures consistency with ordinary least squares ( ols ) solutions . that is , when the sparsity condition on is removed , we obtain the non - spatial least squares solution , since is forced to be close to zero.the ols solution is treated as the baseline for prediction accuracy .then , when the weight on the sparsity condition is increased , and we learn a sparse , then the prediction accuracy should improve . 
additionally , using the boston and sydney housing market data, we also validate the cluster structure detected by by comparing it with actual price submarkets in the data , to ascertain that the spatial dependence of the dependent variable being uncovered by is meaningful .finally how can the inferred parameters be used in a given application setting ?we address this by showing that the learnt leads to both better accuracy of prediction as well as the simultaneous identification of cluster structure in the data . in the context of housing markets, this leads to simultaneous estimation of global regression parameters along with the identification of housing submarkets and spatial spillover effects .once the submarkets have been identified , independent local ols could be run for these to identify subsets of explanatory variables that play special roles in specific submarkets .the rest of the paper is organized as follows .section [ background ] provides a background on the application setting : housing markets .section [ probdef ] formulates the optimization problem , and section [ algo ] provides the admm solution and algorithm .section [ results ] presents the results of applying the model and algorithm to housing market data from boston and sydney .finally , related work is presented in section [ relatedwork ] and we end with conclusions in section [ conclusions ] .the approach we present in the paper is completely general and can be applied wherever spatial data is used . however , for concreteness , all our examples and experimental results will be based on housing markets analysis and predictions . the primary tool in housing econometrics for predicting relationships between a dependent variable ( e.g. price ) and a set of independent variables ( e.g. socio - economic or hedonic variables ) is regression .traditional regression studies are modeled , however , on the assumption that the entire city or region is a unitary housing market .if the entire city or region is a single market , then traditional regression models need not consider spatial dependence beyond global location variables .the parameters of regression could then predict price and other trends in terms of sets of socio - economic and hedonic variables that operate consistently for the entire market .however , as new research reveals , this view is problematic .the current mainstream view of a unitary housing market stems from the seminal work on access - space models by alonso and muth , where the price gradient for housing costs is modeled in terms of the travel distance at which housing is located from the central business district ( cbd ) .although powerful in terms of providing analytically tractable long term equilibrium behaviors in terms tradeoffs of location , travel costs , and housing price , these models make a number of simplifying assumptions .there are no distinctions of housing or dwelling types , the city is embedded in a featureless space , travel costs are the same in all directions , and a monocentric city is assumed with a single cbd .these assumptions , however , do not hold for most cities .thus , while the access - space models explain the large scale suburbanisation behaviors successfully , they are unable to account for the heterogeneity of the housing market , observed by the consistent presence of housing submarkets within a larger market . 
in the presence of polycentricity of urban structure (as opposed to one cbd ) and other socio - economic , transport , land use and housing stock characteristics that break the symmetry of the monocentric price gradient model , a single city or region s housing market could show price gradients of a highly variable nature .a significant body of research as well as current market conditions now provide convincing evidence of quasi - independent housing submarkets within a highly segmented unitary housing market .a housing submarket is defined as a set of dwellings that are close substitutes for other dwellings in the same submarket , but a poor substitute for dwellings in other submarkets . in terms of spatial distribution, views are divided between whether such submarkets are geographically defined , or defined on the basis of structural variables such as dwellings types . while the geographical definition provides spatially demarcated and non - overlapping submarkets , the structural definitions provide spatially segregated and discontinuous submarkets scattered through the city ( e.g. as has been defined for sydney and melbourne .however , it is accepted that neither the geographical nor the structural definition by itself can provide accurate identification of submarket structures , and that both georgaphy and structural features must be considered jointly for the task . to the best of our knowledge, we do not know of algorithms , methods or theory that addresses how geographical heterogenous clustering and global structural trends can be brought together jointly to accurately identify housing submarkets . even though the regression and clustering problems are intimately related , current approaches are usually sequential and either the regression problem sits divorced from the clustering problem , or the clustering problem is qualitatively pre - treated in ad - hoc ways .this concrete application problem also outlines the general need for learning the structure of short range , long range , or clustered spatial dependence along with the simultanoeus estimation of regression parameters .let be an vector of prices ( or any other dependent variable ) , let be an matrix , each column of which represent one of explanatory variables , and let be a vector of regression parameters .then the standard regression model is stated as , where represents the error term . in the presence of spatial autocorrelation, the error term does not show the identically independently distributed ( i.i.d . )structure , but instead shows spatial patterns of dependence .an example standard response in spatial econometrics and statistics is a spatial autoregressive model ( sar ) model of the form : , where the matrix is an spatial dependence matrix . as discussed , this is modeled based on local distance , contiguity or lattice type assumptions .the principle observation is that it is imposed onto the problem as a data term , instead of being treated as a variable .we now consider the more realistic case where we treat as a variable and learnt . 
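as a point of reference for what follows , the fixed - w sar model just described can be simulated directly . the sketch below builds a k - nearest - neighbour contiguity matrix ( one common a priori choice ) , generates data from the model , and fits the non - spatial ols baseline ; all sizes and parameter values are illustrative assumptions .

```python
import numpy as np

rng = np.random.default_rng(1)

n, p, rho = 100, 3, 0.6                       # illustrative sizes and spatial parameter
coords = rng.uniform(size=(n, 2))             # random point locations

# 5-nearest-neighbour contiguity, row-normalized (an a priori choice of w)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)
W = np.zeros((n, n))
for i in range(n):
    W[i, np.argsort(d[i])[:5]] = 1.0
W /= W.sum(axis=1, keepdims=True)

X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5])
eps = 0.1 * rng.standard_normal(n)

# sar model: y = rho*W*y + X*beta + eps  =>  y = (I - rho*W)^{-1} (X*beta + eps)
y = np.linalg.solve(np.eye(n) - rho * W, X @ beta_true + eps)

# non-spatial ols baseline that ignores the autoregressive term
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print("true beta:", beta_true, "ols estimate:", beta_ols)
```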
in the case of housing prices ,particular neighborhoods for example show similar price trends because of the common presence of highly regarded local amenities such as school quality or negative criteria such as crime rates .thus , it is likely that the price data will show spatial clustering , in which case could have a clustered structure with short and long range dependences rather than following local lattice , distance or contiguity assumptions .in other words , could have a higher - order structure that could be revealed using the data itself . since we have to estimate parameters , and the number of variables is always going to be higher than the data, we will have an infinite number of solutions .in this instance , we draw upon the condition that has to be sparse and positive , since a dense implies every location is related to every other , and is unlikely to provide meaningful information . hence we learn the with the minimum -norm .thus , the optimization problem can be stated as : will use admm to solve the above optimization problem . introducing auxiliary variable , we can rewrite the above as then , the augmented lagrangian is written as = \\ & \frac{1}{2 }||\mathbf{y } - \mathbf{wy } - \mathbf{x}\beta||_{2}^{2 } ~+~ \lambda_{1 } ||\mathbf{a}||_{1 } \\ & ~+~ \text{tr}[\mathbf{\delta_{1}}^{t } ( \mathbf{a } - \mathbf{w } + diag(\mathbf{w } ) ) ] \\ & ~+~ \frac{\rho_{1}}{2 } ||\mathbf{a } - \mathbf{w } + diag(\mathbf{w})||_{2}^{2 } ~+ i_{+}(\mathbf{w } ) \end{aligned}\ ] ] where when and otherwise . ~+ \\\frac{\rho_{1}}{2 } ||\mathbf{a } - \mathbf{w } + diag(\mathbf{w})||_{2}^{2 } ~+ i_{+}(\mathbf{w } ) \bigg ] \end{aligned}\ ] ] looking at the individual terms , \\ = \frac{1}{2 } \frac{\partial}{\partial \mathbf{w } } \bigg [ \text{tr } ( \mathbf{y } - \mathbf{wy } - \mathbf{x } \beta ) ( \mathbf{y } - \mathbf{wy } - \mathbf{x } \beta)^{t } \bigg ] \\= \frac{1}{2 } \bigg ( \frac{\partial}{\partial \mathbf{w } } tr \bigg [ -\mathbf{yy}^{t}\mathbf{w}^{t } - \mathbf{wyy}^{t } + \mathbf{wyy}^{t}\mathbf{w}^{t } \\ + \mathbf{wy}\beta^{t}\mathbf{x}^{t } + \mathbf{x}\beta \mathbf{y}^{t}\mathbf{w}^{t } \beta \bigg ] \bigg ) \\ = -\mathbf{yy}^{t } + \mathbf{wyy}^{t } + \mathbf{x}\beta \mathbf{y}^{t } \end{aligned}\ ] ] similarly , \\ = - \mathbf{\delta_{1 } } + \text{diag}(\mathbf{\delta_{1 } } ) \end{aligned}\ ] ] similarly , combining all the individual terms and setting them to zero , and applying the positivity constraint on , we get ^ { -1 } ( \mathbf{yy}^{t } - \mathbf{x}\beta \mathbf{y}^{t } + \mathbf{\delta_{1 } } - \text{diag}(\mathbf{\delta_{1 } } ) + \\ \rho_{1 } \mathbf{a } + \rho_{1 } \text{diag } ( \mathbf{a } ) ) \bigg ] _ + \end{aligned}\ ] ] \\ + \frac{\rho_{1}}{2 } ||\mathbf{a } - \mathbf{w } + diag(\mathbf{w})||_{2}^{2 } \bigg ] \end{aligned}\ ] ] applying the soft - thresholding operator , we will get \end{aligned}\ ] ] this can be solved as \\ = \frac{1}{2 } \frac{\partial}{\partial \beta } \bigg [ \text{tr } ( \mathbf{y } - \mathbf{wy } - \mathbf{x } \beta ) ( \mathbf{y } - \mathbf{wy } - \mathbf{x } \beta)^{t } \bigg ]\\ = \frac{1}{2 } \bigg ( \frac{\partial}{\partial \beta } \text{tr } \bigg [ -\mathbf{y } \beta^{t } \mathbf{x}^{t } + \mathbf{wy } \beta^{t } \mathbf{x}^{t } - \mathbf{x } \beta \mathbf{y}^{t } \\ + \mathbf{x } \beta \mathbf{y}^{t } \mathbf{w}^{t } + \mathbf{x } \beta \beta^{t } \mathbf{x}^{t } \bigg ] \bigg ) \\ = -\mathbf{x}^{t }\mathbf{y } + \mathbf{x}^{t } \mathbf{w y } + \mathbf{x}^{t } \mathbf{x } \beta ) \end{aligned}\ ] ] setting this to 
zero , we get \end{aligned}\ ] ] finally , we can update the lagrangian multiplier as * convergence .* this is run iteratively to convergence . sincethis optimization problem is convex , admm is guaranteed to attain a global minimum .we have used a stopping criterion where we measure the absolute difference between the primal and dual variables and stop when the error is stabilized below a given threshold .we test our approach on two data sets : ( a ) the boston housing data set , that provides the prices and explanatory variable measurements for 506 houses along with the latitude and longitude locations , and ( b ) a data set for 51 local government areas ( lgas ) of sydney , further subdivided into 862 suburbs , with 2015 prices of houses and related explanatory variables available for both area definitions .we have independently run the optimization model on cvx as well as our admm algorithm , and while the results from both are the same , the cvx takes much more time as the number of locations ( ) climbs even upto 500 .for example , testing the sydney suburbs dataset took about 960 seconds ( about 16 minutes ) on cvx , whereas it takes negligible time with admm .the inference of the spatial weights matrix should , as one of its main contributions , scale to identifying fine scale spatial clustering ( in this case , housing submarkets ) for latitude - longitude level microdata upto the individual house level .this would mean big data sets with millions of locations .hence , we establish the justification for proposing the admm algorithm . our results , contributions and validations for each data set are reported in three sections , discussed below .first , we present a comparison of the result of our model with the ordinary least squares ( ols ) and spatial autoregressive models ( sar ) , showing consistent estimates of across all the three cases . however , in our approach , we also learn the structure of spatial dependence , through the spatial weights matrix , that is ignored in ols and the form of which is assumed _ a priori _ as a data term in sar . while we use the sar model only as one of the possible spatial regression models , we note that almost all spatial regression models assume an a priori fixed .this is problematic , because a priori assumptions rest on modeling only local spatial connectivity , whereas the data may contain a mix of short range and long range connectivity as well as clustering .hence , to learn the structure of would have many advantages . 
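for concreteness , a minimal numpy transcription of the admm iteration derived earlier in this section is sketched below . it keeps the four updates ( beta by least squares , w by its quadratic subproblem followed by the non - negativity projection , a by soft - thresholding , and dual ascent on the splitting constraint ) , but it simplifies the handling of the diagonal terms by simply zeroing diag(w ) after each update , and the fixed iteration count stands in for a proper stopping rule ; it should be read as a sketch of the scheme , not as the authors' implementation .

```python
import numpy as np

def soft_threshold(M, t):
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def spatial_admm(y, X, lam=0.1, rho=1.0, iters=500):
    """sketch of admm for  min_{W>=0, beta} 0.5*||y - W y - X beta||^2 + lam*||A||_1
    with the splitting constraint A = W and a scaled dual variable U.
    diagonal handling is simplified: diag(W) is zeroed after each update."""
    n = len(y)
    W = np.zeros((n, n))
    A = np.zeros((n, n))
    U = np.zeros((n, n))
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        # beta-update: least squares on the spatially filtered response
        beta, *_ = np.linalg.lstsq(X, y - W @ y, rcond=None)
        # W-update: quadratic subproblem, then project onto W >= 0, zero diagonal
        rhs = np.outer(y - X @ beta, y) + rho * (A + U)
        W = rhs @ np.linalg.inv(np.outer(y, y) + rho * np.eye(n))
        W = np.maximum(W, 0.0)
        np.fill_diagonal(W, 0.0)
        # A-update: soft-thresholding enforces sparsity of the learnt weights
        A = soft_threshold(W - U, lam / rho)
        # dual ascent on the splitting constraint A = W
        U = U + A - W
    return W, beta
```

in practice the penalty parameter , the regularization weight and the iteration count would be tuned , and the primal and dual residuals monitored , in line with the stopping criterion mentioned above .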
the main contribution of the approach presented in this paper is learning this along with the consistent estimation of the regression parameters .we note here that the consistent estimation of regression parameters for the entire region is important , since regression parameters are required to be global " for most policy settings .thus , a validation check on any identified structure of is that its co - inference with does not lead to wildly deviant or fluctuating forms for and is largely consistent with ols results .for example , even though in principle , it is possible to run local regressions for each lga or census tract or suburb within a metropolitan or state region , policy decisions are usually made by authorities at the metropolitan or state level on the basis of the global paramters .it is also at the metropolitan or state level that housing submarket clusters need to be identified for the entire urban region .this would require that regression parameters be globally estimated for entire regions ( as they usually are ) .this approach provides an added global - local connection , since it does not preclude the possibility of carrying out local regression too .once the estimation of local spatial clusters are performed using , it is always possible to run local regression models for the identified submarket clusters ( that are hard to define otherwise , as noted in the background section ) .the boston housing data set was produced in 1978 , incorporating measurements of 14 variables across 506 census tracts , and is available as part of the matlab spatial econometrics package .the log of median value of owner - occupied homes in 10,000 , pupil - teacher ratio by town , the proportion of black population by town , and lower status of the population ) .we first report on the estimation of the regression parameters .figure [ bostonfig1 ] shows the prediction from ols , sar , and admm , respectively , along with the estimates and corresponding values .the spatial weights matrix for the sar estimation was computed using the inbuilt routines in the matlab spatial econometrics package .it is seen that the co - estimation of causes the to rise , at the same time , maintaining the consistency of the regression parameters . in the next sectionwe report on analysing the structure of the inferred , but simply note for now that the co - estimation of spatial dependence from data can actually improve prediction capability , through the higher that results while holding consistent . at the time of writing this paper , the housing market in sydneyis witnessing a particularly turbulent time . in 2014 - 15, there were warnings of an impending housing bubble " , prices have risen consistently and steeply in the last few years , and housing is deemed as acutely unaffordable especially for the younger and poorer sections of the population . by 2016 , there are now warnings of an impending fall in prices .it is an important problem to estimate the fine scale spatial organization of housing submarkets in especially sydney and to some extent melbourne , since they are the regions witnessing the highest rises in price as well as demand population . 
to the best of our knowledge , no statistically or quantitatively based identification of sydney and melbourne housing submarkets has been performed to date , though there is ample evidence of policy interest in the issue and of qualitative urban planning approaches directed towards tackling this problem . the australian bureau of statistics publishes capital city housing price index series , but assumes the entire metropolitan region as a single housing market , without the identification of the clustered submarkets within the region that are witnessed and experienced by residents . large parts of cities being unaffordable and out of reach pose serious socio - economic affordability and equity concerns . the agency that records all of the residential property transactions is core logic / rp data . they have made the data available for university and academic research through an organization called sirca . we have obtained 9 months ( the latest at the time of writing this paper ) of 2015 residential property transactions data for the sydney housing market , which we aggregate over each geographical area of analysis to produce an annual aggregate . the metropolitan region of sydney is divided into 43 local government areas ( lgas ) and about 8 outlying lgas that are adjacent to metropolitan sydney and seeing spillover growth . we have included all these 51 regions in our analysis . the 51 lgas further subdivide into about 862 suburbs ( roughly corresponding to individual post - code areas ) , which form the lowest level of area definitions for which aggregated data is available from rp data . data is available for both houses and units ( apartments ) separately , and in this paper we consider the individual detached houses data , since sydney is predominantly a detached dwelling houses market ( though the apartment market is becoming increasingly important due to increased population pressures ) . the dependent variable is the median price per location ( lga or suburb ) , the log of which has been considered for the suburb level data , and the explanatory variables are the numbers of houses sold per location , the total monetary value in dollars of the properties sold , the number of properties sold under 200,000 - 400,000 - 600,000 - 800,000 - 1 million - 2 million , the rental median for each area , the total number of dwellings listed for selling , and the total number of dwellings in the area . figure [ sydney_fig3 ] shows the predictions of regression parameters from both ols and the admm ( the predictions from sar are close to these ) . again , it is seen that , with consistent estimates emerging from all approaches , the values are higher for the admm , showing a better fit . it is also interesting to observe , in the sydney case , the behaviour of the explanatory variables that measure how many properties are being sold within given price ranges .
|
inference methods in traditional statistics , machine learning and data mining assume that data is generated from an independent and identically distributed ( iid ) process . spatial data exhibits behavior for which the iid assumption must be relaxed . for example , the standard approach in spatial regression is to assume the existence of a contiguity matrix which captures the spatial autoregressive properties of the data . however all spatial methods , till now , have assumed that the contiguity matrix is given apriori or can be estimated by using a spatial similarity function . in this paper we propose a convex optimization formulation to solve the spatial autoregressive regression ( sar ) model in which both the contiguity matrix and the non - spatial regression parameters are unknown and inferred from the data . we solve the problem using the alternating direction method of multipliers ( admm ) which provides a solution which is both robust and efficient . while our approach is general we use data from housing markets of boston and sydney to both guide the analysis and validate our results . a novel side effect of our approach is the automatic discovery of spatial clusters which translate to submarkets in the housing data sets .
|
high speed communication systems such as flash memory , optical communication and free space optics require extremely fast and low complexity error correcting schemes . among existing decoding algorithms for ldpc codes on the bsc , the bit flipping ( serial or parallel ) algorithms are least complex yet possess desirable error correcting abilities .first described by gallager , the parallel bit flipping algorithm was shown by zyablov and pinsker to be capable of asymptotically correcting a linear number of errors ( in the code length ) for almost all codes in the regular ensemble with left - degree .later , sipser and spielman used expander graph arguments to show that this algorithm and the serial bit flipping algorithm can correct a linear number of errors if the underlying tanner graph is a good expander .note that their arguments also apply for regular codes with left - degree .it was then recently shown by burshtein that regular codes with left - degree are also capable of correcting a linear number of errors under the parallel bit flipping algorithm . despite being theoretically valuable ,the above - mentioned capability to correct a linear number of errors is not practically attractive .this is mainly because the fraction of correctable errors is extremely small and hence the code length must be large . besides , the above - mentioned results do not apply for column - weight - three codes , which allow very low decoding complexity . also , compared to hard decoding message passing algorithms such as the gallager a / b algorithm , the error performance of the bit flipping algorithms on finite length codes is usually inferior .this drawback is especially visible for column - weight - three codes for which the guaranteed error correction capability is upper - bounded by ( to be discussed later ) , where is the girth of a code .the fact that a code with or can not correct certain error patterns of weight two indeed makes the algorithm impractical regardless of its low complexity . in recent years, numerous bit - flipping - oriented decoding algorithms have been proposed ( see for a list of references ) .however , almost all of these algorithms require some soft information from a channel with capacity larger than that of the bsc .a few exceptions include the probabilistic bit flipping algorithm ( pbfa ) proposed by miladinovic and fossorier . in that algorithm , whenever the number of unsatisfied check nodes suggests that a variable ( bit ) node should be flipped , it is flipped with some probability rather than being flipped automatically .this random nature of the algorithm slows down the decoding , which was demonstrated to be helpful in practical codes whose tanner graphs contain cycles .the idea of slowing down the decoding can also be found in a bit flipping algorithm proposed by chan and kschischang .this algorithm , which is used on the additive white gaussian noise channel ( awgnc ) , requires a certain number of decoding iterations between two possible flips of a variable node . in this paper, we propose a new class of bit flipping algorithms for ldpc codes on the bsc .these algorithms are designed in the same spirit as the class of finite alphabet iterative message passing algorithms . in the proposed algorithms , an additional bitis introduced to represent the strength of a variable node .given a combination of satisfied and unsatisfied check nodes , the algorithm may reduce the strength of a variable node before flipping it . 
an additional bitcan also be introduced at a check node to indicate its reliability .the novelty of these algorithms is three - fold .first , similar to the above - mentioned pbfa , our class of algorithms also slows down the decoding .however they only do so when necessary and in a deterministic manner .second , their deterministic nature and simplicity allow simple and thorough analysis .all subgraphs up to a certain size on which an algorithm fails to converge can be found by a recursive algorithm .consequently , the guaranteed error correction capability of a code with such algorithms can be derived .third , the failure analysis of an algorithm gives rise to better algorithms .more importantly , it leads to decoders which use a concatenation of two - bit bit flipping algorithms .these decoders show excellent trade offs between complexity and performance . the rest of the paper is organized as follows .section [ sect_pre ] provides preliminaries .section [ sect_tbd ] motivates and describes the class of two - bit bit flipping algorithms .section [ sect_analysis ] gives a framework to analyze these algorithms .finally , numerical results are presented in section [ sect_numerical ] along with discussion .let denote an ( ) ldpc code over the binary field gf(2 ) . is defined by the null space of , an _ parity check matrix_. is the bi - adjacency matrix of , a tanner graph representation of . is a bipartite graph with two sets of nodes : variable nodes and check nodes . in a -left - regular code ,all variable nodes have degree .each check node imposes a constraint on the neighboring variable nodes .a check node is said to be satisfied by a setting of variable nodes if the modulo - two sum of its neighbors is zero , otherwise it is unsatisfied .a vector is a codeword if and only if all check nodes are satisfied .the length of the shortest cycle in the tanner graph is called the girth of . in this paper, we consider 3-left - regular ldpc codes with girth , although the class of two - bit bit flipping algorithms can be generalized to decode any ldpc code .we assume transmission over the bsc .a variable node is said to be corrupt if it is different from its original sent value , otherwise it is correct . throughout the paper ,we also assume without loss of generality that the all - zero codeword is transmitted .let denote the input to an iterative decoder . with the all - zero codeword assumption , the support of , denoted as is simply the set of variable nodes initially corrupt . in our case ,a variable node is corrupt if it is 1 and is correct if it is 0 .a simple hard decision decoding algorithm for ldpc codes on the bsc , known as the parallel bit flipping algorithm is defined as follows . forany variable node in a tanner graph , let and denote the number of satisfied check nodes and unsatisfied check nodes that are connected to , respectively .* in parallel , flip each variable node if . *repeat until all check nodes are satisfied .the class of two - bit bit flipping algorithms is described in this section .we start with two motivating examples .the first one illustrates the advantage of an additional bit at a variable node while the second illustrates the advantage at a check node . 
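for reference , the baseline parallel bit flipping rule defined above can be written compactly . since the flipping inequality has lost its symbols in this text , the sketch uses the usual majority reading ( flip when the unsatisfied checks outnumber the satisfied ones ) , which for the column - weight - three codes considered here means two or more unsatisfied checks .

```python
import numpy as np

def parallel_bit_flipping(H, r, max_iters=50):
    """parallel bit flipping on the bsc.
    H : (m, n) binary parity-check matrix; r : length-n received hard decisions.
    a variable is flipped when its unsatisfied checks outnumber its satisfied
    checks (the usual majority reading of the rule)."""
    x = r.copy()
    deg = H.sum(axis=0)                        # variable-node degrees
    for _ in range(max_iters):
        syndrome = H @ x % 2                   # 1 = unsatisfied check
        if not syndrome.any():
            break                              # all checks satisfied
        unsat = H.T @ syndrome                 # unsatisfied checks per variable
        x = np.where(unsat > deg - unsat, 1 - x, x)   # flip in parallel
    return x, not (H @ x % 2).any()
```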
in this subsection, symbols and denote a correct and a corrupt variable node while and denote a satisfied and an unsatisfied check node .let be a 3-left - regular ldpc code with girth and assume that the variable nodes and form an eight cycle as shown in fig .[ fig_fp ] .also assume that only and are initially in error and that the parallel bit flipping algorithm is employed .in the first iteration illustrated in fig .[ fig_fp ] , and are unsatisfied while and are satisfied .since and , and are flipped and become correct .however , and are also flipped and become incorrect since and . in the second iteration ( fig .[ fig_fp ] ) , the algorithm again flips and .it can be seen that the set of corrupt variable nodes alternates between and , and thus the algorithm does not converge .[ fp13 ] [ fp24 ] [ fp12 ] the parallel bit flipping algorithm fails in the above situation because it uses the same treatment for variable nodes with and .the algorithm is too `` aggressive '' when flipping a variable node with .let us consider a modified algorithm which only flips a variable node with .this modified algorithm will converge in the above situation .however , if only and are initially in error ( fig . [ fig_fp ] ) then the modified algorithm does not converge because it does not flip any variable node .the modified algorithm is now too `` cautious '' to flip a variable node with .both decisions ( to flip and not to flip ) a variable node with can lead to decoding failure .however , we must pick one or the other due the assumption that a variable node takes its value from the set . relaxing this assumptionis therefore required for a better bit flipping algorithm .let us now assume that a variable node can take four values instead of two .specifically , a variable node takes its value from the set , where ( ) stands for `` strong zero '' ( `` strong one '' ) and ( ) stands for `` weak zero '' ( `` weak one '' ) .assume for now that a check node only sees a variable node either as 0 if the variable node is or , or as 1 if the variable node is or .recall that is the number of unsatisfied check nodes that are connected to the variable node .let be the function defined in table [ tb_f1 ] . .[ cols="^,^,^,^,^,^,^,^,^,^ " , ] [ tb_f1 ] consider the following bit flipping algorithm .initialization : each variable node is initialized to if and is initialized to if . * in parallel , flip each variable node to .* repeat until all check nodes are satisfied . compared to the parallel bit flipping algorithm andits modified version discussed above , the tbfa1 possesses a gentler treatment for a variable node with .it tries to reduce the `` strength '' of before flipping it .one may realize at this point that it is rather imprecise to say that the tbfa1 flips a variable node from to or vice versa , since a check node still sees as 0 .however , as the values of can be represented by two bits , i.e. , can be mapped onto the alphabet , the flipping of should be understood as either the flipping of one bit or the flipping of both bits .it is easy to verify that the tbfa1 is capable of correcting the error configurations shown in fig .[ fig_fp ] .moreover , the guaranteed correction capability of this algorithm is given in the following proposition .[ algo1cap ] the tbfa1 is capable of correcting any error pattern with up to errors in a left - regular column - weight - three code with tanner graph which has girth and which does not contain any codeword of weight .the proof is omitted due to page limits . 
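to make the two - bit mechanics concrete , a sketch of one parallel iteration is given below . the actual table defining the tbfa1 is not recoverable from this text , so the rule shown is purely illustrative ; it only demonstrates the pattern described above of weakening a strong variable before flipping it and reinforcing variables whose checks all agree .

```python
import numpy as np

STRONG, WEAK = 1, 0        # second bit: strength of the current hard decision

def two_bit_iteration(H, bits, strength, rule):
    """one parallel iteration of a two-bit bit flipping algorithm.
    a check node only sees the hard bit; `rule(bit, strength, n_unsat)` returns
    the new (bit, strength) pair and plays the role of the table that defines
    the particular algorithm."""
    syndrome = H @ bits % 2
    unsat = H.T @ syndrome                     # unsatisfied checks per variable
    new_bits, new_strength = bits.copy(), strength.copy()
    for v in range(len(bits)):
        new_bits[v], new_strength[v] = rule(bits[v], strength[v], unsat[v])
    return new_bits, new_strength, not syndrome.any()

def illustrative_rule(bit, strength, n_unsat):
    """NOT the paper's table: an illustrative rule that weakens a strong
    variable before flipping it and reinforces variables whose checks all agree."""
    if n_unsat >= 2:
        if strength == STRONG:
            return bit, WEAK          # reduce the strength first
        return 1 - bit, WEAK          # a weak, heavily contradicted variable flips
    if n_unsat == 0:
        return bit, STRONG            # reinforce when every check is satisfied
    return bit, strength              # one unsatisfied check: no change
```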
_ remarks : _ * it can be shown that the guaranteed error correction capability of a 3-left - regular code with the parallel bit flipping algorithm is strictly less than . thus , the tbfa1 increases the guaranteed error correction capability by a factor of at least 2 . * in , we have shown that the gallager a / b algorithm is capable of correcting any error pattern with up to errors in a 3-left - regular code with girth . for codes with girth and minimum distance , the gallager a / b algorithm can only correct up to two errors .this means that the guaranteed error correction capability of the tbfa1 is at least as good as that of the gallager a / b algorithm ( and better for codes with ) .it is also not difficult to see that the complexity of the tbfa1 is much lower than that of the gallager a / b algorithm .now that the advantage of having more than one bit to represent the values of a variable node is clear , let us explore the possibility of using more than one bit to represent the values of a check node in the next subsection . in this subsection , we use the symbols and to denote a variable node and a variable node , respectively .the symbols used to denote a variable node and a variable node are shown in fig .[ fig_fvp ] where is a variable node and is a variable node .the symbols and still represent a satisfied and an unsatisfied check node .[ fvp1 ] [ fvp2 ] [ fvp3 ] assume a decoder that uses the tbfa1 algorithm .[ fig_fvp ] , and illustrates the first , second and third decoding iteration of the tbfa1 on an error configuration with four variable nodes and that are initially in error .we assume that all variable nodes which are not in this subgraph remain correct during decoding and will not be referred to . in the first iteration ,variable nodes and are strong and connected to two unsatisfied check nodes . consequently , the tbfa1 reduces their strength .since variable nodes and are strong and only connected to one unsatisfied check node , their values are not changed . in the second iteration, all check nodes retain their values ( satisfied or unsatisfied ) from the first iteration .the tbfa1 hence flips and from to and flips and from to . at the beginning of the third iteration , the value of any variable node is either or .every variable node is connected to two satisfied check nodes and one unsatisfied check node . sinceno variable node can change its value , the algorithm fails to converge .the failure of the tbfa1 to correct this error configuration can be attributed to the fact that check node is connected to two initially erroneous variable nodes and , consequently preventing them from changing their values .let us slightly divert from our discussion and revisit the pbfa proposed by miladinovic and fossorier .the authors observed that variable node estimates corresponding to a number close to unsatisfied check nodes are unreliable due to multiple errors , cycles in the code graph and equally likely a priori hard decisions .based on this observation , the pbfa only flips a variable node with some probability . 
In the above error configuration, a combination of two unsatisfied and one satisfied check nodes would be considered unreliable. Therefore, the PBFA would flip the corrupt variable nodes as well as the correct variable nodes with the same probability. However, one can see that a combination of one unsatisfied and two satisfied check nodes is also unreliable here, because it is precisely this combination that prevents two of the corrupt variable nodes from being corrected. Unfortunately, the PBFA cannot flip variable nodes with so few unsatisfied check nodes, since many other correct variable nodes in the Tanner graph would then also be flipped. In other words, the PBFA cannot evaluate the reliability of estimates when only a small number of the neighboring check nodes are unsatisfied. We demonstrate that such reliability can be evaluated with the new concept introduced below.

Revisit the decoding of the TBFA1 on the error configuration illustrated in Fig. [fig_fvp]. Notice that in the third iteration, all but one of the check nodes that were unsatisfied in the second iteration become satisfied, while all check nodes that were satisfied in the second iteration become unsatisfied. We will provide this information to the variable nodes.

[checkdef] A satisfied (unsatisfied) check node is called _previously satisfied_ (_previously unsatisfied_) if it was also satisfied (unsatisfied) in the previous decoding iteration; otherwise it is called _newly satisfied_ (_newly unsatisfied_).

The possible transitions of a check node are illustrated in Fig. [transcheck]. For each variable node, count the previously satisfied, previously unsatisfied, newly satisfied, and newly unsatisfied check nodes connected to it. A second update function, defined on a variable node's current value and these four counts, yields the following bit flipping algorithm (TBFA2). Initialization: each variable node is set to "strong zero" if the received bit is 0 and to "strong one" if the received bit is 1; in the first iteration, check nodes are either previously satisfied or previously unsatisfied. Then:
* In parallel, set each variable node to the value given by the second update function.
* Repeat until all check nodes are satisfied.

The TBFA2 considers a combination of one newly unsatisfied, one newly satisfied, and one previously satisfied check node to be less reliable than a combination of one previously unsatisfied and two previously satisfied check nodes. Therefore, it reduces the strength of the two still-corrupt variable nodes at the end of the third iteration. Consequently, the error configuration shown in Fig. [fig_fvp] can now be corrected, after 9 iterations. Proposition [algo1cap] also holds for the TBFA2.

_Remarks:_ Consider the set of all functions that could replace the one in Table [tb_f1]. A natural question to ask is whether some such function would let the TBFA1 correct the error configuration shown in Fig. [fig_fvp]. A brute-force search reveals many such functions. Unfortunately, none of them allows the algorithm to retain the guaranteed error correction capability stated in Proposition [algo1cap]. We conclude this section by giving the formal definition of the class of two-bit bit flipping algorithms.
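A minimal sketch of the bookkeeping this requires is given below, assuming each check node simply remembers whether it was satisfied in the previous iteration. The actual TBFA2 update function is not reproduced above, so the sketch only produces the four per-variable counts of Definition [checkdef]; all names are illustrative.

```python
def classify_checks(bits_now, prev_satisfied, checks):
    """Label each check node 'PS', 'PU', 'NS' or 'NU' in the sense of Definition [checkdef]."""
    labels = []
    for chk, was_sat in zip(checks, prev_satisfied):
        sat = sum(bits_now[v] for v in chk) % 2 == 0
        if sat:
            # "previously satisfied" if it keeps its old status, "newly satisfied" if it changed
            labels.append('PS' if was_sat else 'NS')
        else:
            labels.append('PU' if not was_sat else 'NU')
    return labels

def per_variable_counts(labels, checks, num_vars):
    """Count previously/newly satisfied/unsatisfied neighbouring check nodes per variable node."""
    counts = [{'PS': 0, 'PU': 0, 'NS': 0, 'NU': 0} for _ in range(num_vars)]
    for label, chk in zip(labels, checks):
        for v in chk:
            counts[v][label] += 1
    return counts
```

A TBFA2-style update function would then map a variable node's current two-bit value together with these four counts to its new value; this is the extra information that lets the decoder distinguish the two check-node combinations discussed above.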
For the class of two-bit bit flipping algorithms, a variable node takes its value from the four-element set {strong zero, weak zero, weak one, strong one}. A check node sees a strong-zero or weak-zero variable node as 0 and sees a strong-one or weak-one variable node as 1. According to Definition [checkdef], a check node can be previously satisfied, previously unsatisfied, newly satisfied, or newly unsatisfied. An algorithm in the class is defined by a mapping from a variable node's current value and the numbers of its neighboring check nodes of each kind to a new value, where the number of neighbors equals the column weight of the code. Different algorithms in this class are specified by different such functions. In order to evaluate the performance of an algorithm, it is necessary to analyze its failures; we now proceed to that task.

In this section, we describe a framework for the analysis of two-bit bit flipping algorithms (the details will be provided in the journal version of this paper). Consider the decoding of a two-bit bit flipping algorithm on a Tanner graph, with a given maximum number of iterations and a given number of channel errors. Let the error subgraph denote the subgraph induced by the variable nodes that are initially in error, and consider the set of all Tanner graphs that contain this error subgraph. Within this set, consider the subset of graphs with the following property: a graph belongs to the subset if it has an induced subgraph that (i) is isomorphic to the error subgraph and (ii) is such that the two-bit bit flipping algorithm fails to decode within the maximum number of iterations when the initially corrupt variable nodes are exactly the variable nodes of that induced subgraph. Finally, take a collection of minimal failure graphs: a subset of these failure graphs such that every failure graph contains a graph in the collection and no graph in the collection contains another graph in the collection. With the above formulation, we give the following proposition.

[proff] The algorithm will converge on a Tanner graph within the given number of decoding iterations if the error subgraph is not contained in any induced subgraph of that Tanner graph that is isomorphic to a graph in the collection of minimal failure graphs. Proof: if the algorithm fails to converge within the given number of iterations, then the Tanner graph is a failure graph, hence the error subgraph must be contained in an induced subgraph of the Tanner graph that is isomorphic to a graph in the collection.

We remark that Proposition [proff] only gives a sufficient condition, because the error subgraph might be contained in an induced subgraph of the Tanner graph that is not isomorphic to any graph in the collection. Nevertheless, the collection of minimal failure graphs can still be used as a benchmark to evaluate the algorithm: a better algorithm should allow the above sufficient condition to be met with higher probability. For a more precise statement, we give the following.

[profg] Let two candidate subgraphs be given, the first having more variable nodes than the second. Then the probability that the first subgraph is contained in a Tanner graph with a given number of variable nodes is less than the probability that the second subgraph is contained in a Tanner graph with the same number of variable nodes. Proof: let an auxiliary Tanner graph be chosen with as many variable nodes as the first subgraph and such that it contains the second subgraph. Since the auxiliary graph and the first subgraph have the same number of variable nodes, the probability that the first subgraph is contained in a given Tanner graph equals the probability that the auxiliary graph is contained in it.
On the other hand, since the auxiliary graph contains the second subgraph, the probability that the auxiliary graph is contained in a given Tanner graph is less than the probability that the second subgraph is contained in it, by conditional probability. Proposition [profg] therefore suggests that a two-bit bit flipping algorithm should be chosen so as to maximize the size (in terms of the number of variable nodes) of the smallest Tanner graph in the collection of minimal failure graphs.

Given an algorithm, one can find all graphs in this collection up to a certain number of variable nodes by a recursive procedure. Consider the set of corrupt variable nodes at the beginning of each iteration. The procedure starts with the subgraph induced by the initially corrupt variable nodes and with the set of check nodes connected to at least one variable node in this subgraph. In the first iteration, only these check nodes can be unsatisfied; therefore, if a correct variable node becomes corrupt at the end of the first iteration, then it must connect to at least one check node in this set. The procedure then expands the subgraph recursively, in all possible ways, by adjoining new variable nodes that become corrupt at the end of the first iteration. The recursive introduction of new variable nodes halts whenever a graph in the collection is found. Each graph obtained by this expansion is then expanded again by adjoining new variable nodes that become corrupt at the end of the second iteration, and the process is repeated up to the maximum number of iterations.

We demonstrate the performance of two-bit bit flipping algorithms on a regular column-weight-three quasi-cyclic LDPC code. Two different decoders are considered. The first decoder, denoted BFD1, employs a single two-bit bit flipping algorithm and may perform iterative decoding for a maximum of 30 iterations. The second decoder, denoted BFD2, is a concatenation of 55 algorithms, each with its own maximum number of iterations. The BFD2 operates by decoding the input vector with the algorithms in sequence until a codeword is found; the maximum possible number of decoding iterations performed by the BFD2 is therefore bounded by the sum of the individual limits. Details on the algorithms, as well as the parity-check matrix of the quasi-cyclic LDPC code, can be found in the online reference listed below.

Simulation results for the frame error rate (FER) are shown in Fig. [fig_fer]. Both decoders outperform decoders which use the Gallager A/B algorithm or the min-sum algorithm; in particular, the FER performance of the BFD2 is significantly better. More importantly, the slope of the FER curve of the BFD2 is larger than that of the BP decoder, which shows the potential of two-bit bit flipping decoders to achieve comparable or even better error floor performance than the BP decoder. It is also important to remark that although the BFD2 uses 55 different decoding algorithms, at the simulated crossover probabilities more than 99.99% of codewords are decoded by the first algorithm. Consequently, the average number of iterations per output word of the BFD2 is not much higher than that of the BFD1, as illustrated in Fig. [fig_iter]; similar to the BFD1, the BFD2 therefore has an extremely high speed.
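The control flow of such a concatenated decoder can be summarized by the following sketch. It is only an illustration of the cascade described above: the constituent algorithms, their iteration budgets, and all names are placeholders, not the actual 55 algorithms of the BFD2.

```python
def cascade_decode(received, algorithms, iteration_budgets):
    """Run a list of bit flipping algorithms in sequence until one finds a codeword.

    algorithms: list of callables, each taking (word, max_iterations) and returning
    (estimate, converged); iteration_budgets: per-algorithm maximum iteration counts.
    """
    estimate, converged = received, False
    for decode, budget in zip(algorithms, iteration_budgets):
        estimate, converged = decode(received, budget)
        if converged:              # all check nodes satisfied: a codeword was found
            break
    return estimate, converged
```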
This work was funded by NSF under grants CCF-0963726 and CCF-0830245.

S. Planjery, D. Declercq, S. Chilappagari, and B. Vasic, "Multilevel decoders surpassing belief propagation on the binary symmetric channel," in _IEEE Int. Symp. Inf. Theory_, Jun. 2010, pp. 769-773.

S. K. Chilappagari, D. V. Nguyen, B. V. Vasic, and M. W. Marcellin, "Error correction capability of column-weight-three LDPC codes under the Gallager A algorithm - Part II," _IEEE Trans. Inf. Theory_, vol. 56, no. 6, pp. 2626-2639, Jun. 2010.

"Error floors of LDPC codes - multi-bit bit flipping algorithm." [Online]. Available: http://www2.engr.arizona.edu/~vasiclab/projects/codingtheory/errorfloor%home.html
In this paper, we propose a new class of bit flipping algorithms for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC). Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at a variable node to represent its "strength." The introduction of this additional bit increases the guaranteed error correction capability by a factor of at least 2. An additional bit can also be employed at a check node to capture information which is beneficial to decoding. A framework for failure analysis of the proposed algorithms is described. These algorithms outperform the Gallager A/B algorithm and the min-sum algorithm at much lower complexity. Concatenation of two-bit bit flipping algorithms shows the potential to approach the performance of belief propagation (BP) decoding in the error floor region, also at lower complexity.