We consider data sets describing contacts between individuals, collected by the SocioPatterns collaboration (http://www.sociopatterns.org) in three different settings: a workplace (office building, InVS), a high school (Thiers13) and a scientific conference (SFHH). These data correspond to the close face-to-face proximity of individuals equipped with wearable sensors, recorded with a temporal resolution of 20 seconds. Table [tab:data] summarises the characteristics of each data set. The contact data are represented by temporal networks, in which nodes represent the participating individuals and a link between two nodes $i$ and $j$ at time $t$ indicates that the two corresponding persons were in contact at that time. These three data sets were chosen as representative of different types of day-to-day contexts and of different contact network structures: the SFHH data correspond to a rather homogeneous contact network, while the InVS and Thiers13 populations were structured in departments and classes, respectively. Moreover, the high school classes (Thiers13) are of similar sizes, while the InVS department sizes are unequal. Finally, the high school contact patterns (Thiers13) are constrained by strict and repetitive school schedules, while contacts in offices are less regular across days.

To quantify how the incompleteness of the data, assumed to stem from a uniformly random participation of individuals in the data collection, affects the outcome of simulations of dynamical processes, we consider the available data as ground truth and perform population resampling experiments by removing a fraction $f$ of the nodes uniformly at random. (Note that the full data sets are themselves samples of all the contacts that occurred in the populations, as the participation rate was lower than 100% in each case. In the Thiers13 case, however, the participation rate was quite high.) We then simulate on the resampled data the paradigmatic susceptible-infectious-recovered (SIR) and susceptible-infectious-susceptible (SIS) models of epidemic propagation. In these models, a susceptible (S) node becomes infectious (I) at rate $\beta$ when in contact with an infectious node. Infectious nodes recover spontaneously at rate $\mu$. In the SIR model, nodes then enter an immune recovered (R) state, while in the SIS model, nodes become susceptible again and can be reinfected. The quantities of interest are, for the SIR model, the distribution of epidemic sizes, defined as the final fraction of recovered nodes, and, for the SIS model, the average fraction of infectious nodes in the stationary state. We also calculate for the SIR model the fraction of epidemics that infect more than 20% of the population and the average size of these epidemics. For the SIS model, we determine the epidemic threshold for different values of $\mu$: it corresponds to the value of $\beta$ that separates an epidemic-free state (vanishing prevalence) at low $\beta$ from an endemic state (non-zero stationary prevalence) at high $\beta$, and it is thus an important indicator of the epidemic risk. We refer to the Methods section for further details on the simulations. We then present several methods for constructing surrogate data using only information contained in the resampled data. For each data set, we compare the outcomes of simulations performed on the whole data set, on resampled data sets with a varying fraction $f$ of nodes removed, and on the reconstructed data sets built using these various methods. Missing data are known to affect the various properties of contact networks in different ways.
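To make the resampling experiment concrete, the following minimal Python sketch removes a fraction f of nodes uniformly at random from a temporal contact list and runs a simplified discrete-time SIR process on what remains. The (t, i, j) contact-list format, the parameter names `beta` and `mu` and the 20-second time step are our own assumptions for illustration; the actual simulations use a temporal Gillespie algorithm and repeat the contact sequence until no infectious node remains (see Methods).

```python
import random

def resample_nodes(contacts, nodes, f, rng=random):
    """Keep only the contacts between nodes that 'participate' in the data
    collection: a fraction f of nodes is excluded uniformly at random."""
    kept = set(rng.sample(sorted(nodes), int(round((1 - f) * len(nodes)))))
    return [(t, i, j) for (t, i, j) in contacts if i in kept and j in kept], kept

def sir_final_size(contacts, nodes, beta, mu, dt=20.0, rng=random):
    """Simplified discrete-time SIR on a time-ordered list of (t, i, j) contacts.
    Per-contact transmission probability ~ beta*dt, recovery probability ~ mu*dt
    (both assumed much smaller than 1)."""
    nodes = sorted(nodes)
    state = {n: "S" for n in nodes}
    state[rng.choice(nodes)] = "I"          # random seed node
    steps = {}
    for t, i, j in contacts:                # group contacts by time step
        steps.setdefault(t, []).append((i, j))
    for t in sorted(steps):
        for i, j in steps[t]:               # transmission along active contacts
            for a, b in ((i, j), (j, i)):
                if state[a] == "I" and state[b] == "S" and rng.random() < beta * dt:
                    state[b] = "I"
        for n in nodes:                     # spontaneous recovery
            if state[n] == "I" and rng.random() < mu * dt:
                state[n] = "R"
    # epidemic size: fraction of nodes that have been infected at some point
    return sum(s != "S" for s in state.values()) / len(nodes)
```

Running `sir_final_size` many times on the full contact list and on the output of `resample_nodes` for increasing f reproduces, qualitatively, the shrinking outbreak-size distributions of Fig. [sampling].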
A first effect of sampling is that the number of neighbours (degree) of a node decreases as the fraction $f$ of removed nodes increases, since removing nodes also removes the links attached to them. Under the hypothesis of uniform sampling, the average degree of the resampled aggregated network is reduced by a factor $(1-f)$ with respect to the original one. As a result, the density of the resampled aggregated contact network, defined as the number of links divided by the total number of possible links between the nodes, does not depend on $f$. The same reasoning applies to the density $\rho_{XY}$ of links between groups of nodes $X$ and $Y$, defined as the number of links between nodes of group $X$ and nodes of group $Y$, normalised by the maximum possible number of such links, $n_X n_Y$, where $n_X$ is the number of nodes of group $X$ (for $X=Y$, the maximum possible number of links is $n_X(n_X-1)/2$): both the expected number of neighbours in group $Y$ for a node of group $X$ (given by $\rho_{XY} n_Y$) and the number of nodes in group $Y$ are reduced by a factor $(1-f)$, so that $\rho_{XY}$ remains constant. This means that the link density contact matrix, which gathers these densities and gives a measure of the interaction between groups (here classes or departments), is stable under uniform resampling. We illustrate these results on our empirical data sets in Supplementary Figures 1, 2, 4 and 5. Table [tab:sim_cm] and Supplementary Figure 2 show in particular that the similarities between the original and resampled matrices are high for all data sets (see Supplementary Figures 4-5 for the contact matrices themselves). Finally, the temporal statistics of the contact network are not affected by population sampling, as already noted for other data sets: the distributions of contact and inter-contact durations (the inter-contact durations are the times between consecutive contacts on a link), of the number of contacts per link and of cumulated contact durations (i.e., of the link weights in the aggregated network) do not change when the network is sampled uniformly (Supplementary Figure 1). In the case of structured populations, an interesting property is moreover illustrated in Supplementary Figures 6-7: although the distributions of contact durations between members of the same group and between individuals belonging to different groups are indistinguishable, this is not the case for the distributions of the numbers of contacts per link nor, as a consequence, for the distributions of cumulated contact durations. In fact, both cumulated contact durations and numbers of contacts per link are more broadly distributed for links joining members of the same group. The figures show that this property is stable under uniform resampling.

Despite the robustness of these properties, the outcome of simulations of epidemic spread is strongly affected by the resampling. As Fig. [sampling] illustrates, the probability of large outbreaks in the SIR model decreases strongly as $f$ increases and even vanishes at large $f$. As mentioned above, such a result is expected, since the removed nodes act as if they were immunised: sampling hinders the propagation in simulations by removing transmission routes between the remaining nodes.
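The scaling argument above can be checked numerically. The sketch below, on a toy aggregated network, removes a fraction f of nodes and verifies that the link density stays close to its original value while the mean degree is reduced by roughly (1-f); the edge-set representation is an assumption made for illustration.

```python
import random
from itertools import combinations

def density_and_mean_degree(edges, nodes):
    n, m = len(nodes), len(edges)
    return m / (n * (n - 1) / 2), 2 * m / n

def resample(edges, nodes, f, rng=random):
    kept = set(rng.sample(sorted(nodes), int(round((1 - f) * len(nodes)))))
    return {e for e in edges if e[0] in kept and e[1] in kept}, kept

# toy aggregated network: Erdos-Renyi-like edge set on 200 nodes
rng = random.Random(1)
nodes = list(range(200))
edges = {(i, j) for i, j in combinations(nodes, 2) if rng.random() < 0.1}

d0, k0 = density_and_mean_degree(edges, nodes)
for f in (0.2, 0.4, 0.6):
    sub_edges, kept = resample(edges, nodes, f, rng)
    d, k = density_and_mean_degree(sub_edges, kept)
    # density stays close to d0; mean degree is reduced by roughly (1 - f)
    print(f"f={f:.1f}  density {d:.3f} (vs {d0:.3f})  <k> {k:.1f} (vs {(1 - f) * k0:.1f} expected)")
```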
As a consequence of this removal of transmission routes, the prevalence and the final size of the outbreaks are systematically underestimated by simulations of the SIR model on the resampled network with respect to simulations on the whole data set (for the SIS model, the epidemic threshold is overestimated): resampling leads overall to a systematic underestimation of the epidemic risk, and Fig. [sampling] illustrates the extent of this underestimation for the data at hand.

We now present a series of methods to improve the estimation of the epidemic risk in simulations of epidemic spread on temporal network data sets in which nodes (individuals) are missing uniformly at random. Note that we do not address here the problem of link prediction, as our aim is not to infer the missing contacts. The hierarchy of methods we put forward uses increasing amounts of information, corresponding to increasing amounts of detail on the group and temporal structure of the contact patterns, as measured in the resampled network. We moreover assume that the timelines of scheduled activity are known (i.e., nights and weekends, during which no contact occurs). For each data set, considered as ground truth, we create resampled data sets by removing at random a fraction $f$ of the nodes. We then measure on each resampled data set a series of statistics of the resulting contact network and construct stochastic, surrogate versions of the missing part of the network by creating, for each missing node, a surrogate instance of its links and a synthetic timeline of contacts on each surrogate link, in the different ways described below (see the Supplementary Information and the Methods section for more details on their practical implementation).

Method 0. As discussed above, the first effect of missing data is to decrease the average degree of the aggregated contact network, while keeping its density constant. Hence, the simplest approach is to merely compensate for this decrease. We therefore measure the density of the resampled contact network, as well as the average aggregate duration of the contacts. We then add back the missing nodes and create surrogate links at random, both among these nodes and between them and the nodes of the resampled data set, with the only constraint of keeping the overall link density fixed at its measured value. We then attribute to each surrogate link the same weight, equal to the measured average aggregate duration, and create for each link a timeline of randomly placed contact events of equal length (the temporal resolution of the data set) whose total duration gives back this weight.

Method W. The heterogeneity of aggregated contact durations is known to play a role in the spreading patterns of model diseases. We therefore refine method 0 by collecting in the resampled data the list of aggregate contact durations, or weights (W). We build the surrogate links and the surrogate timelines of contacts on each link as in method 0, except that each surrogate link carries a weight extracted at random from this list, instead of the average.
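As an illustration, the sketch below implements the core of method W (method 0 is obtained by replacing the drawn weight with the measured average): surrogate links are added at random, with at least one endpoint among the missing nodes, until the measured link density is restored; each link receives a weight drawn from the empirical weight list, split into contact events of one time step placed at random. All names, the data structures and the 20-second resolution are assumptions of this sketch, not the reference implementation.

```python
import random

def method_w(sampled_edges, sampled_weights, present, missing, T, dt=20, rng=random):
    """Surrogate links for method W.
    sampled_edges  : set of frozensets {i, j} observed in the resampled data
    sampled_weights: list of aggregate contact durations measured on those links
    present/missing: lists of resampled and excluded node ids
    T              : total activity time available for placing contacts (assumed >> weights)
    """
    all_nodes = list(present) + list(missing)
    n = len(all_nodes)
    rho = len(sampled_edges) / (len(present) * (len(present) - 1) / 2)   # measured density
    n_links = int(round(rho * n * (n - 1) / 2)) - len(sampled_edges)     # links to add
    surrogate = {}
    while len(surrogate) < n_links:
        i = rng.choice(missing)                        # at least one endpoint is a missing node
        j = rng.choice(all_nodes)
        e = frozenset((i, j))
        if i == j or e in sampled_edges or e in surrogate:
            continue
        w = rng.choice(sampled_weights)                # weight drawn from the empirical list
        n_contacts = max(1, int(round(w / dt)))        # split into dt-long contact events
        times = sorted(rng.sample(range(0, int(T), dt), n_contacts))
        surrogate[e] = times                           # timeline of contact start times
    return surrogate
```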
Method WS. The fact that the population is divided into groups of individuals, such as classes or departments, can have a strong impact on the structure of the contact network and on spreading processes. We thus measure here the link density contact matrix of the resampled data and construct surrogate links in a way that keeps this matrix fixed (equal to the value measured in the resampled data), in the spirit of stochastic block models with fixed numbers of edges between blocks. We collect in the resampled data two separate lists of aggregate contact durations: one gathers the weights of links between individuals belonging to the same group, and the other is built with the weights of links joining individuals of different groups. For each surrogate link, its weight is extracted at random either from the first list if it joins individuals of the same group or from the second if it associates individuals of different groups. Timelines are then attributed to links as in W. This method assumes that the number of missing nodes in each group is known, and it preserves the group structure (S) of the population.

Method WT. Several works have investigated how the temporal characteristics of contact networks (such as burstiness) can slow down or accelerate spreading. In order to take these characteristics into account, we measure in the resampled data the distributions of the number of contacts per link and of contact and inter-contact durations, in addition to the global network density. We build surrogate links as in method W, and construct on each link a synthetic timeline in a way that respects the measured temporal statistics (T) of contacts. More precisely, we attribute at random a number of contacts (taken from the measured distribution) to each surrogate link, and then alternate contact and inter-contact durations taken at random from the respective empirical distributions.

Method WST. This method conserves the distribution of link weights (W), the group structure (S), and the temporal characteristics of contacts (T): surrogate links are built and weights assigned as in method WS, and contact timelines on each link are constructed as in method WT.

Each of these methods uses a different amount of information gathered from the resampled data. Methods 0, W and WT include an increasing amount of detail on the temporal structure of contacts: method 0 assumes homogeneity of aggregated contact durations, W takes into account their heterogeneity, and WT also reproduces the heterogeneities of contact and inter-contact durations.
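The timeline construction shared by the WT and WST methods can be sketched as follows: a number of contacts is drawn from the measured distribution, and contact and inter-contact durations are then alternated, the whole draw being repeated if the timeline does not fit into the available time T. The list names and the retry logic are illustrative assumptions.

```python
import random

def synthetic_timeline(n_contacts_list, contact_durations, intercontact_durations,
                       initial_delays, T, max_tries=100, rng=random):
    """Build one surrogate contact timeline [(start, duration), ...] of total
    span <= T by alternating empirically sampled contact and inter-contact
    durations (the temporal statistics 'T' of the WT/WST methods)."""
    for _ in range(max_tries):
        n = rng.choice(n_contacts_list)               # number of contacts on the link
        t = rng.choice(initial_delays)                # waiting time before the first contact
        timeline, ok = [], True
        for k in range(n):
            d = rng.choice(contact_durations)         # contact event
            if t + d > T:
                ok = False
                break
            timeline.append((t, d))
            t += d
            if k < n - 1:
                t += rng.choice(intercontact_durations)   # gap before the next contact
        if ok:
            return timeline
    return []                                         # no admissible timeline found
```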
On the other hand, none of these three methods assumes any knowledge of the population group structure. This can be due either to a complete lack of knowledge about the population structure, as in the SFHH data, or to the lack of data on the repartition of the missing nodes among the groups. Methods WS and WST, on the other hand, reproduce the group structure as in a stochastic block model with fixed numbers of links within and between groups, and take into account the difference between the distributions of numbers of contacts and of aggregate durations for pairs of individuals of the same group or of different groups. Indeed, links within groups correspond on average to larger weights, as found empirically and discussed above (Supplementary Figures 5-6). Overall, method WST is the one that uses the most information measured in the resampled data. (Additional properties, such as the transitivity, which is also stable under the resampling procedure (see Supplementary Figure 3), can also be measured in the resampled data and imposed in the construction of surrogate links, as detailed in the Supplementary Information. This however comes at a strong computational cost, and we have verified that it does not significantly impact our results, as shown in Supplementary Figure 20.) We check in Table [tab:sim_cm] and Supplementary Figures 8-13 that the statistical properties of the resulting reconstructed (surrogate) networks, obtained as the union of the resampled data and of the surrogate links, are similar to those of the original data for the WST method. We emphasise again that our aim is not to infer the true missing contacts, so that we do not compare the detailed structures of the surrogate and original contact networks.

Figures [fig:inferinvs], [fig:inferthiers], [fig:infersfhh] and Supplementary Figures 16-19 display the outcome of SIR spreading simulations performed on surrogate networks obtained using the various reconstruction methods, compared with the outcome of simulations on the resampled data sets, for various values of $f$. Method 0 leads to a clear overestimation of the outcome and does not capture well the shape of the distribution of outbreak sizes. Method W gives only slightly better results. The overall shape of the distribution is better captured by the three reconstruction methods using more information: WS, WT and WST (note that in the SFHH case the population is not structured, so that W and WS are equivalent, as are WT and WST). The WST method matches best the shape of the distributions and yields distributions much more similar to those obtained by simulating on the whole data set than the simulations performed on the resampled networks. We also show in Fig. [fig:dist] the fraction of outbreaks that reach at least 20% of the population and the average epidemic size for these outbreaks. In the case of simulations performed on resampled data, we rapidly lose information about the size and even the existence of large outbreaks as $f$ increases. Simulations using data reconstructed with methods 0 and W, on the contrary, largely overestimate these quantities, which is expected as infections spread more easily on random graphs than on structured graphs, especially if the heterogeneity of the aggregated contact durations is not taken into account.
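For reference, the two summary quantities plotted in Fig. [fig:dist] can be computed from a list of final epidemic sizes (fractions of recovered nodes, one per run) with a few lines; the 20% threshold is the one used in the text.

```python
def large_outbreak_stats(final_sizes, threshold=0.2):
    """final_sizes: final fractions of recovered nodes, one per SIR run.
    Returns the fraction of large outbreaks and their average size."""
    large = [s for s in final_sizes if s >= threshold]
    frac_large = len(large) / len(final_sizes) if final_sizes else 0.0
    mean_large = sum(large) / len(large) if large else 0.0
    return frac_large, mean_large

# example: frac, mean = large_outbreak_stats(sizes_from_many_runs)
```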
Taking into account the population structure or using contact sequences that respect the temporal heterogeneities (broad distributions of contact and inter-contact durations) yields better results (WS and WT cases, respectively). Overall, the WST method, for which the surrogate networks respect all these constraints, yields the best results. We show in the Supplementary Information that similar results are obtained for different values of the spreading parameters. Moreover, as shown in Fig. [fig:sis_invs] and Supplementary Figures 14-15, the phase diagram obtained for the SIS model when using reconstructed networks is much closer to the original one than for resampled networks. Overall, simulations on networks reconstructed using the WST method yield a much better estimation of the epidemic risk than simulations using resampled network data, for both the SIS and SIR models.

Even when simulations are performed on reconstructed contact patterns built with the WST method, the maximal outbreak sizes are systematically overestimated (Figs. [fig:inferinvs]-[fig:infersfhh]), as well as, in most cases, the probability and average size of large outbreaks, especially in the SFHH case (Figs. [fig:infersfhh]-[fig:dist]). These discrepancies might stem from structural and/or temporal correlations present in the empirical contact data that are not taken into account in our reconstruction methods. In order to test this hypothesis, we construct several reshuffled data sets and use them as initial data in our resampling and reconstruction procedure. We use both structural and temporal reshuffling, as described in the Methods section, in order to remove either structural correlations, temporal correlations, or both, from the original data sets. We then apply the resampling and reconstruction procedure (using the WST method) as for the original data, and perform numerical simulations of SIR processes. As for the original data, simulations on resampled data lead to a strong underestimation of the process outcome, and simulations using the reconstructed data give much better results. We show in Supplementary Figures 21-22 that we still obtain discrepancies, and in particular overestimations of the largest epidemic sizes, when we use temporally reshuffled data in which the link structure of the contact network is maintained. If, on the other hand, we use data in which the network structure has been reshuffled in a way that cancels structural correlations within each group, the reconstruction procedure gives a very good agreement between the distributions of epidemic sizes of the original and reconstructed data, as shown in Fig. [fig:infercmshuffled]. More precisely, we consider here "CM-shuffled" data, i.e., contact networks in which the links have been reshuffled randomly but separately for each pair of groups, so that a link between an individual of one group and an individual of another group remains between these two groups in the reshuffled network. The difference with the case of non-reshuffled empirical data is particularly clear in the SFHH case. This indicates that the overestimation observed in Figs. [fig:inferinvs]-[fig:infersfhh] is mostly due to the fact that the reconstructed data do not reproduce small-scale structures of the contact networks: such structures might be due, e.g., to groups of colleagues or friends, whose composition is neither available as metadata nor detectable in the resampled data sets.
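A minimal sketch of the CM shuffling used here (detailed in the Methods) is given below: links keep their contact timelines but are rewired at random, independently within each group and each pair of groups, so that the link-density contact matrix is preserved while within-block structural correlations are destroyed. The dictionary-based data structures are assumptions, and no guard is included against the (unlikely) exhaustion of available pairs in a block.

```python
import random

def cm_shuffle(link_timelines, group_of, rng=random):
    """Rewire links (keeping their contact timelines) independently inside each
    block of the contact matrix, i.e. within each group and each pair of groups.
    link_timelines: {frozenset({i, j}): timeline}, group_of: {node: group label}."""
    members = {}
    for node, g in group_of.items():                  # nodes of each group
        members.setdefault(g, []).append(node)
    blocks = {}
    for link, tl in link_timelines.items():           # group links by block
        i, j = tuple(link)
        blocks.setdefault(frozenset((group_of[i], group_of[j])), []).append(tl)
    shuffled = {}
    for block, timelines in blocks.items():
        gs = tuple(block)
        used = set()
        for tl in timelines:
            while True:                               # draw an unused link in the same block
                if len(gs) == 1:                      # within-group block
                    i, j = rng.sample(members[gs[0]], 2)
                else:                                 # between-group block
                    i = rng.choice(members[gs[0]])
                    j = rng.choice(members[gs[1]])
                e = frozenset((i, j))
                if e not in used:
                    used.add(e)
                    shuffled[e] = tl
                    break
    return shuffled
```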
When the fraction $f$ of nodes excluded by the resampling procedure becomes large, the properties of the resampled data may start to differ substantially from those of the whole data set (Figs. S1 and S2). As a result, the distributions of epidemic sizes of SIR simulations show stronger deviations from those obtained on the whole data set (Fig. [fig:infer_high]), even if the epidemic risk evaluation is still better than for simulations on the resampled networks (Fig. [fig:dist]). Most importantly, however, the information remaining in the resampled data at large $f$ can be insufficient to construct surrogate contacts. This happens in particular if an entire class or department is absent from the resampled data or if all the resampled nodes of a class or department are disconnected (see Methods for details). We show in the bottom plots of Fig. [fig:dist] the failure rate, i.e., the fraction of cases in which we are not able to construct surrogate networks from the resampled data. The failure rate increases gradually with $f$ for the InVS data, since the groups (departments) are of different sizes. For the Thiers13 data, all classes are of similar sizes, so that the failure rate abruptly reaches a large value at a given value of $f$. For the SFHH data, we can always construct surrogate networks, as the population is not structured.

Another limitation of the reconstruction method lies in the need to know the number of individuals missing in each department or class. If these numbers are completely unknown, giving an estimation of outbreak sizes is impossible, as adding arbitrary numbers of nodes and links to the resampled data can lead to arbitrarily large epidemics. The methods are however still usable if only partial information is available. For instance, if only the overall number of missing individuals is available, it is possible to use the WT method, which still gives sensible results. Moreover, if this number is only approximately known, e.g., known to lie within an interval $[N_1, N_2]$ of possible values, it is possible to perform two reconstructions using respectively the hypotheses $N_1$ and $N_2$ and to give an interval of estimates. We provide an example of such a procedure in Supplementary Figure 23.

The understanding of epidemic spreading phenomena has been vastly improved thanks to the use of data-driven models at different scales. High-resolution contact data in particular have been used to evaluate epidemic risks or containment policies in specific populations, or to perform contact tracing. In such studies, missing data due to population sampling may however represent a serious issue: individuals absent from a data set are indeed equivalent to immunised individuals when epidemic processes are simulated. Feeding sampled data into data-driven models can therefore lead to severe underestimations of the epidemic risk, and might even a priori affect the evaluation of mitigation strategies if, for instance, some at-risk groups are particularly undersampled. Here we have put forward a set of methods to obtain a better evaluation of the outcome of spreading simulations in data-driven models using contact data from a uniformly sampled population. To this aim, we have shown how it is possible, starting from a data set describing the contacts of only a fraction of the population of interest (uniformly sampled from the whole population), to construct surrogate data sets using various amounts of accessible information, i.e.,
quantities measured in the sampled data. We have shown that the simplest method, which consists in simply compensating for the decrease in the average number of neighbours due to sampling, yields a strong overestimation of the epidemic risk. When additional information describing the group structure and the temporal properties of the data is included in the construction of the surrogate data sets, simulations of epidemic spreading on such surrogate data yield results similar to those obtained on the complete data set. (We note that the issue of how much information should be included when constructing the surrogate data is linked to the general issue of how much information is needed to get an accurate picture of spreading processes on temporal networks.) Some discrepancies in the epidemic risk estimation are however still observed, due in particular to small-scale structural correlations of the contact network that are difficult or even impossible to measure in the resampled data: these discrepancies are indeed largely suppressed if we use as original data a reshuffled contact network in which such correlations are absent.

The methods presented here yield much better results than simulations using resampled data, even when a substantial part of the population is excluded, in particular in estimating the probability of large outbreaks. They suffer however from limitations, especially when the fraction of excluded individuals is too large. First, the construction of the surrogate contacts relies on the stability of a set of quantities with respect to resampling, but the measured quantities start to deviate from the original ones at large $f$. The shape of the distribution of epidemic sizes may then differ substantially from the original one. Second, large values of $f$ may even render the construction of the surrogate data impossible, due to the loss of information on whole categories of nodes. Finally, at least an estimate of the number of missing individuals in the population is needed in order to create a surrogate data set.

An interesting avenue for future work concerns possible improvements of the reconstruction methods, in particular by integrating into the surrogate data additional information and complex correlation patterns measured in the sampled data. For instance, the number of contacts varies significantly with the time of day in most contexts: the corresponding activity timeline might be measured in the sampled data (overall or even for each group of individuals), assumed to be robust to sampling, and used in the reconstruction of contact timelines. More systematically, it might also be possible to apply temporal network decomposition techniques to the sampled data, in order to extract mesostructures such as temporally localized mixing patterns. The surrogate contacts could then be built in a way that preserves such patterns. Indeed, correlations between structure and activity in the temporal contact network are known to influence spreading processes but are notoriously difficult to measure. If the group structure of the population is unknown, recent approaches based on stochastic block models might be used to extract groups from the resampled data; this extracted group structure could then be used to build the corresponding contact matrix and surrogate data sets. We finally recall that we have assumed a uniform sampling of nodes, corresponding to an independent random choice for each individual of the population to take part or not in the data collection.
Other types of sampling or data losses can however also be present in data collected by wearable sensors, such as partial coverage of the premises of interest by the measuring infrastructure, non-uniform sampling depending on individual activity (very busy persons or, on the contrary, asocial individuals might not want to wear sensors) or on group membership, or due to clusters of non-participating individuals (e.g., groups of friends). In addition, other types of data sets, such as those obtained from surveys or diaries, correspond to different types of sampling, as each respondent then provides information in the form of an ego-network. Such data sets potentially involve additional types of biases, such as under-reporting of the number of contacts and overestimation of contact durations: how to adapt the methods presented here to these cases is an important issue that we will examine in future work. Finally, the population under study is (usually) not isolated from the external world, and it would be important to devise ways to include contacts with outsiders in the data and simulations, for instance by using other data sources such as surveys.

The present work is partially supported by the French ANR project HarMS-flu (ANR-12-MONU-0018) to M.G. and A.B., by the EU FET project Multiplex 317532 to A.B., C.C. and C.L.V., by the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the "Investissements d'Avenir" French government program, managed by the French National Research Agency (ANR), to A.B., by the Lagrange Project of the ISI Foundation funded by the CRT Foundation to C.C., and by the Q-ARACNE project funded by the Fondazione Compagnia di San Paolo to C.C.

A.B. and C.C. designed and supervised the study. M.G., C.L.V., C.C. and A.B. collected and post-processed the data, analysed the data, carried out the computer simulations and prepared the figures. M.G., C.L.V., C.C. and A.B. wrote the manuscript. The authors declare that no competing financial interests exist.

We consider data sets collected using the SocioPatterns proximity-sensing platform (http://www.sociopatterns.org), based on wearable sensors that detect the close face-to-face proximity of the individuals wearing them. Informed consent was obtained from all participants, and the French national bodies responsible for ethics and privacy, the Commission Nationale de l'Informatique et des Libertés (CNIL, http://www.cnil.fr), approved the data collections. The high school (Thiers13) data set is structured in 9 classes, forming three subgroups of three classes corresponding to their specialisation: mathematics-physics (three MP classes), physics (two PC classes and one PSI class) or biology (2BIO1, 2BIO2, 2BIO3). The workplace (InVS) data set is structured in departments: DISQ (scientific direction), DMCT (department of chronic diseases and traumatisms), DSE (department of health and environment), SRH (human resources) and SFLE (logistics). For the conference data (SFHH), we do not have metadata on the participants, and the aggregated network structure was found to be homogeneous. Simulations of SIR and SIS processes on the temporal networks of contacts (original, resampled or reconstructed) are performed using a temporal Gillespie algorithm.
For each run of the simulations, all nodes are initially susceptible; a node chosen at random is taken as the seed of the epidemic and put in the infectious state at a point in time chosen at random over the duration of the contact data. A susceptible node in contact with an infectious node becomes infectious at rate $\beta$. Infectious nodes recover at rate $\mu$: in the SIR model they then enter the recovered state and cannot become infectious again, while in the SIS model they become susceptible again. If needed, the sequence of contacts is repeated in the simulation. For SIR processes, we run each simulation, with the seed node chosen at random, until no infectious individual remains (nodes are thus either still susceptible or have been infected and then recovered). We consider values of $\beta$ and $\mu$ yielding a non-negligible epidemic risk, i.e., such that a rather large fraction of simulations lead to a final size larger than 20% of the population (see Figs. [sampling]-[fig:infersfhh]), with one set of parameter values for the InVS data and another for the SFHH and Thiers13 data. Other parameter values are explored in the Supplementary Information. For each set of parameters, the distribution of epidemic sizes is obtained by performing a large number of independent simulations. For SIS processes, simulations are performed using a quasi-stationary approach. They are run until the system enters a stationary state, as witnessed by the mean number of infected nodes being constant over time. Simulations are then continued for 50,000 time steps while the number of infected nodes is recorded. For each set of parameters, the simulations are performed once with each node of the network chosen as the seed node.

We consider a population of $N$ individuals, potentially organised in groups. We assume that all the contacts occurring among a subpopulation of these individuals, of size $(1-f)N$, are known. This constitutes our resampled data, from which we need to construct a surrogate set of contacts concerning the remaining $fN$ individuals, for which no contact information is available: these contacts can occur among the missing individuals themselves and between them and the members of the resampled subpopulation. We assume that we know the group of every individual, as well as the overall activity timeline, i.e., the intervals during which contacts take place, separated by nights and weekends. To construct the surrogate data (WST method), we first compute from the activity timeline the total duration of the periods during which contacts can occur. Then, we measure in the sampled data:

* the density of links in the aggregated contact network;
* a row-normalised contact matrix, in which the element indexed by groups $X$ and $Y$ gives the probability for a node of group $X$ to have a link to a node of group $Y$;
* the list of contact durations;
* the lists of inter-contact durations for internal and external links, i.e., for links between nodes of the same group and for links between nodes belonging to two different groups, respectively;
* the lists of numbers of contacts per link, for internal (within-group) and external (between-group) links, respectively;
* the list of initial times between the start of the data set and the first contact on a link.

Given these quantities, we compute the number of additional links needed to keep the network density constant when we add back the excluded nodes.
We then construct each surrogate link according to the following procedure:

(a) a node is randomly chosen from the set of excluded nodes;
(b) knowing the group this node belongs to, we extract at random a target group, with probability given by the corresponding row of the row-normalised contact matrix;
(c) we draw a target node at random from the target group (if the target group is the node's own group, we take care that the target differs from the node itself), such that the two nodes are not already linked;
(d) depending on whether the two nodes belong to the same group or not, we draw from the internal or the external list the number of contact events taking place over the link;
(e) from the list of initial times, we draw the initial waiting time before the first contact;
(f) from the list of contact durations, we draw the durations of the contact events;
(g) from the internal or external list of inter-contact durations, we draw the durations of the intervals between successive contacts;
(h) if the resulting timeline exceeds the total available time, we repeat steps (d) to (g) until we obtain a set of values that fits within it;
(i) from the initial waiting time and the contact and inter-contact durations, we build the contact timeline of the link;
(j) finally, we insert in the contact timeline the breaks defined by the global activity timeline.

The construction of the surrogate version of the missing links uses as input the group structure of the subgraph that remains after sampling, as given by the contact matrix of link densities between the different groups of nodes present in the resampled subpopulation. Depending on the characteristics of the resampled data and of the corresponding contacts, the construction method can fail in several cases: (i) if an entire group (class/department) of nodes in the population is absent from the resampled data; (ii) if the remaining nodes of a specific group (class/department) are all isolated in the resampled contact network; (iii) if, during the algorithm, a node is selected in a certain group but cannot create any more links because it is already linked to all nodes of the groups it can connect to; (iv) if there are either no internal (within-group) or no external (between-group) links in the resampled contact network: in this case one of the lists of link temporal characteristics is empty and the corresponding structures cannot be reconstructed. Cases (i) and (ii) correspond to a complete loss of information about the connectivity of a group (class/department) of the population due to sampling; it is then impossible to reconstruct a sensible connectivity pattern for these nodes. Case (iii) is more subtle and occurs in situations of very low connectivity between groups: for instance, when a group has links only with one other specific group within the resampled contact network and both groups are small, the nodes of the first group may exhaust the set of possible links to nodes of the second group during the reconstruction algorithm; if a node of the first group is chosen again to create a link, the creation is not possible and the construction fails. Case (iv) usually corresponds to situations in which the links between individuals of different groups that remain in the resampled data set correspond to pairs of individuals who have had only one contact event: in such cases, the list of external inter-contact durations is empty and external links with more than one contact cannot be built.
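The group-aware part of this procedure, steps (a) to (c), can be sketched as follows; the timeline of the resulting link is then built as in the WT sketch shown earlier. The data structures and names are assumptions, and in practice a retry limit is needed: exhausting all candidate partners corresponds to failure case (iii) above.

```python
import random

def draw_surrogate_link(missing, members, row_cm, existing, rng=random):
    """Draw one WST-style surrogate link.
    missing : list of excluded nodes
    members : {group: list of all nodes of that group (resampled + surrogate)}
    row_cm  : {group: {group: probability}}, row-normalised contact matrix
    existing: set of frozenset links already present (resampled + surrogate)"""
    while True:
        i = rng.choice(missing)                             # (a) a missing node
        gi = next(g for g, ns in members.items() if i in ns)
        groups, probs = zip(*row_cm[gi].items())
        gj = rng.choices(groups, weights=probs, k=1)[0]     # (b) target group ~ contact-matrix row
        candidates = [n for n in members[gj]                # (c) partner not yet linked to i
                      if n != i and frozenset((i, n)) not in existing]
        if candidates:
            j = rng.choice(candidates)
            return frozenset((i, j))
```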
In order to test the effect of correlations in the temporal network, we use four shuffling methods, based on null models previously defined in the literature.

Link shuffling. The contact timelines associated with each link are randomly redistributed among the links. Correlations between the timelines of links adjacent to a given node are destroyed, as are correlations between weights and topology. The structure of the network is kept, as well as the global activity timeline.

Time shuffling. From the contact data we build the lists of, respectively, contact durations, inter-contact durations and numbers of contacts per link. We also measure the list of initial times between the start of the data set and the first contact of each link. For each link, we draw at random a starting time, a number of contacts, contact durations and inter-contact durations from these lists, such that the total duration of the timeline does not exceed the total available time. We then construct the contact timelines, thus destroying the temporal correlations among contacts. The structure of the network is instead kept fixed.

CM shuffling. We perform a link rewiring separately on each compartment of the contact matrix, i.e., we randomly redistribute links, with their contact timelines, within each group and within each pair of groups. We thus destroy the structural correlations inside each compartment of the contact matrix, while preserving the group structure of the network as given by the link density contact matrix and the contact matrix of total contact times between groups.

CM-time shuffling. We perform both a CM shuffling and a time shuffling.

Table [tab:data]: Data sets. For each data set we specify the type of social situation, the number of individuals whose contacts were measured, the corresponding participation rate, and the duration and dates of the data collection.

Figure [sampling]: SIR epidemic simulations on resampled contact networks. We plot the distributions of epidemic sizes (fraction of recovered individuals) at the end of SIR processes simulated on top of resampled contact networks, for different values of the fraction $f$ of nodes removed. The plot shows the progressive disappearance of large epidemic outbreaks as $f$ increases. The parameters of the SIR model take one set of values for InVS and another for Thiers13 and SFHH (see Methods). The case $f=0$ corresponds to simulations using the whole data set, i.e., the reference case. For each value of $f$, a large number of independent simulations were performed.
Figure [fig:inferinvs]: SIR simulations for the InVS (workplace) case. We compare the outcomes of SIR epidemic simulations performed on resampled and reconstructed contact networks, for the different reconstruction methods. We plot the distribution of epidemic sizes (fraction of recovered individuals) at the end of SIR processes simulated on top of resampled (sample) and reconstructed contact networks, for different values of the fraction $f$ of nodes removed, and for the 5 reconstruction methods described in the text (0, W, WS, WT, WST). The parameters of the SIR model are as specified in the Methods. The case $f=0$ corresponds to simulations using the whole data set, i.e., the reference case. For each value of $f$, a large number of independent simulations were performed.

Figure [fig:inferthiers]: SIR simulations for the Thiers13 (high school) case. We compare the outcomes of SIR epidemic simulations performed on resampled (top left) and reconstructed contact networks, for the different reconstruction methods. We plot the distribution of epidemic sizes (fraction of recovered individuals) at the end of SIR processes simulated on top of resampled (sample) and reconstructed contact networks, for different values of the fraction $f$ of nodes removed, and for the 5 reconstruction methods described in the text (0, W, WS, WT, WST). The parameters of the SIR model are as specified in the Methods. The case $f=0$ corresponds to simulations using the whole data set, i.e., the reference case. For each value of $f$, a large number of independent simulations were performed.

Figure [fig:infersfhh]: SIR simulations for the SFHH (conference) case. We compare the outcomes of SIR epidemic simulations performed on resampled and reconstructed contact networks, for the different reconstruction methods. We plot the distribution of epidemic sizes (fraction of recovered individuals) at the end of SIR processes simulated on top of resampled (sample) and reconstructed contact networks, for different values of the fraction $f$ of nodes removed, and for three of the reconstruction methods described in the text (0, W, WT). In this case, as the population is not structured, methods W and WS on the one hand, and WT and WST on the other hand, are equivalent. The parameters of the SIR model are as specified in the Methods. The case $f=0$ corresponds to simulations using the whole data set, i.e., the reference case. For each value of $f$, a large number of independent simulations were performed.

Figure [fig:dist]: Accuracy of the different reconstruction methods.
We perform SIR epidemic simulations for each case, for different values of the fraction $f$ of missing nodes, for both sampled networks and networks reconstructed with the different methods. We compare in each case, as a function of $f$, the fraction of outbreaks that lead to a final fraction of recovered individuals larger than 20% of the population (a, b, c), and the average size of these large outbreaks (d, e, f). The dashed lines give the corresponding values for simulations performed on the complete data sets. The different methods are: reconstruction conserving only the link density and the average weight of the resampled data (0); reconstruction conserving only the link density and the distribution of weights of the resampled data (W); reconstruction preserving, in addition to the W method, the group structure of the resampled data (WS); reconstruction conserving the link density, the distribution of weights and the distributions of contact times, inter-contact times and numbers of contacts per link measured in the resampled data (WT); and the full method conserving all these properties (WST). We also plot, as a function of $f$, the failure rate of the WST algorithm, i.e., the percentage of failed reconstructions (g, h, i). For the SFHH case, as the population is not structured into groups, methods W and WS are equivalent, as are methods WT and WST; moreover, reconstruction is always possible. The SIR parameters are as specified in the Methods, and each point is averaged over independent simulations.

Figure [fig:sis_invs]: SIS simulations for the InVS (workplace) case. We perform SIS epidemic simulations and report the phase diagram of the SIS model for the original, resampled and reconstructed contact networks. Each panel shows the prevalence in the stationary state of the SIS model, computed as described in the Methods, as a function of $\beta$, for several values of $\mu$. Simulations are performed in each case using either the complete data set (continuous lines), resampled data (dashed lines) or contact networks reconstructed with the WST method (pluses). The fraction of excluded nodes in the resampling differs between panels a, c, e, g and panels b, d, f, h.

Figure [fig:infercmshuffled]: SIR simulations on shuffled data. We compare the outcomes of SIR epidemic simulations performed on resampled and reconstructed contact networks, for shuffled data. We plot the distribution of epidemic sizes (fraction of recovered individuals) at the end of SIR processes simulated on top of either resampled (a, c, e) or reconstructed (b, d, f) contact networks, for different values of the fraction $f$ of nodes removed. We use here the WST reconstruction method, and the data set considered is a CM-shuffled version (see Methods) of the original data, in which the shuffling procedure removes structural correlations of the contact network within each group. The SIR parameters are as specified in the Methods. The case $f=0$ corresponds to simulations using the whole (reshuffled) data set, i.e., the reference case. For each value of $f$, a large number of independent simulations were performed.

Figure [fig:infer_high]: SIR simulations for very large numbers of missing nodes. We simulate SIR processes on reconstructed contact networks for large values of the fraction $f$ of removed nodes and plot the distributions of epidemic sizes for simulations on the reconstructed networks and on the whole data set (case $f=0$).
The SIR parameters are as specified in the Methods, and a large number of simulations were performed for each value of $f$. The distributions of epidemic sizes for simulations performed on resampled data sets are not shown, since at these high values of $f$ almost no epidemics occur.

As described in the main text, we consider temporally resolved networks of contacts in a population of $N$ individuals and perform a resampling experiment by selecting a subpopulation of size $(1-f)N$. We assume that only the contacts occurring among this subpopulation are known, and we compare the properties of the corresponding resampled subnetwork with those of the original network. Supplementary Figure [fig:sampling_network] shows how population sampling affects several statistical properties of the contact networks. On the one hand, the degree distribution of the aggregated network of contacts systematically shifts towards smaller degree values. This is expected, as each remaining node has, in the resampled network, a degree which is at most its degree in the original network, and strictly smaller if some of its neighbours are not part of the resampled population. On the other hand, the statistical distributions of several quantities of interest are not affected by sampling: this is the case for the quantities attached either to single contacts or to single links, namely contact and inter-contact durations, numbers of contacts per link and link weights (the weight of a link is given by the total duration of the contacts between the two corresponding nodes). Moreover, as shown in Supplementary Figure [fig:sampling], the density of the aggregated network, i.e., the ratio between the number of links and the number of possible links, is on average conserved by the random resampling procedure. It varies, however, across different realisations of the resampling, and the corresponding variance increases with the fraction of excluded nodes. Supplementary Figure [fig:clust_trans] shows how the average clustering coefficient of the aggregated network varies under resampling: notably, it remains high and close to its original value until large values of $f$ are reached. The transitivity of the network, defined as three times the number of triangles divided by the number of connected triplets (connected subgraphs of three nodes and two edges), is even less affected than the clustering coefficient by the resampling procedure. In the case of structured populations, Supplementary Figures [fig:cml_sample_invs] and [fig:cml_sample_thiers13] show that the stability of the resampled network's density holds at the more detailed level of the contact matrices of link densities.
In such matrices, the element corresponding to groups $X$ and $Y$ is given by the number of links between individuals of groups $X$ and $Y$, normalised by the total number of possible links between these two groups (if $n_X$ denotes the number of individuals in group $X$, the number of possible links is equal to $n_X n_Y$ for $X \neq Y$ and to $n_X(n_X-1)/2$ for $X = Y$). These figures clearly illustrate how the diagonal and block-diagonal structures are preserved, and Supplementary Figure [fig:sampling] gives a quantitative assessment of this stability by showing that the cosine similarity between the contact matrices of the resampled and original aggregated contact networks remains high even when a large fraction of the nodes is excluded. We moreover illustrate in Supplementary Figures [fig:int_ext_invs] and [fig:int_ext_thiers] the difference in the statistical properties of contacts and links within and between groups, still for structured populations:

* the distributions of contact durations are indistinguishable;
* the distribution of link weights (aggregated contact durations) is broader for links between individuals belonging to the same group than for links joining individuals of different groups;
* this is due to the difference in the distributions of numbers of contacts per link, which is broader for links within groups than for links between groups;
* the distributions of inter-contact durations also differ slightly, with smaller averages for within-group links.

Most importantly, all these properties and distributions remain stable under resampling, showing that reliable information on the distributions of contact and inter-contact durations, aggregated contact durations and numbers of contacts per link can be obtained from the resampled data, including the statistical differences between links joining members of different groups and links between two individuals of the same group.

As described in the main text, and in particular in the Methods section, we construct a surrogate set of contacts for the individuals excluded by the resampling. We compare here the properties of the resulting contact networks (obtained by merging the resampled contact network and the surrogate set of contacts) with those of the original contact network. Supplementary Figure [fig:infer_network] shows that the degree distribution, which is not constrained by the reconstruction procedure, deviates from the original distribution. On the other hand, the distributions of contact durations, inter-contact durations, numbers of contacts per link and link weights are preserved. Moreover, the link density contact matrices of the reconstructed networks (Supplementary Figures [fig:cml_infer_invs] and [fig:cml_infer_thiers13]) share a high similarity with the original contact matrices, even for high fractions of excluded nodes (Supplementary Figure [fig:simcm]).
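The similarity values quoted here and in Table [tab:sim_cm] can be obtained by treating each contact matrix as a vector and computing a cosine similarity; below is a minimal sketch under the assumption that matrices are stored as nested dictionaries indexed by group labels.

```python
from math import sqrt

def cosine_similarity(cm_a, cm_b):
    """Cosine similarity between two contact matrices given as nested dicts
    {group: {group: density}} defined on the same set of groups."""
    groups = sorted(cm_a)
    a = [cm_a[x][y] for x in groups for y in groups]
    b = [cm_b[x][y] for x in groups for y in groups]
    dot = sum(u * v for u, v in zip(a, b))
    na, nb = sqrt(sum(u * u for u in a)), sqrt(sum(v * v for v in b))
    return dot / (na * nb) if na > 0 and nb > 0 else 0.0
```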
For completeness, we also compute the contact matrices of contact time density (CMT), in which each element is given by the total time in contact between individuals of groups $X$ and $Y$, normalised by the total number of possible links between these two groups: it gives the average time spent in contact by two random individuals of groups $X$ and $Y$. Supplementary Figures [fig:simcm], [fig:cmt_infer_invs] and [fig:cmt_infer_thiers13] show that the structure of these matrices is well recovered by the reconstruction methods, with high similarity to the original matrices.

We observe for the high school and the conference the same effect on the phase diagram of the SIS model as in the workplace: sampling leads to a shift of the epidemic threshold to higher values and thus to an underestimation of the epidemic risk. The phase diagram and the epidemic threshold are estimated more accurately by using reconstructed networks, thus giving a better evaluation of the epidemic risk (Supplementary Figures [fig:sis_thiers] and [fig:sis_sfhh]).

In the main text, we have considered values of the SIR model parameters leading to a non-negligible epidemic risk and a value of $\mu$ corresponding to slow processes. We consider here several other values of the parameters, corresponding either to faster processes (Supplementary Figures [fig:beta.004]-[fig:beta.04_large]) or to a smaller epidemic risk (Supplementary Figure [fig:beta.004_small]). In all cases, simulations performed on the resampled contact networks lead to a strong underestimation of the epidemic sizes, with distributions shifting to smaller values as $f$ increases, while the use of reconstructed data sets leads to a better estimation and, generally speaking, a slight overestimation of the epidemic risk.

We give here details on the alternative reconstruction methods mentioned in the main text, which use less information than the WST method. In each case we consider the same setup as for the complete method: a population of $N$ individuals (the nodes of the contact network), potentially organised in groups, for which we know all the contacts taking place among a subpopulation of size $(1-f)N$. For the remaining individuals, no contact information is available, but we know to which group they belong. We also have access to the overall activity timeline, i.e.,
to the successive intervals during which contacts can happen (daytimes), contacts being excluded during nights and weekends. The alternative reconstruction methods are the following.

Method 0: we perform the reconstruction using only the network density and the average link weight, both measured in the resampled network. The algorithm goes as follows:

1. We measure in the resampled data:
   * the density of links in the time-aggregated network;
   * the average link weight (the weight of a link is defined as the total contact time between the two linked nodes).
2. We compute the number of links that must be added to keep the network density constant when we add back the excluded nodes.
3. We construct the links according to the following procedure:
   * a node is randomly chosen from the set of excluded nodes;
   * a second node is randomly chosen from the set of all other nodes;
   * we compute the number of contact events as the average link weight divided by the temporal resolution of the data set, and we randomly choose that many time windows of this length within the activity windows defined by the activity timeline as contact events between the two nodes.

Method W: we perform the reconstruction using only the network density and the distribution of link weights, both measured in the resampled network. The algorithm goes as follows:

1. We measure in the resampled data:
   * the density of links in the time-aggregated network;
   * the list of link weights (the weight of a link is defined as the total contact time between the two linked nodes).
2. We compute the number of links that must be added to keep the network density constant when we add back the excluded nodes.
3. We construct the links according to the following procedure:
   * a node is randomly chosen from the set of excluded nodes;
   * a second node is randomly chosen from the set of all other nodes;
   * we draw the weight of the link from the list of link weights;
   * we compute the corresponding number of contact events as this weight divided by the temporal resolution of the data set, and we randomly choose that many time windows of this length within the activity windows defined by the activity timeline as contact events between the two nodes.

Method WS: we perform the reconstruction using the network density, the distributions of link weights for internal (within-group) and external (between-group) links, and the structure of the aggregated network given by the link density contact matrix, all measured in the resampled network. The algorithm goes as follows:

1. We measure in the resampled data:
   * the density of links in the time-aggregated network;
   * a row-normalised contact matrix, in which the element indexed by groups $X$ and $Y$ gives the probability for a node in group $X$ to have a link to a node of group $Y$;
   * the lists of link weights for internal and external links, respectively (internal links are links between nodes that belong to the same group, external links are links between nodes from different groups).
2. We compute the number of links that must be added to keep the network density constant when we add back the excluded nodes.
3. We construct the links according to the following procedure:
   * a node is randomly chosen from the set of excluded nodes;
   * knowing the group this node belongs to, we extract at random a target group, with probability given by the corresponding row of the contact matrix;
   * we draw a target node at random from the target group (if the target group is the node's own group, we check that the target differs from the node itself);
   * depending on whether the two nodes belong to the same group or not, we draw the weight of the link from the internal or the external list;
   * as for the W method, we extract at random the corresponding number of contact events of one time step each within the activity timeline.

Method WT: we perform the reconstruction using the network density, the distribution of link weights and the temporal structure of
the contacts given by the distributions of contact durations , inter - contact durations , number of contacts per link and initial waiting times before the first contact , all measured in the resampled network .the algorithm goes as follows : + 1 .we compute from the activity timeline the time as the total duration of the periods during which contacts can occur . 2 .we measure in the resampled data : * the density of links in the time - aggregated network ; * the list of contact durations ; * the list of inter - contact durations ; * the list of numbers of contacts per link ; * the list of initial waiting times before the first contact for each link ; 3 .we compute the number of links that must be added to keep the network density constant when we add the excluded nodes ; 4 .we construct links according to the following procedure : 1 .a node is randomly chosen from the set of excluded nodes ; 2 .a node is randomly chosen from the set of all other nodes ; 3 .we draw from the number of contact events taking place over the link ; 4 . from ,we draw the initial waiting time before the first contact ; 5 . from ,we draw contact durations , ; 6 . from , we draw inter - contact durations , ; 7 . while , we repeat steps ( c ) to ( f ) ; 8 . from ,the and , we build the contact timeline of the link ; 9 . finally , we insert in the contact timeline the breaks defined by the activity timeline . 1. we measure in the resampled data the transitivity of the time - aggregated network ; 2 . for the construction of each link of a node : * we calculate the current transitivity of the network ; * we list the potential targets in two lists and , depending on whether the creation of a link between and would close a triangle or not ; * * * if , we draw a target node at random from such that and are not linked ; * * else if , we draw a target node at random from such that and are not linked .
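to illustrate the timeline - construction step of the wt method , the sketch below draws the number of contacts , the initial waiting time , the contact durations and the inter - contact durations of one surrogate link from the empirical lists measured in the resampled data , redrawing them until the resulting timeline fits within the total activity time , as in steps ( c ) to ( g ) above . function and variable names are illustrative , and the final re - insertion of the night and weekend breaks is omitted ; this is a minimal sketch , not the code used for the paper .

....
import random

def build_contact_timeline(n_contacts_list, durations_list, gaps_list,
                           initial_waits_list, total_active_time):
    """Draw a synthetic contact timeline for one surrogate link.

    The four lists are the empirical distributions measured in the
    resampled data; values are drawn from them with replacement."""
    # Redraw everything until the synthetic timeline fits within the
    # total activity time (steps (c)-(g) in the text).
    while True:
        n = random.choice(n_contacts_list)              # number of contacts on the link
        t0 = random.choice(initial_waits_list)           # initial waiting time
        events = [random.choice(durations_list) for _ in range(n)]
        gaps = [random.choice(gaps_list) for _ in range(max(n - 1, 0))]
        if t0 + sum(events) + sum(gaps) <= total_active_time:
            break
    # Build the (start, end) intervals of the successive contacts.
    timeline, t = [], t0
    for k in range(n):
        timeline.append((t, t + events[k]))
        t += events[k] + (gaps[k] if k < n - 1 else 0)
    return timeline
....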
data describing human interactions often suffer from incomplete sampling of the underlying population . as a consequence , the study of contagion processes using data - driven models can lead to a severe underestimation of the epidemic risk . here we present a systematic method to alleviate this issue and obtain a better estimation of the risk in the context of epidemic models informed by high - resolution time - resolved contact data . we consider several such data sets collected in various contexts and perform controlled resampling experiments . we show how the statistical information contained in the resampled data can be used to build a series of surrogate versions of the unknown contacts . we simulate epidemic processes on the resulting reconstructed data sets and show that it is possible to obtain good estimates of the outcome of simulations performed using the complete data set . we discuss limitations and potential improvements of our method . human interactions play an important role in determining the potential transmission routes of infectious diseases and other contagion phenomena . their measure and characterisation thus represent an invaluable contribution to the study of transmissible diseases . while surveys and diaries in which volunteer participants record their encounters have provided crucial insights ( see however for recent investigations of the memory biases inherent in self - reporting procedures ) , new approaches have recently emerged to measure contact patterns between individuals with high resolution , using wearable sensors that can detect the proximity of other similar devices . the resulting measuring infrastructures register contacts specifically within the closed population formed by the participants wearing sensors , with typically high spatial and temporal resolutions . in the recent years , several data gathering efforts have used such methods to obtain , analyse and publish data sets describing the contact patterns between individuals in various contexts in the form of temporal networks : nodes represent individuals and , at each time step , a link is drawn between pairs of individuals who are in contact . such data has been used to inform models of epidemic spreading phenomena used to evaluate epidemic risks and mitigation strategies in specific , size - limited contexts such as schools or hospitals , finding in particular outcomes consistent with observed outbreak data or providing evidence of links between specific contacts and transmission events . despite the relevance and interest of such detailed data sets , as illustrated by these recent investigations , they suffer from the intrinsic limitation of the data gathering method : contacts are registered only between participants wearing sensors . contacts with and between individuals who do not wear sensors are thus missed . in other words , as most often not all individuals accept to participate by wearing sensors , many data sets obtained by such techniques suffer from population sampling , despite efforts to maximise participation through e.g. scientific engagement of participants . hence , the collected data only contains information on contacts occurring among a fraction of the population under study . 
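for concreteness , the contact data described above can be thought of as a list of ( t , i , j ) records , one per elementary time step during which individuals i and j are in contact ; the sketch below aggregates such a list into a weighted static network whose link weights are total times in contact . the record format and the 20-second resolution used in the example are assumptions made for the illustration , not a description of the published data format .

....
from collections import defaultdict

def aggregate_contacts(contacts, resolution=20):
    """Aggregate a temporal contact network, given as (t, i, j) records
    (one record per `resolution`-second contact interval), into a weighted
    static network whose link weights are total times in contact."""
    weights = defaultdict(int)
    for t, i, j in contacts:
        key = (min(i, j), max(i, j))   # undirected link
        weights[key] += resolution     # each record adds one time step of contact
    return dict(weights)

# toy usage
contacts = [(0, "A", "B"), (20, "A", "B"), (20, "B", "C")]
print(aggregate_contacts(contacts))    # {('A', 'B'): 40, ('B', 'C'): 20}
....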
population sampling is well - known to affect the properties of static networks : various statistical properties and mixing patterns of the contact network of a fraction of the population of interest may differ from those of the whole population , even if the sampling is uniform , and several works have focused on inferring network statistics from the knowledge of incomplete network data . both structural and temporal properties of time - varying networks might as well be affected by missing data effects . in addition , a crucial though little studied consequence of such missing data is that simulations of dynamical processes in data - driven models can be affected if incomplete data are used . for instance , in simulations of epidemic spreading , excluded nodes are by definition unreachable and thus equivalent to immunised nodes . due to herd vaccination effects , the outcome of simulations of epidemic models on sampled networks is thus expected to be underestimated with respect to simulations on the whole network . ( we note however , that in the different context of transportation networks , it was found in that the inclusion of the most important transportation nodes can be sufficient to describe the global worldwide spread of influenza - like illnesses , at least in terms of times of arrival of the spread in various cities . ) how to estimate the outcome of dynamical processes on contact networks using incomplete data remains an open question . here we make progresses on this issue for incompletely sampled data describing networks of human face - to - face interactions , collected by infrastructures based on sensors , under the assumption that the population participating to the data collection is a uniform random sample of the whole population of interest . ( we do not therefore address here the issue of non - uniform sampling of contacts that may result from other measurement methods such as diaries or surveys . ) we proceed through resampling experiments on empirical data sets in which we exclude uniformly at random a fraction of the individuals ( nodes of the contact network ) . we measure how relevant network statistics vary under such uniform resampling and confirm that , although some crucial properties are stable , numerical simulations of spreading processes performed using incomplete data lead to strong underestimations of the epidemic risk . our goal and main contribution consists then in putting forward and comparing a hierarchy of systematic methods to provide better estimates of the outcome of models of epidemic spread in the whole population under study . to this aim , we do not try to infer the true sequence of missing contacts . instead , the methods we present consist in the construction of surrogate contact sequences for the excluded nodes , using only structural and temporal information available in the resampled contact data . we perform simulations of spreading processes on the reconstructed data sets , obtained by the union of the resampled and surrogate contacts , and investigate how their outcomes vary depending on the amount of information included in the reconstruction method . we show that it is possible to obtain outcomes close to the results obtained on the complete data set , while , as mentioned above , using only the incomplete data severely underestimates the epidemic risk . 
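a minimal version of the resampling experiment described above , in which a fraction of the individuals is excluded uniformly at random and only the contacts among the remaining ones are kept , could look as follows ; the function names and the ( t , i , j ) record format are illustrative assumptions .

....
import random

def resample_population(contacts, fraction_excluded, seed=None):
    """Remove a fraction of individuals uniformly at random and keep only
    the contacts occurring between the remaining ones."""
    rng = random.Random(seed)
    nodes = sorted({n for _, i, j in contacts for n in (i, j)})
    n_excluded = int(round(fraction_excluded * len(nodes)))
    excluded = set(rng.sample(nodes, n_excluded))
    kept = [(t, i, j) for t, i, j in contacts
            if i not in excluded and j not in excluded]
    return kept, excluded
....

the resampled contact list returned here is what the epidemic simulations are then run on , and it is also the only input available to the reconstruction methods .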
we demonstrate the efficiency of our procedure using three data sets collected in widely different contexts , representative of very different population structures found in day - to - day life : a scientific conference , a high school and a workplace . finally , we discuss the limitations of our method in terms of sampling range , model parameters and population sizes .
in this paper we introduce a numerical code designed to solve the einstein field equations for axisymmetric spacetimes . even though the predominant focus in numerical relativity in recent years has been to study situations of relevance to gravitational wave detection , and hence lacking symmetries , there are still numerous interesting problems , both physical and computational , that can be tackled with an axisymmetric code . the advantages of restricting the full 3d problem to axisymmetry ( 2d ) are that the complexity and number of equations are reduced , as are the computational requirements compared to solving a similar problem in 3d . prior numerical studies of axisymmetric spacetimes include head - on black hole collisions , collapse of rotating fluid stars , the evolution of collisionless particles applied to study the stability of star clusters and the validity of cosmic censorship , evolution of gravitational waves , black hole - matter - gravitational wave interactions , and the formation of black holes through gravitational wave collapse and the corresponding critical behavior at the threshold of formation . our goals for creating a new axisymmetric code are not only to explore a wider range of phenomena than those studied before , but also to provide a framework to add adaptive mesh refinement ( amr ) and black hole excision to allow more thorough and detailed investigations than prior works . the outline of the rest of the paper is as follows . in section [ sec : formalism ] we describe the decomposition of spacetime that we adopt to arrive at our system of equations . the formalism is the familiar adm space + time decomposition ( in this case 2 + 1 ) applied to a dimensionally reduced spacetime obtained by dividing out the axial killing vector , following a method devised by geroch . in section [ sec : coords_and_vars ] we specialize the equations to our chosen coordinate system , namely cylindrical coordinates with a conformally flat 2-metric . at this stage we do not model spacetimes with angular momentum , and we include a massless scalar field for the matter source . in section [ sec : apph ] we discuss how we search for apparent horizons during evolution . in section [ sec : implementation ] we describe our numerical implementation of the set of equations derived in section [ sec : coords_and_vars ] . a variety of tests of our code are presented in section [ sec : test ] , which is followed by conclusions in section [ sec : conclusion ] . some details concerning our finite difference approximations , the solution of elliptic equations via the multi - grid technique , and a spherically symmetric code used for testing purposes are given in appendices a and b. unless otherwise specified , we use the units and conventions adopted by misner , thorne and wheeler . the most common approach in numerical relativity is to perform the so - called 3 + 1 , or adm , split of the spacetime that one would like to evolve .
in this procedure , a timelike vector fieldis chosen together with spatial hypersurfaces that foliate the spacetime .if axisymmetry is assumed , it is usually incorporated once the adm decomposition has been done , and is reflected in the independence of the various metric and matter quantities on the ignorable , angular coordinate , .our approach , which follows geroch and nakamura _ et al _ , reverses this procedure .we assume axisymmetry from the outset and perform a reduction of the spacetime based on this assumption .once we have projected out the symmetry , we perform an adm - like split ( now a split ) of the remaining . more specifically , we begin with a spacetime metric manifold and latin letters to denote coordinates in the reduced . ] on our manifold .the axisymmetry is realized in the existence of a spacelike killing vector x^= ( ) ^ , with closed orbits .we define the projection operator , allowing us to project tensors from the 4dimensional manifold with metric to a manifold with metric , as g _= _ - x_x _ , where is the norm of the killing vector s^2=x^x_. with the definition of the vector y_= x _ , the metric on the full , spacetime can be written as _= ( cc g_ab + s^2 y_a y_b & s^2 y_a + s^2 y_b & s^2 ) . projecting , or dividing out the symmetry amounts to expressing quantities in terms of quantities on the .for instance , the connection coefficients associated with the metric are ^(4 ) ^ _ & = & ^(3 ) ^ _ + ^ _ + & = & ^(3 ) ^ _ + s^2 g^ + y^ , [ christof ] where are the connection coefficients constructed from the , and we have defined the antisymmetric tensor z _ = _ y _ - _ y_. notice that is an intrinsically object , in that . with some algebra , the ricci tensor on the , ,can now be written as the ricci tensor on the reduced space , , together with additional terms involving fields coming from the dimensional reduction ^(4)r _= ^(3)r _ + d _ ^ _ - d_^_ + ^ _ _ ^- ^ _ ^ _ , [ ricci_4 ] where is the covariant derivative on the dimensional manifold . expressed in terms of , , and , the components of ( [ ricci_4 ] ) are ^(4)r _ & = & 14 s^4 z_bcz^bc - s d^a d_a s , [ eq : r_phiphi ] + ^(4)r_a & = & 12s d^c(s^3z_ac ) + y_a , [ eq : r_rhia ] + ^(4)r_ab & = & ^(3)r_ab - 1s d_a d_bs - 12 s^2 z_acz_b^c - 1s d^cy_b ) + y_a y_b .[ eq : r_ab ] taking the trace of ( [ ricci_4 ] ) , and using the definitions described above , gives the decomposition of the ricci scalar as ^(4)r = ^(3)r - 2s d^a d_as - 14 s^2 z_bcz^bc .[ eq : ricci ] the einstein equations , with a stress - energy tensor , are ^(4)r _ - ^(4)r _= 8t_. [ eq : einstein ] using equations ( [ eq : r_phiphi]-[eq : ricci ] ) , we can write the einstein equations as d^ad_a s & = & - w_aw^a 2s^3 - 8s ( t _ - 12 s^2 t_^ ) , [ eq : s_eom ] + d_[a w_b ] & = & 8 s _ ab^c t_c , [ eq : w_eom ] + ^(3)r_ab & = & 1s d_a d_bs + 12 s^2 z_acz_b^c + 8 ( t _ g^_a g^_b - 12 g_ab t_^ ) , [ eq : r_eom ] where we have introduced the twist of the killing vector w_= s^42 _ y^z^ , and the four and three dimensional levi - civita symbols and , respectively .the twist vector is intrinsically , _i.e. _ . 
furthermore , is divergence free d^a=0 .[ eq : div_w_eom ] at this point , the first reduction is essentially done .equation ( [ eq : r_eom ] ) can be viewed as the einstein equations , coupled to the projection of the stress - energy tensor and to induced `` matter fields '' and ( or ) .the equations of motion for and are given by ( [ eq : s_eom ] ) and ( [ eq : w_eom ] ) respectively ; additional equations of motion will need to be specified for whatever true matter fields one incorporates into the system .this procedure so far is completely analogous to the kaluza - klein reduction from five to four dimensions , in which the geometry becomes gravity in dimensions coupled to electromagnetism and a scalar field . here, however , the reduced has no dynamics and we have the rather appealing picture of having divided out " the dynamics of the four dimensional gravitational system and reinterpreted the two degrees of freedom in the gravitational field as scalar ( ) and `` electromagnetic '' ( , or ) degrees of freedom .dimensions has only a single degree of freedom as compared to the two degrees of freedom in dimensions . ]we now perform the adm split of the remaining spacetime .this is done by first foliating the spacetime into a series of spacelike hypersurfaces with unit , timelike normal vector .then , similar to the dimensional reduction above , we decompose quantities into components orthogonal and tangent to , using the projection tensor h_ab = g_ab + n_a n_b .we now define the components of , and the induced metric ,-dimensional tensor components . ] using the following decomposition of the g_ab dx^a dx^b = -^2 dt^2 + h_ab ( dx^a + ^adt ) ( dx^b + ^bdt ) , where is the lapse function and the shift vector .the gravitational equations now become the evolution equations for the components of the and the curvature _t h_ab & = & -2k_ab + _ a _ b + _ b _ a [ eq : hdot ] + _ t k_ab & = & ^c _ c k_ab + k_ac _ b ^c + k_bc _ a ^c + + & & - 2k_ack_b^c - _ a_b - ^(3)r_ab , [ eq : kdot ] [ eq : k_ab_dot ] the hamiltonian constraint equation ^(2)r - k^a_bk^b_a + k^2 = ^(3)r + 2 ^(3)r_abn^a n^b [ eq : hc ] and the and momentum constraint equations _ a k_b^a - _ ak = - ^(3)r_cd n^d h^c_b . [ eq : mc ] in the above , is the covariant derivative compatible with the , , and and are the ricci tensor and ricci scalar , respectively .note that because gravity in dimensions has no propagating degrees of freedom , the constraint equations fix the geometry completely .thus , if desired , one can use the constraint equations ( [ eq : hc]-[eq : mc ] ) instead of the evolution equations ( [ eq : hdot]-[eq : kdot ] ) to solve for .the freely specifiable degrees of freedom of the are encoded in and , which are evolved using ( [ eq : s_eom ] ) , ( [ eq : w_eom ] ) and ( [ eq : div_w_eom ] ) . note that ( [ eq : w_eom ] ) and ( [ eq : div_w_eom ] ) constitute four equations for the three components of purely spatial have four equations for the three components of .the purely spatial part of ( [ eq : w_eom ] ) is , in language , the angular momentum constraint equation and only needs to be solved at the initial time in a free evolution of .we note that the restricted class of axisymmetric spacetimes having no angular momentum ( rotation ) is characterized by the existence of a _ scalar twist _, , such that . 
in the vacuum case, generally represents odd parity gravitational waves , while encodes even parity , or brill waves .we further note that this class includes the special case , which will be the focus of our discussion below .in this section we describe a particular coordinate system and set of variables which , in the context of the formalism described in the previous section , provides us with the concrete system of partial differential equations that we solve numerically .we also detail the outer boundary conditions we use , and the on - axis regularity conditions necessary to obtain smooth solutions to these equations .we only consider spacetimes with zero angular momentum , and no odd - parity gravitational waves ; therefore .we choose a conformally flat , cylindrical coordinate system for the h_ab dx^a dx^b = ( , z , t)^4 ( d^2 + dz^2 ) .[ hab_def ] this choice for exhausts the coordinate freedom we have to arbitrarily specify the two components of the shift vector and . in order to maintain the form ( [ hab_def ] ) during an evolution, we use the momentum constraints , which are elliptic equations , to solve for and at each time step .the hamiltonian constraint provides a third elliptic equation that we can use to solve for the conformal factor . for a slicing condition, we use maximal slicing of hypersurfaces _ in the manifold__i.e ._ we impose , where is the trace of the extrinsic curvature tensor of slices of .this condition ( specifically ) gives us an elliptic equation for the lapse . instead of directly evolving the norm of the killing vector , , we evolve the quantity , defined by s = ^2 e^| , and furthermore , we convert the resultant evolution equation for ( [ eq : s_eom ] ) to one that is first order in time by defining the quantity , which is `` conjugate '' variable to , via |&= & - 2k_^- k_z^z + & = & - n^a ( s)_,a + .[ eq : bo_def ] part of the motivation behind using and as fundamental variables is to simplify the enforcement of on - axis regularity . in particular , regularity as implies that and must exhibit leading order behavior of the form and respectively , and experience has shown it to be easier to enforce such conditions , than to enforce the leading order behavior of ( or its time derivative ) near the axis , which is . as mentioned previously , the only matter source we currently incorporate is a massless scalar field , which satisfies the usual wave equation .we convert this equation to first - order - in - time form by defining a conjugate variable : ^2 n^a _ , a .the stress - energy tensor for the scalar field is t _ = 2 _ , _ , - _ ^ , _ , . 
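the first - order - in - time reduction used above for the scalar field can be illustrated on the flat - space wave equation : with a conjugate variable for the time derivative of the field , the second - order equation becomes a pair of first - order evolution equations . the sketch below uses a simple staggered explicit update on a 1d grid purely to show the field / conjugate - variable split ; the curved - space equations and the crank - nicholson scheme actually used in the code ( described later ) are not reproduced here , and all names are illustrative .

....
import numpy as np

def evolve_wave_1d(n=201, length=1.0, courant=0.5, n_steps=400):
    """Evolve phi_t = Pi, Pi_t = phi_xx (flat-space wave equation in
    first-order-in-time form) with a staggered explicit update."""
    dx = length / (n - 1)
    dt = courant * dx
    x = np.linspace(0.0, length, n)
    phi = np.exp(-200.0 * (x - 0.5 * length) ** 2)   # initial gaussian pulse
    pi = np.zeros(n)                                 # time-symmetric initial data
    for _ in range(n_steps):
        lap = np.zeros(n)
        lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx ** 2
        pi += dt * lap        # update the conjugate variable first
        phi += dt * pi        # then the field, using the updated conjugate
        phi[0] = phi[-1] = 0.0   # simple reflecting boundaries for the sketch
    return x, phi, pi
....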
using all of the above definitions and restrictions within the formalism detailed in the previous section, we end up with the following system of equations that we solve with our numerical code , described in the next section .the maximal slicing condition results in the following elliptic equation for - \frac{\psi^4}{6\alpha } \bigl [ 2 \alpha \rho \bo + \beta^\rho{}_{,\rho } - \beta^z{}_{,z } \bigr]^2 = 16 \pi \alpha \pi^2 .\label{eq : cons_alpha}\end{aligned}\ ] ] the hamiltonian constraint gives an elliptic equation for + { \psi^4\over 6\alpha^2 } \bigl [ 2\alpha\rho\bar{\omega } + \beta^\rho{}_{,\rho } - \beta^z{}_{,z } \bigr]^2 \quad\qquad\qquad \nonumber\\ \qquad \ , = \ , - 16 \pi \left ( \pi^2 + \phi_{,\rho}{}^2 + \phi_{,z}{}^2 \right ) % - 6 \left ( \rho^2 \left(\rho \bs\right)_{,\rho } \right)_{,\rho^3 } - 6 \bigl ( \rho^2 \left(\rho \bs\right)_{,\rho } \bigr)_{,\rho^3 } \nonumber\\ % - 2 \left ( \left ( \rho \bs \right)_{,\rho } \right)^2 % - 2 \left ( \rho \bs \right)_{,zz } % - 2 \left ( \left ( \rho \bs \right)_{,z } \right)^2 . - 2 \bigl ( \left ( \rho \bs \right)_{,\rho } \bigr)^2 - 2 \bigl ( \rho \bs \bigr)_{,zz } - 2 \bigl ( \left ( \rho \bs \right)_{,z } \bigr)^2 .\label{eq : cons_psi}\end{aligned}\ ] ] the and momentum constraints , which provide elliptic equations that we use to solve for and , are % - \frac{8}{3 } \alpha \bar{\omega } % -\frac{2 \alpha \rho } { 3 } \left [ % 6 \bo \frac{\psi_{,\rho } } { \psi } % + \bo_{,\rho } % + 3 \bo \left(\rho \bs \right)_{,\rho } % \right]=0 , \frac{2}{3}\beta^{\rho}{}_{,\rho\rho } + \beta^{\rho}{}_{,zz } + \frac{1}{3 } \beta^z{}_{,z\rho } -\frac{2 \alpha \rho } { 3 } \bigl [ 6 \bo \frac{\psi_{,\rho } } { \psi } + \bo_{,\rho } + 3 \bo \left(\rho \bs \right)_{,\rho } \bigr ] - \frac{8}{3 } \ , \alpha \bar{\omega } \qquad\qquad\qquad\qquad\qquad\qquad \nonumber\\ - \frac{2}{3 } \bigl [ \frac { \alpha_{,\rho } } { \alpha } - 6 \frac { \psi_{,\rho } } { \psi } \bigr ] \bigl ( \beta^\rho{}_{,\rho } - \beta^z{}_{,z } \bigr ) - \bigl [ \frac { \alpha_{,z } } { \alpha } - 6 \frac { \psi_{,z } } { \psi } - \left ( \rho\bs \right)_{,z } \bigr ] \bigl ( \beta^\rho{}_{,z } + \beta^z{}_{,\rho } \bigr ) \ ; = \ ; - 32\pi { \alpha \over \psi^2 } \pi_{,\rho } , \label{eq : cons_betarho}\end{aligned}\ ] ] and % \left ( \beta^\rho{}_{,z } + \beta^z{}_{,\rho } \right ) % \nonumber\\ % + \left [ 2 \left ( \rho \bs \right)_{,z } % - \frac{4}{3 } \left ( \frac { \alpha_{,z } } { \alpha } % - 6 \frac { \psi_{,z } } { \psi } \right ) % \right ] % \left ( \beta^z{}_{,z } - \beta^\rho{}_{,\rho } \right ) % - \frac{2 \alpha\rho}{3 } \left ( %6 \bo \frac{\psi_{,z}}{\psi } % + \bo_{,z } %\right ) % \nonumber\\ % + 32\pi { \alpha \over \psi^2 } \pi_{,z } % - 2\alpha(\bs_{,z})\rho^2\bo=0 .\beta^z{}_{,\rho\rho } + \frac{4}{3}\beta^z{}_{,zz } - \frac{1}{3 } \beta^{\rho}{}_{,z\rho } - \frac{2 \alpha\rho}{3 } \bigl [ 6 \bo \frac{\psi_{,z}}{\psi } + \bo_{,z } + 3 \bo ( \rho \bs)_{,z } \bigr ] \qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\quad \nonumber\\ + \frac{4}{3 } \ , \bigl [ \frac { \alpha_{,z } } { \alpha } - 6 \frac { \psi_{,z } } { \psi } - \frac{3}{2 } \left ( \rho \bs \right)_{,z } \bigr ] \bigl ( \beta^\rho{}_{,\rho } - \beta^z{}_{,z } \bigr ) + \bigl [ \frac { 2 \alpha } { \psi^6 } \bigl ( \frac{\rho \psi^6}{\alpha } \bigr)_{,\rho^2 } \ , + \left ( \rho \bs \right)_{,\rho } \bigr ] \bigl ( \beta^\rho{}_{,z } + \beta^z{}_{,\rho } \bigr ) \ ; = \ ; - 32\pi { \alpha\over \psi^2 } \pi_{,z } .\label{eq : cons_betaz}\end{aligned}\ ] ] the definition of 
in eq .( [ eq : bo_def ] ) gives an evolution equation for ( where the overdot denotes partial differentiation with respect to ) {,\rho}. - \bigl ( { \beta^{\rho } \over \rho } \bigr)_{,\rho}.\ ] ] the evolution equation for is + 64\pi{\alpha\over \psi^4 } \rho ( \phi_{,\rho^2})^2 .\label{eq : evol_bo}\end{aligned}\ ] ] we also have an evolution equation for , which we optionally use instead of the hamiltonian constraint ( [ eq : cons_psi ] ) to update the definition of and the wave equation for give and + \frac{\alpha}{\psi^2 } \left [ \left ( \rho \bs \right)_{,\rho } \phi_\rho + \left ( \rho \bs \right)_{,z } \phi_z \right ] .\label{eq : evol_pi}\end{aligned}\ ] ] to complete the specification of our system of equations , we need to supply boundary conditions . in our cylindrical coordinate system , where ranges from to and ranges from to , we have two distinct boundaries : the physical outer boundary at , , and ; and the axis , at .historically , the axis presented a stability problem in axisymmetric codes .we solve this problem by enforcing regularity on the axis , and , as described in section [ sec : implementation ] , adding numerical dissipation to evolved fields .the regularity conditions can be obtained by inspection of the equations in the limit , or more formally , by transforming to cartesian coordinates and demanding that components of the metric and matter fields be regular and single valued throughout .garfinkle and duncan have further proposed that in order to ensure smoothness on the axis , one should use quantities that have either even or odd power series expansions in as , but which do not vanish faster than .it is interesting that the quantities which we found to work best also obey this requirement . as discussed earlier , the particular choice of and as fundamental variables was partly motivated by regularity concerns .the results are _ , ( 0,z , t ) & = & 0 [ eq : bc_alpha ] + _ , ( 0,z , t ) & = & 0 + ^z_,(0,z , t ) & = & 0 + ^(0,z , t ) & = & 0 [ eq : bc_betarho ] + ( 0,z , t ) & = & 0 + ( 0,z , t ) & = & 0 + _ , ( 0,z , t ) & = & 0 + _ , ( 0,z , t ) & = & 0 at the outer boundary , we enforce asymptotic flatness by requiring _r ( r , t ) & = & 1 + + o(r^-2 ) [ eq : obc_alpha ] + _ r ( r , t ) & = & 1 + + o(r^-2 ) + _ r ^z(r , t ) & = & + o(r^-2 ) + _ r ^(r , t ) & = & + o(r^-2 ) , [ eq : obc_betarho ] for undetermined functions , , , , and .these latter relations are converted to mixed ( robin ) boundary conditions ( see appendix a for details ) and then are imposed at the outer boundaries of the computational domain : , , and .we have also experimented with the use of dirichlet conditions on and at the outer boundaries ( specifically and there ) , and have found that these work about as well as the robin conditions . for the scalar field , we assume that near the outer boundary we can approximate the field as purely radially outgoing , and require ( r ) _ , t + ( r ) _ , r = 0 . for scalar field configurations far from spherical symmetry , this approximation suffers and reflections are relatively large .however , in general , the reflections do not grow and are somewhat damped . for the other two evolved quantities , and , we use this same naive condition for lack of any better , more physically motivated conditions .while this condition proves to be stable with damped reflections , a better condition is sought and this issue remains under investigation .for initial conditions , we are free to set . 
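the outgoing and robin conditions described above translate into simple one - sided updates of the outermost grid values . the sketch below shows the idea on a 1d radial grid : an outgoing ( sommerfeld - type ) condition of the form d_t ( r u ) + d_r ( r u ) = 0 for radiative fields , and a robin condition implied by a 1/r falloff for the metric functions . the first - order discretization and the function names are assumptions made for the illustration ; the production code applies analogous mixed conditions on the rho and z faces of the 2d grid .

....
import numpy as np

def outgoing_boundary_value(u, r, dt):
    """Return the updated outermost value of u from a Sommerfeld-type
    outgoing condition d_t(r u) + d_r(r u) = 0, discretized with a
    one-sided difference in r and a forward Euler step in t."""
    dr = r[-1] - r[-2]
    ru_new = r[-1] * u[-1] - dt * (r[-1] * u[-1] - r[-2] * u[-2]) / dr
    return ru_new / r[-1]

def robin_boundary_value(f, r, f_inf=1.0):
    """Impose the Robin condition implied by f ~ f_inf + c/r at the outer
    boundary: d_r[ r (f - f_inf) ] = 0, i.e. fix the outermost value so
    that r (f - f_inf) matches its value one grid point in."""
    return f_inf + r[-2] * (f[-2] - f_inf) / r[-1]
....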
once the free data is chosen , we then use the constraint and slicing equations to determine .specifically , we define a general pulse shape g_x(,z ) = a_x [ g_x ] characterized by six parameters and then choose initial data of the form ( 0,,z ) & = & g_(,z ) ( 0,,z ) & = & g_(,z ) ( 0,,z ) & = & g_(,z ) ( 0,,z ) & = & 0 .for , these pulses are gaussian , spherical shells centered at with radius and pulse width . for and ,the pulses are spherical .the factor of in the initial data for and ensures the correct behavior on axis for regularity .for the evolutions presented here , we let so that the initial configuration represents a moment of time symmetry .we note , however , that we are also able to generate and evolve approximately ingoing initial data .in this section we describe the equation and technique we use to search for apparent horizons ( ahs ) within spatial slices of the spacetime ( see for descriptions of some of the methods available to find ahs in axisymmetry ) .we restrict our search to isolated , simply connected ahs . in axisymmetry , such an ah can be described by a curve in the plane , starting and ending on the axis at .we define the location of the ah as the level surface , where and the ah is the outermost , marginally trapped surface ; hence , we want to find an equation for such that the outward null expansion normal to the surface , is zero .to this end , we first construct the unit spatial vector , normal to then , using and the hypersurface normal vector , we construct future - pointing outgoing ( ) and ingoing ( ) null vectors as the normalization of the null vectors is ( arbitrarily ) .the outward null expansion is then the divergence of projected onto using the definition of the extrinsic curvature , and substituting ( [ null ] ) into ( [ exp1 ] ) , we arrive at the familiar form for the null expansion when written in terms of adm variables note that because the normalization of is arbitrary , so ( to some extent ) is that of .the above normalization is chosen so that measures the fractional rate of change of area with time measured by an observer moving along . substituting ( [ fs2 ] ) and ( [ sadef ] ) into ( [ exp2 ] ) , and setting , we are left with an ordinary differential equation for .this equation takes the following form , where a prime denotes differentiation with respect to : is a rather lengthy function of its arguments , non - linear in and ; for brevity we do not display it explicitly .all of the metric functions and their gradients appearing in ( [ rpp_eqn ] ) are evaluated along a given curve of integration , and hence are implicitly functions of . ranges from to , and regularity of the surface about the axis requires .integration of ( [ rpp_eqn ] ) therefore proceeds by specifying at ( for instance ) , and then `` evolving '' until either , or diverges at some value of . if an ah exists , and assuming is inside the ah , then the ah can be found by searching for the ( locally ) unique ) augmented with the conditions .we want the outermost of these surfaces . ]initial value such that integration of ( [ rpp_eqn ] ) ends at , with and finite . for larger than ( outside the ah ) , the integration will end at with , indicating an irregular point on the surface ; similarly , for slightly smaller than ( inside the ah ) the integration will end with . 
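the shooting integration just described can be sketched as follows : the second - order ode for the surface radius is integrated away from the axis with a midpoint ( second - order runge - kutta ) step , starting from a trial radius with vanishing first derivative for regularity , and the integration is stopped when the surface becomes irregular . the right - hand side function stands in for the lengthy expression not displayed in the text , and the termination thresholds are illustrative ; bracketing the regular value of the trial radius and bisecting on it , as described next , completes the finder .

....
import math

def shoot(r0, rhs, n_steps=400, r_prime_max=1.0e8):
    """Integrate R''(theta) = rhs(theta, R, Rp) from theta = 0 to pi with a
    midpoint (RK2) step, starting from R(0) = r0, R'(0) = 0 (regularity on
    the axis).  Returns (theta_reached, R, Rp); stopping before theta = pi,
    or reaching it with R' far from zero, signals an irregular surface."""
    h = math.pi / n_steps
    theta, R, Rp = 0.0, r0, 0.0
    for _ in range(n_steps):
        # midpoint step for the first-order system (R, R')
        k1R, k1Rp = Rp, rhs(theta, R, Rp)
        Rm, Rpm = R + 0.5 * h * k1R, Rp + 0.5 * h * k1Rp
        k2R, k2Rp = Rpm, rhs(theta + 0.5 * h, Rm, Rpm)
        R, Rp = R + h * k2R, Rp + h * k2Rp
        theta += h
        if R <= 0.0 or abs(Rp) > r_prime_max:   # surface has become irregular
            break
    return theta, R, Rp
....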
therefore ,if we can find a reasonable bracket about the unknown , we can use a bisection search to find .currently , we find a bracket to search by testing a set of initial points , equally spaced in at intervals of . this seems to work well in most situations , and the search is reasonably fast .we use a second - order runge - kutta method to integrate equation ( [ rpp_eqn ] ) .the metric functions appearing in are evaluated using bilinear interpolation along the curve .in this section we describe the numerical code that we have written to solve the equations listed in section [ sec : coords_and_vars ] .some details are deferred to appendix [ app : solver ] .we use a uniform grid of size points in by points in , with equal spacing in the and directions .the value of a function at time level and location within the grid , corresponding to coordinate , is denoted by . for the temporal discretization scale we use , where is the courant factor , which for the type of differencing we employ ,should be less than one for stability ; typically we use . the hyperbolic equations ( [ eq : evol_bs]-[eq : evol_pi ] )are discretized using a second - order accurate crank - nicholson type scheme , whereby we define two time levels , and , and obtain our finite difference stencils by expanding in taylor series about .this gives the following second - order accurate approximation to the first derivative of with respect to time second - order accurate approximations to functions and spatial derivative operators at are obtained by averaging the corresponding quantity , , in time : thus , after discretization of the evolution equations using ( [ ref_cn_time ] ) and ( [ ref_cn_spat ] ) , function values are only referenced at times and , even though the stencils are centered at time .specific forms for all the finite difference stencils that we use can be found in appendix [ app : solver ] .we add kreiss - oliger dissipation to the evolution of equations of and ( in addition to during partially constrained evolution ) , as described in appendix [ app : solver ] . to demonstrate that this is _ essential _ for the stability of our numerical scheme , we compare in fig .[ fig : diss ] the evolution of from simulations without and with dissipation , but otherwise identical . the elliptic equations ( [ eq : cons_alpha]-[eq : cons_betaz ] )are solved using brandt s fas multigrid ( mg ) algorithm , described in some detail in appendix [ app : solver ] .there are no explicit time derivatives of functions in these equations , and we discretize them at a single time level ( _ i.e. _ we do not apply the crank - nicholson averaging scheme for the elliptics ) .we use either a fully constrained evolution , solving for and using the constraint equations and slicing condition , or a partially constrained evolution where instead of using the hamiltonian constraint to update , we use the evolution equation ( ) .partially constrained evolution has proven to be useful due to the occasional failure of the mg solver in the strong - field regime ( _ i.e. 
_ close to black hole formation ) .use of the evolution equation for ( rather than the hamiltonian constraint ) circumvents this problem in many instances ; however , in certain brill - wave dominated spacetimes , free evolution of is not sufficient to ensure convergence of the mg process .we are currently working to make the mg solver more robust in these situations .the code is written in a combination of rnpl ( rapid numerical prototyping language ) and fortran 77 .the hyperbolic equations are implemented in rnpl , which employs a point - wise newton - gauss - seidel iterative relaxation scheme to solve these equations , while the mg solver is implemented in fortran ( see appendix [ app : solver ] for more mg details ) .a pseudo - code description of the time - stepping algorithm used is as follows : .... a ) as an initial guess to the solution at time t+dt , copy variables from t to t+dt b ) repeat until ( residual norm < tolerance ) : 1 : perform 1 newton - gauss - seidel relaxation sweep of the evolution equations , solving for the unknowns at time t+dt 2 : perform 1 mg vcycle on the set of elliptic equations at time t+dt end repeat .... for the residual norm used to terminate the iteration we use the the infinity norm of the residuals of all updated unknowns .in this section , we describe some of the tests we have performed to check that we are solving the correct set of equations .the first test consists of checking the equations against those derived with a computer algebra system . by inputting the metric andcoordinate conditions , the computer derived equations can then be subtracted from our equations and simplified . by finding that the differences simplify to 0, we can conclude that two sets of equations agree . for diagnostic purposes and as tests of the equations and of their discretization , we compute several quantities during the numerical evolution .the first is the adm mass where the integral is evaluated on a flat 3-space , _ i.e. _ with metric .the spatial 3-metric is that from our curved space solution , but has its indices raised and lowered with the flat metric .integrating around the boundaries of our numerical grid , the normal vectors are and .after some algebra , the adm mass becomes m_adm & = & _ z_max ^4 d & & - _z_min ^4 d & & + _ _ max ^4 dz . a second set of quantities we calculate are the -norms of the residuals of the evolution equations for the extrinsic curvature ( [ eq : k_ab_dot ] ) , which we denote .because we do not directly evolve individual components of the extrinsic curvature , these residuals will not be zero ; however , they _should _ converge to zero in the limits as the discretization scale , and the outer boundary positions .note that we include these last conditions because it is only in the limit that our outer boundary conditions are fully consistent with asymptotic flatness .the convergence properties of our code are measured by computing the convergence factor , , associated with a given variable , , obtained on grids with resolution , and via q_u = .in particular , for the case of ( second order ) convergence , we expect as .the first set of tests we present here are comparisons of and from the evolution of spherically symmetric initial data to the corresponding functions computed by a 1d spherically symmetric code , the details of which are presented in appendix [ app : spherical ] . in general , the results from the two codes are in good agreement . 
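the convergence factor defined above can be computed directly from three solutions obtained on grids with spacings 4h , 2h and h , restricted to their common points . the sketch below assumes vertex - centred grids whose points nest ( so that every other point of the finer grid coincides with a point of the coarser one ) ; the toy check at the end uses a manufactured o(h^2 ) error term and should therefore return a value close to 4 .

....
import numpy as np

def convergence_factor(u_4h, u_2h, u_h):
    """Q = ||u_4h - u_2h|| / ||u_2h - u_h||, with all differences evaluated
    on the points of the coarsest (4h) grid; Q -> 4 as h -> 0 indicates
    second-order convergence."""
    d_coarse = u_4h - u_2h[::2]            # difference on the 4h grid points
    d_fine = (u_2h - u_h[::2])[::2]        # same comparison one level down
    return np.linalg.norm(d_coarse) / np.linalg.norm(d_fine)

# toy check: a solution with a pure O(h^2) error term gives Q ~ 4
x4, x2, x1 = (np.linspace(0.0, 1.0, n) for n in (17, 33, 65))
exact = lambda x: np.sin(np.pi * x)
approx = lambda x, h: exact(x) + h ** 2 * np.cos(np.pi * x)
print(convergence_factor(approx(x4, 1 / 16), approx(x2, 1 / 32), approx(x1, 1 / 64)))
....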
a sample comparison is illustrated in fig . [ fig : spherical ] , which shows the scalar field obtained with the 1d code as well as two radial slices of the corresponding solution calculated using the axisymmetric code . note , however , that we do not expect _ exact _ agreement in the limit for a fixed outer boundary location , as the `` rectangular '' boundaries of the axisymmetric code are , in general , incompatible with precise spherical symmetry . in the second series of tests , we examine evolutions of brill waves and non - spherical scalar pulses . figs . [ fig : brill_10][fig : oblate_20 ] show results from two typical initial data sets , each computed using two distinct outer boundary positions . each figure plots ( a ) the adm mass , ( b ) the -norm of the residual of the component of the evolution equation for the extrinsic curvature , and ( c ) the convergence factor of , as functions of time ( the convergence factors for other functions exhibit similar behavior , and so for brevity we do not show them ) . here , one expects to see an improvement of the results , namely trends toward mass conservation early on , a zero residual , and a convergence factor of 4 , in the limits and . after energy has reached the outer boundary , and to a lesser extent before ( as is evident in the scalar field example in figs . [ fig : oblate_10 ] and [ fig : oblate_20 ] ) , we fail to get consistency with the evolution equation ( [ eq : k_ab_dot ] ) as , for _ fixed _ . this is a measure of the inaccuracy of our outer boundary conditions ( [ eq : obc_alpha]-[eq : obc_betarho ] ) , though the trends suggest that we _ do _ achieve consistency in the limit . finally , we show some results of a simulation of black hole formation from the collapse of a spherically symmetric distribution of scalar field energy .
again , by looking at spherically symmetric collapse we can compare with the 1d code ( obtaining the same level of agreement as seen in the example in fig .[ fig : spherical ] ) .however , here we want to show the behavior of our coordinate system ( in particular the maximal slicing ) in the strong - field regime , which demonstrates the need to incorporate black hole excision techniques and/or adaptive mesh refinement ( amr ) before attempting any serious investigation of physics with this code .[ fig : bh_plots ] shows plots of the adm mass estimate ( [ madm ] ) , an estimate of the black hole mass , where is the area of the apparent horizon , and the minimum value of the lapse as a function of time from the simulation .[ fig : bh_psi ] shows the conformal factor at several times , in the central region of the grid .maximal slicing is considered _ singularity avoiding _ , because as the singularity is approached in a collapse scenario , the lapse tends to 0 , as demonstrated in fig .[ fig : bh_plots ] .this effectively freezes the evolution inside the black hole , though it causes a severe distortion in the slices as one moves away from the black hole .this particular coordinate pathology is evident in fig .[ fig : bh_psi ] .recall from the ( [ hab_def ] ) that determines proper length scales in the and directions ; thus the rapid growth with time of shown in fig .[ fig : bh_psi ] means that a given coordinate area represents increasing proper area .furthermore , the increase in magnitude of in the strong - field regime ( which happens even when black holes do not form , and in non - spherical scalar field and brill wave evolution , though not to the same extent as shown in fig .[ fig : bh_psi ] ) implies that our effective numerical resolution decreases in those regions , as some feature of the solution with a given characteristic size will span less of the coordinate grid .thus , in the end , even though maximal slicing may prevent us from reaching a physical singularity , the `` grid - stretching '' effect is just as disastrous for the numerical code , preventing any long - term simulation of black hole spacetimes . for these reasons we will add black hole excision techniques and amr before exploring physics with this code ; our efforts in this regard are well underway , and will be described elsewhere .we have described a gravitational evolution model which evolves axisymmetric configurations of gravitational radiation and/or a scalar field .a thorough battery of tests confirms that the correct equations are being solved . in particular, we have provided evidence that the code is second - order convergent , consistent and conserves mass in the limit where the outer boundary position goes to infinity .the unigrid code described here is the first step towards our long - term goal of studying a range of interesting theoretical and astrophysical phenomena in axisymmetry .these include gravitational collapse of various matter sources and gravitational waves , the corresponding critical phenomena at the threshold of black hole formation , head - on black hole collisions and accretion disks . 
to this end, we need to include support for angular momentum and additional matter fields in the code , as well as to add additional computational and mathematical infrastructure adaptive mesh refinement , black hole excision and the capability of running in parallel on a network of machines .all of these projects are under development , and results will be published as they become available .another goal of this project is to provide a platform from which to develop computational technology for 3d work . in particular , we see development of amr in axisymmetry as a precursor to its incorporation in 3d calculations . likewise ,accurate and stable treatment of boundary conditions presents a continual challenge in numerical relativity , and it is possible that we can develop an effective treatment of boundaries in axisymmetry that will generalize to the 3d case .the authors would like to thank david garfinkle for helpful discussions .mwc would like to acknowledge financial support from nserc and nsf phy9722068 .fp would like to acknowledge nserc , the izaak walton killam fund and caltech s richard chase tolman fund for their financial support .ewh and sll would like to acknowledge the support of nsf grant phy-9900644 .sll acknowledges support from grant nsf phy-0139980 as well as the financial support of southampton college .ewh also acknowledges the support of nsf grant phy-0139782 . the majority of the simulations described here were performed on ubc s * vn * cluster ( supported by cfi and bckdf ) , and the * maci * cluster at the university of calgary ( supported by cfi and asra )in this appendix we briefly mention some aspects of our multigrid ( mg ) routine , and list the set of finite difference approximations that we use .the constraint equations ( [ eq : cons_alpha]-[eq : cons_betaz ] ) are four elliptic equations which , for a fully constrained system , must be solved on every time slice ( _ i.e. _ spatial hypersurface ) . as such , it can be expected that the time taken by a given evolution will be dominated by the elliptic solver and hence we look for the fastest possible solver .currently , multigrid methods are among the most efficient elliptic solvers available , and here we have implemented a standard full approximation storage ( fas ) multigrid method with v cycling ( see ) to solve the four nonlinear equations simultaneously .( when using the evolution equation for in a partially constrained evolution , we use the same multigrid routine described here , except we only solve for the three quantities and during the v - cycle ; is then simply considered another `` source function '' . )a key component of the mg solver is the relaxation routine that is designed to _ smooth _ the residuals associated with the discretized elliptic equations .we use point - wise newton - gauss - seidel relaxation with red - black ordering ( see fig . 
[fig : redblack ] ) , simultaneously updating all four quantities at each grid - point during a relaxation sweep .in addition to its use for the standard pre - coarse - grid - correction ( pre - cgc ) and post - cgc smoothing sweeps , the relaxation routine is also used to _ solve _ problems on the coarsest grid .we use half - weighted restriction to transfer fields from fine to coarse grids and ( bi)linear interpolation for coarse to fine transfers .we generally use 3 pre - cgc and 3 post - cgc sweeps per -cycle , and likewise normally use a single -cycle per crank - nicholson iteration .one complicating factor here is the treatment of the boundary conditions eqs .( [ eq : bc_alpha]-[eq : bc_betarho ] ) and eqs .( [ eq : obc_alpha]-[eq : obc_betarho ] ) . in accordance with general multi - grid practice, we view the boundary conditions as logically and operationally distinct from the interior equation equation .the outer boundary conditions can generally be expressed as rx [ ap : bc ] where .taking the derivative of ( [ ap : bc ] ) with respect to , we arrive at the differential form that is applied on the outer boundaries of the computational domain ( ) : x - x _ , - z x_,z = 0 . on the -axis ( ) , the conditions eqs .( [ eq : bc_alpha]-[eq : bc_betarho ] ) take one of the following two forms : a_i , = 0 [ eq : qfit ] or a_i = 0 [ eq : zfit ] equations of the former form are discretized using an backwards difference approximation to the derivative . the interior and boundary differential equations are solved in tandem via the multigrid approach : 1 .the residual is smoothed using some number of relaxation sweeps . for the interior , eqs .( [ eq : cons_alpha]-[eq : cons_betarho ] ) are relaxed using red - black ordering as discussed above . after each call of this relaxation routine , a second routine that `` relaxes '' the boundary pointsis called .2 . for quantities restricted from a fine to a coarse grid ,discrete forms of ( [ eq : qfit ] ) and ( [ eq : zfit ] ) are applied during the -cycle . at the other boundaries ,straightforward injection is used .the key idea here is to ensure that the boundary relaxation process does not substantially impact the smoothness of the interior residuals , because it is only for smooth residuals that a coarsened version of a fine - grid problem can sensibly be posed .finally , in table [ table : diff ] , we show all of the difference operators we use to convert the differential equations listed in [ sec : coords_and_vars ] to finite difference form , using the crank - nicholson scheme described in section [ sec : implementation ] .in addition , as discussed in section [ sec : implementation ] , we use kreiss - oliger dissipation to maintain smoothness in the evolved fields .specifically , we add the kreiss - oliger filter to discretized evolution equations _t a^n = _ t f^n ( ) by replacing the crank - nicholson time difference operator with : ^_t a^n = _ t f^n ( ) empirically , we find that a value of generally keeps our fields acceptably smooth ..finite difference operators and their correspondence to differential operators .here , is an arbitrary grid function defined via , where and are the spatial and temporal grid spacings , respectively . 
denotes either of the two spatial coordinates or , with the dependence of on the other suppressed .the parameter represents a user - specifiable `` amount '' of kreiss - oliger dissipation .[ cols="<,<,<",options="header " , ]one simple test of the code compares the results for spherically symmetric initial data with the output of a code which explicitly assumes spherical symmetry . herewe present the equations for this 1d code .the spacetime metric is : ds^2 = -(^2 + ^4 ^ 2)dt^2 + 2 ^ 4dt dr + ^4 ( dr^2 + r^2 d^2 ) , where , and , are functions of and , is the line element on the unit 2-sphere , and is the radial component of the shift vector ( i.e. ) . adopting maximal slicing to facilitate direct comparison to the axisymmetric code , we have k^i_j = diag(k^r_r(r , t),0,0 ). then a sufficient set of equations for the coupled einstein - massless - scalar system is + + 2 + ( k^r_r)^2 ^5 & = & 0 [ eq : ss_psi ] + ( k^r_r ) + 3 k^r_r + & = & 0 [ eq : ss_krr ] + - & = & 0 [ eq : ss_alpha ] ( ) & = & [ eq : ss_beta ] + & = & + [ eq : ss_ph ] + & = & ( + ) [ eq : ss_phi ] + & = & + & & - .[ eq : ss_pi ] here dot and prime denote derivatives with respect to and , respectively .the evolution equations ( [ eq : ss_ph]-[eq : ss_pi ] ) are discretized using an crank - nicholson scheme .( [ eq : ss_psi ] ) and ( [ eq : ss_krr ] ) are similarly discretized using finite difference approximations , then solved iteratively for and at each time step .once , , and have been determined , and are found from finite - difference versions of eqs .( [ eq : ss_alpha ] ) and ( [ eq : ss_beta ] ) .the code is stable and second - order convergent . 1 l. l. smarr , ph.d .dissertation , university of texas at austin , unpublished ( 1975 ) k. r. eppley , ph.d .dissertation , princeton university , unpublished ( 1977 ) s. l. shapiro and s. a. teukolsky , `` collisions of relativistic clusters and the formation of black holes '' , phys . rev .* d45 * , 2739 ( 1992 ) p. anninos , d. hobill , e. seidel , l. smarr and w. suen , `` the collision of two black holes , '' phys .lett . * 71 * , 2851 ( 1993 ) [ gr - qc/9309016 ] .j. baker , a. abrahams , p. anninos , s. brandt , r. price , j. pullin and e. seidel , `` the collision of boosted black holes '' , phys .* d55 * , 829 ( 1997 ) p. anninos and s. brandt `` head on collision of two unequal mass black holes , '' phys .lett . * 81 * , 508 ( 1998 ) [ gr - qc/9806031 ] .t. nakamura , `` general relativistic collapse of axially symmetric stars leading to the formation of rotating black holes , '' _ prog . of theor .physics _ * 65 * , 1876 - 1890 ( 1981 ) .r. f. stark and t. piran , `` gravitational wave emission from rotating gravitational collapse , '' phys .. lett . * 55 * , 891 ( 1985 ) .t. nakamura , k. oohara and y. kojima , `` general relativistic collapse of axially symmetric stars '' , prog .. suppl . *90 * , 13 ( 1987 ) m. shibata , `` axisymmetric simulations of rotating stellar collapse in full general relativity criteria for prompt collapse to black holes '' prog .phys . * 104 * , 325 ( 2000 ) .a. m. abrahams , g. b. cook , s. l. shapiro and s. a. teukolsky , `` solving einstein s equations for rotating space - times : evolution of relativistic star clusters , '' phys .d * 49 * , 5153 ( 1994 ) .shapiro and s.a .teukolsky , `` formation of naked singularities : the violation of cosmic censorship , '' phys .* 66 * , 994 ( 1991 ) .d. garfinkle and g. c. duncan , `` numerical evolution of brill waves , '' phys .rev . * d 63 * , 044011 ( 2001 ) [ gr - qc/0006073 ] . m. 
alcubierre , s .brandt , b .brugmann , d .holz , e .seidel , r .takahashi , j .thornburg `` symmetry without symmetry : numerical simulation of axisymmetric systems using cartesian grids '' int .j. mod .d * 10 * , 273 ( 2001 ) .s. brandt , j.a .font , j.m .ibanez , j. masso and e. seidel , `` numerical evolution of matter in dynamical axisymmetric black hole spacetimes '' , comput .. commun . * 124 * 169 ( 2000 ) s. brandt and e. seidel , `` the evolution of distorted rotating black holes i : methods and tests '' , phys . rev .* d52 * , 856 ( 1995 ) s. brandt and e. seidel , `` the evolution of distorted rotating black holes ii : dynamics and analysis '' , phys . rev . *d52 * , 870 ( 1995 ) s. brandt and e. seidel , `` the evolution of distorted rotating black holes iii : initial data '' , phys . rev .* d54 * , 1403 ( 1996 ) a. m. abrahams and c. r. evans , `` trapping a geon : black hole formation by an imploding gravitational wave ., '' phys .d. * 46 * , 4117 ( 1992 ) .a. m. abrahams and c. r. evans , `` critical behavior and scaling in vacuum axisymmetric gravitational collapse , '' phys .* 70 * , 2980 ( 1993 ) .k. maeda , m .sasaki , t .nakamura , s .miyama `` a new formalism of the einstein equations for relativistic rotating systems . ''phys . * 63 * , 719 ( 1980 ) .k. maeda , `` [ ( 2 + 1)+1]-dimensional representation of the einstein equations , '' in _ proceedings of third marcel grossmann meeting on general relativity _ , ed .hu ning , ( science press , 1983 ) .r. geroch , `` a method for generating solutions of einstein s equations , '' j. math .* 12 * , 918 ( 1971 ) .misner , k.s .thorne and j.a .wheeler , _ gravitation _ , new york , w.h .freeman and company ( 1973 ) j.m .bardeen and t. piran , `` general relativistic axisymmetric rotating systems : coordinates and equations '' , phys . rep .* 96 * 205 ( 1983 ) .t. appelquist , a. chodos , and p.g.o .freund , `` modern kaluza - klein theories , '' addison - wesley , menlo park , 1987 ) .j. thornburg , `` finding apparent horizons in numerical relativity '' , phys . rev .* d54 * , 4899 ( 1996 ) m. alcubierre , s. brandt , b. brugmann , c. gundlach , j. masso , e. seidel and p. walker , `` test beds and applications for apparent horizon finders in numerical relativity '' , class .* 17 * 2159 ( 2000 ) kreiss , h .- o . , and oliger , j. , `` methods for the approximate solution of time dependent problems '' , global atmospheric research program publication no . 10 , world meteorological organization , case postale no . 1, ch-1211 geneva 20 , switzerland ( 1973 ) .software available from http://laplace.physics.ubc.ca/members/matt/rnpl/index.html we use waterloo maple along with a tensor package written by one of the authors ( mwc ) .a. brandt , `` multilevel adaptive solutions to boundary value problems , '' _ math . of computation_ * 31 * , 333 - 390 ( 1977 ). a. brandt , `` guide to multigrid development , '' in _ lecture notes in mathematics _ * 960 * , 220 - 312 , ( springer - verlag , new york , 1982 ) .choptuik , `` a study of numerical techniques for radiative problems in general relativity , '' ph.d .thesis , the university of british columbia , unpublished ( 1982 ) . a subtle point hereconcerns the `` relaxation '' occurring on the boundaries .the finite - difference equations on the boundary yield algebraic equations which determine the given fields there `` exactly . 
'' the subtlety arises because these algebraic conditions couple neighboring boundary points ; thus , even though we solve these equations exactly , the residual ( as computed after the entire boundary has been `` relaxed '' ) will not be identically zero . we find that this procedure suffices to keep the residuals sufficiently smooth over both the interior and boundary domains .
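to make the smoothing step concrete , the sketch below applies point - wise gauss - seidel relaxation with red - black ordering to the model problem laplacian(u ) = f on a uniform 2d grid with dirichlet boundaries . this is only a stand - in : in the actual solver each sweep is a simultaneous newton update of the four coupled , nonlinear elliptic variables at every grid point , with the boundary conditions relaxed separately as described above .

....
import numpy as np

def relax_red_black(u, f, h, sweeps=3):
    """Point-wise Gauss-Seidel relaxation with red-black ordering for
    the model problem  Laplacian(u) = f  on a uniform 2d grid with
    Dirichlet boundary values held fixed."""
    ni, nj = u.shape
    for _ in range(sweeps):
        for colour in (0, 1):                 # red points first, then black
            for i in range(1, ni - 1):
                for j in range(1, nj - 1):
                    if (i + j) % 2 != colour:
                        continue
                    u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                      u[i, j - 1] + u[i, j + 1] -
                                      h * h * f[i, j])
    return u
....

in a full v - cycle this smoother would be called a few times before restriction to the coarser grid and again after the coarse - grid correction is interpolated back , mirroring the pre - cgc and post - cgc sweeps described above .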
we present a new numerical code designed to solve the einstein field equations for axisymmetric spacetimes . the long term goal of this project is to construct a code that will be capable of studying many problems of interest in axisymmetry , including gravitational collapse , critical phenomena , investigations of cosmic censorship , and head - on black hole collisions . our objective here is to detail the ( 2 + 1)+1 formalism we use to arrive at the corresponding system of equations and the numerical methods we use to solve them . we are able to obtain stable evolution , despite the singular nature of the coordinate system on the axis , by enforcing appropriate regularity conditions on all variables and by adding numerical dissipation to hyperbolic equations .
one of the differences between laboratory and space laser interferometer gravitational wave detectors is that , in the laboratory , the two arms of the interferometer that is used to detect changes in the spacetime geometry are maintained at precisely equal lengths .therefore , when the signals from the two perpendicular arms are combined , the laser phase noise in the differenced signals cancels exactly . in space , a laser interferometer gravitational wave detector such as lisa will have free - flying spacecraft as the end masses , and precise equality of the arms is not possible .other methods must then be used to eliminate laser phase noise from the system .these methods involve a heterodyne measurement for each separate arm of the interferometer and data processing that combines data from both arms to generate a signal that is free of laser phase noise . in a previous paper ( ,hereafter called paper i ) , the sensitivity curves for space detectors using these techniques were generated by explicitly calculating transfer functions for signal and noise , as modified by the data processing algorithms . while the algorithms have been shown , in principle , to eliminate the laser phase noise in the detectors regardless of the lengths of the two arms ,the transfer functions have previously only been calculated for the case of equal arms . in this paperwe extend the calculation of the noise and signal transfer functions to the case of arbitrarily chosen armlengths .one of the goals of paper i was to provide a uniform system for evaluating the sensitivity of various configurations of space gravitational detectors .this paper extends that capability to configurations in which the armlengths are significantly different from each other .for example , a proposal by bernard schutz at the 2000 lisa symposium in golm , germany , suggested a modification to the current lisa design in which a fourth spacecraft is inserted in the middle of one of the legs of the interferometer to produce two independent interferometers , each with one leg half the length of the other ( see fig . [ fig : detector ] ) .the goal of such a design was to be able to cross - correlate the independent interferometers to search for the stochastic cosmic gravitational wave background .using the analysis presented here , one will be able to determine the sensitivity of such an interferometer and judge the scientific value of the proposed modification .as in paper i , the analysis begins with the response of a round - trip electromagnetic tracking signal to the passage of a gravitational wave , as derived by estabrook and wahlquist . a gravitational wave of amplitude will produce a doppler shift in the received frequency , relative to the outgoing signal with fundamental frequency .the shift is given by , \label{dopplersignal}\end{aligned}\ ] ] where is the one - way light travel time between spacecraft , is the angle between the line connecting the spacecraft and the line of sight to the source , and is a principal polarization angle of the quadrupole gravitational wave .it is desirable to work in frequency space , so is written in terms of its fourier transform .if the doppler record is sampled for a time then is related to its fourier transform by where the normalization factor is used to keep the power spectrum roughly independent of time . 
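The normalization remark above can be checked numerically: scaling the Fourier amplitude by 1/sqrt(T) (equivalently, the power by 1/T), which is the usual convention for this purpose, gives a spectral level that does not drift with record length. The sketch below uses illustrative sampling parameters and white noise, not LISA values.

```python
import numpy as np

# Sketch of the normalization remark above: with a 1/sqrt(T) factor on the
# Fourier amplitude (assumed convention), the power spectrum of a stationary
# white "Doppler" record does not drift with the record length T.
rng = np.random.default_rng(0)
fs = 1.0                                    # samples per second
for T in (10_000.0, 100_000.0):             # two record lengths, in seconds
    n = int(T * fs)
    dt = 1.0 / fs
    x = rng.normal(0.0, 1.0, n)             # white noise record
    X = dt * np.fft.rfft(x)                 # approximates the continuous transform
    power = np.abs(X) ** 2 / T              # 1/sqrt(T) on the amplitude -> 1/T on power
    print(f"T = {T:9.0f} s   mean spectral level = {power[1:].mean():.3f}")
# Both runs give a level near dt * variance = 1.0, independent of T.
```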
using this definition of the fourier transform , the frequency shift of eq .( [ dopplersignal ] ) can be written as e^{i \omega t } d\omega \ , \label{dopplersignal2}\end{aligned}\ ] ] where .the quantity that is actually read out in a laser interferometer tracking system is phase , so eq . ( [ dopplersignal2 ] ) is integrated to find the phase in cycles in paper i , a strain - like variable was formed by dividing the in eq .( [ phasesignal ] ) by and the analysis was done using this variable . since both arms had roughly the same length in paper i and carried nearly the same frequency , there was only a scale difference between using and using as the observable , and linear combinations of were the same as linear combinations of .however , when the two armlengths are different , this is no longer the case , and one must be careful as to what is taken to be the observable for use in noise - cancelling data analysis . in the laser phase - noise - cancellation algorithms that will be presented in sectionii , it is _ relative phase _ and _ not strain _ that can be combined to create laser - noise - free signals . to understand how this arises ,consider a case where laser signals in two arms are phase - locked to each other , with as the frequency of the master laser in the first arm and as the frequency in the second arm , with as the ratio of the two frequencies .then a phase noise excursion in the first arm will produce a phase noise excursion in the second arm .thus it will be linear combinations of that will allow the two noise terms to cancel .therefore , in this paper , the gravitational wave observable in the arm is defined to be \frac{1}{\omega } e^{i \omega t } \ , \label{zsignal}\end{aligned}\ ] ] where eq .( [ dopplersignal2 ] ) has been used to expand and where arbitrary constant phases have been set to zero in the integration .it should be noted that is a different observable than the strain variable that was labelled in paper i. it should also be noted that , as it is now defined , has units of time , so eq . ( [ zsignal ] )gives the _ time delay _ in seconds produced by the passage of a gravitational wave through the detector .tinto and armstrong originally showed that the preferred signal for purposes of data analysis is not the traditional michelson combination ( difference of both arms ) , but rather a new combination , given in the time domain by assumes that the lasers in the end spacecraft are phase - locked to the signals they receive from the central spacecraft .there is a form for that does not make this assumption and that can thereby be converted to interferometers centered on the other spacecraft in the constellation .however , the sensitivity we derive using this form will be valid for the more general form as well , and will therefore apply to signals formed with any spacecraft as vertex . 
] where is the data stream from the interferometer arm , composed of the signal of interest ( given by eq.([zsignal ] ) ) and the combined noise spectra in each of the interferometer arms , .the armlengths are taken to be unequal , with armlength in the arm .this combination is devoid of laser phase noise for all values of the two armlengths and to determine the sensitivity using the variable , it is necessary to establish a relationship between the amplitude of a gravitational wave incident on the detector and the size of the signal put out by the instrument .the noise in the detector will limit this sensitivity , and must also be included in the analysis .the part of containing the gravitational wave signal is was called in the limit where . ] : the transfer function , which connects the spectral density of the instrument output , with the spectral density in frequency space , is defined via where the bar over the in eq .( [ transferequation ] ) indicates an average over source polarization and direction .the gravitational wave amplitude spectral density is defined by where is the fourier amplitude defined in eq .( [ fouriertransform ] ) , so that the mean - square gravitational wave strain is given by similarly , the instrumental response is defined such that where the brackets indicate a time average . in the next section ,the transfer function from the gravitational wave amplitude to the instrument signal is worked out .let us take the ratio of the two armlengths in the interferometer to be an adjustable parameter , , taking on values between and , such that and .the average power in the part of which contains the gravitational wave signal is given by where is defined by eq .( [ lambdadef ] ) . using the definition of from eq .( [ zsignal ] ) this can be expanded to yield \ , \label{power}\ ] ] where \ , \label{t1 } \\t_{2}(u ) & = & \cos^{2}(2 \psi_{2 } ) \cdot 4 \sin^{2}(u ) \left [ \mu_{2}^{2}\left(1 + \cos^{2}(\beta u ) - 2 \cos(\beta u)\ \cos(\beta u \mu_{2 } ) \right ) \right .\nonumber \\ & - & \left . 2 \mu_{2 } \sin(\beta u)\ sin(\beta u \mu_{2 } ) + \sin^{2}(\beta u ) \right ] \ , \label{t2 } \\t_{3}(u ) & = & \cos ( 2 \psi_{1 } ) \\cos ( 2 \psi_{2 } ) \cdot 4 \sin(u ) \sin(\beta u )\ \eta(u ) \ , \label{t3}\end{aligned}\ ] ] with , , and where \left [ \cos ( \beta u ) - \cos ( \beta u \mu_{2 } ) \right ] \mu_{1 } \mu_{2 } \nonumber \\ + \left [ \sin ( u ) - \mu_{1 } \sin ( u \mu_{1 } ) \right ] \left [ \sin ( \beta u ) - \mu_{2 } \sin ( \beta u \mu_{2 } ) \right ] \label{eta}\end{aligned}\ ] ] has been defined for convenience. the propagation angles and principal polarization angles are defined with respect to the arm using the geometric conventions of paper i. 
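The laser-noise cancellation claimed for this combination is easy to verify numerically. The sketch below uses the standard unequal-arm Michelson form from the time-delay-interferometry literature, X(t) = [s1(t) - s2(t)] - [s1(t - 2 L2) - s2(t - 2 L1)], in a simplified single-laser model; the paper's exact equation is not preserved in this extraction and may differ in sign and index conventions, but the cancellation mechanism is the same.

```python
import numpy as np

# Sketch (single-laser toy model, conventions assumed as in the TDI literature):
# a common laser phase noise p(t) enters both round-trip measurements and is
# cancelled exactly by the unequal-arm combination
#     X(t) = [s1(t) - s2(t)] - [s1(t - 2*L2) - s2(t - 2*L1)],
# for arbitrary one-way light times L1 != L2.  Secondary noises and the
# gravitational-wave signal are omitted for clarity.
rng = np.random.default_rng(1)
n, L1, L2 = 200_000, 500, 137            # samples; unequal one-way delays in samples

p = np.cumsum(rng.normal(size=n))        # laser phase noise (random walk)

def delay(x, d):
    """x(t - d) for integer d, zero-padded at the start."""
    return np.concatenate([np.zeros(d), x[:len(x) - d]])

s1 = delay(p, 2 * L1) - p                # round-trip phase in arm 1
s2 = delay(p, 2 * L2) - p                # round-trip phase in arm 2

michelson = s1 - s2
X = (s1 - s2) - (delay(s1, 2 * L2) - delay(s2, 2 * L1))

burn = 2 * (L1 + L2)                     # discard the start-up transient
print("rms laser noise, plain Michelson:", michelson[burn:].std())
print("rms laser noise, X combination  :", np.sqrt(np.mean(X[burn:] ** 2)))
```

The plain Michelson difference retains laser noise at the level set by the armlength mismatch, while X is zero to machine precision, for any choice of the two delays.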
the expression for the power in the detector , as given by eq .( [ power ] ) , is a complicated function of frequency and of the orientation between the propagation vector of the gravitational wave and the interferometer , and represents the antenna pattern for the detector .it is customary to describe the average sensitivity of the instrument by considering the isotropic power , obtained by averaging the antenna pattern over all propagation vectors and all polarizations ) is linearly polarized .as in paper i , the averaging procedure over all linearly polarized states produces the same response function as averaging over a more general elliptically polarized state with an appropriately weighted distribution ] .using the definition of from eq .( [ transferequation ] ) , with the average isotropic power computed using the geometric averaging procedure of paper i with eqs .( [ t1 ] - [ eta ] ) , the gravitational wave transfer function is found to be \right .\nonumber \\ & + & \left .2 \sin^{2}(u ) \left [ \left(1 + \cos^{2 } ( \beta u ) \right ) \left(\frac{1}{3 } - \frac{2}{(\beta u)^{2 } } \right ) + \sin^{2 } ( \beta u ) + \frac{4}{(\beta u)^{3 } } \sin ( \beta u ) \ \cos ( \beta u ) \right ] \right .\nonumber \\ & - & \left .\frac{1}{\pi } \sin(u ) \sin(\beta u ) \int_{0}^{2\pi } d\epsilon \int_{-1}^{+1 } d\mu_{1 } \\left(1 - 2 \sin^{2 } \alpha \right ) \eta(u , \theta_{1 } , \theta_{2 } ) \right\ } \ . \label{transferfunction}\end{aligned}\ ] ] the remaining integral can be evaluated using simple numerical techniques , after relating the angular variables as described in paper i , where : and here is the opening angle of the interferometer , and is the inclination of the gravitational wave propagation vector to the plane of the interferometer .the complete gravitational wave transfer function is plotted in fig .[ fig : gwtransfers ] for ( `` equal arm '' ) and fig . [ fig : gwtransfers2 ] for ( `` unequal arm '' ) examples . as may be seen in the figure , the low - frequency ( small ) response of the detector to a gravitational wave signal is four orders of magnitude lower for the detector than for the equal arm detector , implying that the ( amplitude ) signal will be two orders of magnitude lower the detected signal level is proportional to the length of the shortest arm . however , once the period of the gravitational wave falls inside the light - time of the longest arm , , the equal - arm detector ( ) response begins to fall off while the unequal - arm detector ( ) response is roughly flat up to a period corresponding to the light - time in the shortest arm .the dropoff at low frequencies is a result of the fact that the variable is formed by subtracting each from itself , offset by the light - time in the opposite arm .thus , in the low - frequency limit , the two copies of the signal strongly overlap and the signal is almost entirely subtracted away . for equal arms , the response of the detector is likewise subtracted to zero when an integer number of wavelengths fits in the arm length , as seen in the high - frequency portion of the curve . 
for the unequal - arm case , this does not occur , because the subtraction of two versions of the signal in each arm are done at different light - times in the two arms , so whatever period signal cancels in one arm will not cancel in the other .however , as may be seen in the case , the response drops sharply to zero at ( equivalent to hz for lisa armlength of m ) , where exactly one wavelength fits into the short arm and exactly one hundred fit into the long arm . however , the response of the detector s signal is not the whole story .the ability of a detector to detect a signal depends on both the signal in the detector and on the competing noise .as we shall see in the next section , when the variable is formed , the noise in each arm is likewise subtracted away in most of the places where the signal is lost ( _ e.g. _ , at low frequency ) , so the ratio of signal to noise remains high .the noise sources for lisa may be divided into categories in two different ways .first , a noise source may be either one - way ( affecting only the incoming or the outgoing signal at a spacecraft , but not both ) or two - way , ( affecting both incoming and outgoing signals at the same time ) .a one - way noise source will have a transfer function of , since there are spacecraft in each leg contributing equal amounts of such noise . as , where is the noise spectral density of a single type in a single spacecraft ] the transfer function for two - way noise sources , however , will be more complicated due to the internal correlation .a single two - way noise fluctuation in the central spacecraft of the interferometer will affect the incoming signal immediately , and then , a round - trip light - time later , will affect the measured signal again in the same way . in the time domain , the effect in the arm of a fluctuation will be .the transfer function for this time - delayed sum is .if an end spacecraft has noise that affects both incoming and outgoing beams , it will affect them at almost the same time , with no delay , giving a transfer function contribution of 4 .the noise transfer function for a single arm for a two - way noise source is therefore examples of one - way noise are thermal noise in the laser receiver electronics or a mechanical change in the optical pathlength in the outgoing laser signal before it gets to the main telescope optics .examples of two - way noise are parasitic forces on the accelerometer proof mass or thermal changes in the optical pathlength in the main telescope .a second way in which noise sources may be classified is by how they scale when there is a change in armlength in the interferometer .the first type of noise in this classification scheme is what we call `` position noise '' , in which the size of the noise in radians of phase is independent of the length of the arm .accelerometer noise and thermal noise in the laser electronics are examples of position noise .the second type of noise is what we call `` strain noise '' , in which the size of the noise scales with armlength .examples of strain noise include shot noise and pointing jitter ( if it is dominated by low power in the incoming beacon ). position noises may be either one - way or two - way , but we can think of no two - way strain noise sources .the transfer functions that connect the noise in the instrument to the variable depend on the type of noise .we begin by considering the noise terms in eq .( [ xsignal ] ) : \ . 
\label{noisepart}\ ] ] we then go to the frequency domain , squaring and time - averaging to obtain the power spectrum . \ , \label{noisepower}\ ] ] where cross - terms ( _ e.g. _ , ) have been neglected under the assumption that noise in the two arms will be independent and uncorrelated .note that is the power spectrum in the long arm ( length ) and is the power spectrum in the short arm ( length ) . since the noise in the detectors includes different types , with different transfer functions , it is not possible to write a single transfer function giving the response of the variable to noise , so let us consider the various noise categories one at a time .we first consider position noise , for which .then , using eq .( [ noisepower ] ) , we find the transfer function for one - way position noise to be where , as we noted above , there is a factor of representing the noise from the two spacecraft in each arm .two - way position noise must include the transfer function from eq .( [ twoway ] ) , giving \ .\label{twowaytransfer}\ ] ] strain noise scales with armlength , and is hence smaller in the shorter arm , so that .its transfer function is therefore where the factor of for the two spacecraft has again been included . when , the transfer functions for strain noise and one - way position noise ( eqs .[ positiontransfer ] and [ shottransfer ] ) are identical and have zeros at , where is zero or a positive integer .these are exactly the places where the transfer function for gravitational wave signal ( fig .[ fig : gwtransfers ] ) has its zeros . when , the situation is more complicated .both and share the term which will go to zero at and at multiples of .the terms in and have their zeros at multiples of the lower frequency , . in , this term will be larger than term at low frequencies , since near , , while . in , these terms will be equal in the low - frequency limit , because of the factor that multiplies the term .thus , in the low frequency limit , the strain noise transfer function will be times the the one - way position noise transfer function .when , the transfer function for one - way position noise will have sharp drops at multiples of , down to the level of its term .these behaviors are shown in fig .[ fig : noisetransfer ] and fig .[ fig : noisetransfer2 ] . the signal tonoise ratio is the ratio of the signal power in the detector to the noise power in the detector : where , , and are the spectra of strain noise and one - way and two - way position noise , respectively , and is the gravitational wave transfer function given by eq.([transferfunction ] ) . setting and solving for yields the instrument sensitivity curve as defined in paper i : where is the gravitational wave transfer function , given by eq .( [ transferfunction ] ) .figures [ fig : equalarmsense ] and [ fig : unequalarmsense ] show the sensitivity curves , computed using eq .( [ hf ] ) , for and respectively .the noise values used are taken to be the lisa target design values ( computed as described in paper i ) .the shot noise and acceleration noise levels are set at the standard lisa values . in addition, a flat one - way position noise spectrum is assumed at the lisa shot - noise value . 
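The two-way noise discussion above rests on the transfer function of the time-delayed sum n(t) + n(t - 2 L_i), whose power transfer is |1 + exp(-2 i w L)|^2 = 4 cos^2(w L). The quick FFT check below verifies that factor; the sampling choices are illustrative, and this is not the full noise-budget calculation.

```python
import numpy as np

# Sketch: power transfer function of the time-delayed sum n(t) + n(t - 2L)
# discussed above.  Analytically |1 + exp(-2 i w L)|^2 = 4 cos^2(w L); here the
# factor is recovered from white noise with an FFT.
rng = np.random.default_rng(2)
n_samp, L = 2 ** 18, 64                    # number of samples; one-way delay in samples
x = rng.normal(size=n_samp)
y = x + np.roll(x, 2 * L)                  # periodic shift keeps the comparison exact

X, Y = np.fft.rfft(x), np.fft.rfft(y)
w = 2.0 * np.pi * np.fft.rfftfreq(n_samp)  # angular frequency in radians per sample
measured = np.abs(Y) ** 2 / np.abs(X) ** 2
analytic = 4.0 * np.cos(w * L) ** 2
print("max deviation from 4 cos^2(wL):", np.max(np.abs(measured - analytic)))
```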
also plotted in figures [ fig : equalarmsense ] and [ fig : unequalarmsense ] are sensitivity curves representing each of the three components of the total noise , taken one at a time .as may be seen in fig .[ fig : unequalarmsense ] , the low - frequency sensitivity for unequal arms , being set by the two - way position noise in the accelerometer , is degraded over the equal - arm case by the ratio of the two arms . in other words ,the sensitivity at lowest frequencies is set by the sensitivity of the shortest arm . at middle and high frequencies , the situation is more complicated . if the dominant noise is strain noise , then the sensitivity is independent of in this frequency range . however ,if the dominant noise is position noise , then the sensitivity curve at high frequencies will rise in proportion to , though its flat floor will extend to higher frequency , from the of the equal - arm case to when the armlength ratio is .the implications of these results for mission design are obvious .if the armlengths are not equal , the low - frequency sensitivity is degraded by a factor , the ratio of the armlengths .if the high - frequency noise can be guaranteed to be strain noise , even in the shorter arm , then the high - frequency sensitivity is unaffected by the unequal arms .if the noise at high frequency is dominated by position noise , then the high frequency sensitivity is degraded by the factor , but the sensitivity remains flat up to a frequency , where it turns over and joins the strain noise curve .thus , as long as the position - noise sources can be kept well below the shot noise and other strain - noise contributions , a change in armlength ratio from strict equality will not degrade the high - frequency portion of the sensitivity curves .as the length of one of the arms is shortened , small position noise sources will become important and eventually dominate .let us consider the example of schutz s 4-spacecraft configuration ( fig .[ fig : detector ] ) .since this configuration will have , the low - frequency sensitivity curve will be a factor of 2 higher ( hence less sensitive ) .the current error budget for lisa assumes that the high - frequency portion of the window is dominated by position noise approximately three times the shot noise .if this remains the case , then the high - frequency section of the curve will likewise be a factor of 2 higher up to a frequency twice as high as the lisa sensitivity `` knee '' at , at which point it would turn up and join the current lisa high - frequency ramp .the shot noise is determined by the power of the laser and by the size and efficiency of the optics , and there is nothing beyond brute - force improvements in these parameters that will lower the shot noise .the contributions to position noise , on the other hand , are due to optics quality , the attitude control system , brownian noise in the electronics , thermal noise in the optical path length , _etc_. these are more complex and are amenable to reduction by careful or innovative engineering design .if these noise sources can be reduced to a fraction of the shot noise , not only will the lisa noise floor be reduced by a factor of 4 , but the schutz modification will have high - frequency performance that is undiminished by the reduction of the length of one arm . 
finally , we describe a totally unfeasible mission design that is nevertheless interesting for instructive purposes .let us consider a two - spacecraft `` interferometer '' , where one of the spacecraft contains a fiber optic delay line , of length 5 km , that acts as the second arm of the interferometer .if the distance between the two spacecraft is km , we will have .the use of the variable will eliminate laser phase noise , exactly as it does in arms that are more nearly equal .a rigidly - attached reflector at the far end of the fiber - optic line would eliminate accelerometer noise , but , of course , would replace it with thermal fluctuation in the optical path length in the fiber .however , a concatenation of fibers with well - chosen thermal pathlength coefficients could produce a fiber tuned to have a coefficient very near zero .this , combined with multilevel thermal isolation , could keep this noise source very small .the key to the sensitivity of this configuration is the position noise .if a way can be found to reduce position noise to less than of the lisa shot noise , then this two - spacecraft interferometer would have the same sensitivity as a conventional three - spacecraft interferometer .s. l. l. acknowledges support for this work under lisa contract number po 1217163 , and the nasa epscor program through cooperative agreement ncc5 - 410 . the work of w. a. h. was supported in part by nsf grant no .phy-0098787 and the nasa epscor program through cooperative agreement ncc5 - 579 .r. w. h. was supported by nasa grant nags5 - 11469 and ncc5 - 579 .
unlike ground - based interferometric gravitational wave detectors , large space - based systems will not be rigid structures . when the end - stations of the laser interferometer are freely flying spacecraft , the armlengths will change due to variations in the spacecraft positions along their orbital trajectories , so the precise equality of the arms that is required in a laboratory interferometer to cancel laser phase noise is not possible . however , using a method discovered by tinto and armstrong , a signal can be constructed in which laser phase noise exactly cancels out , even in an unequal arm interferometer . we examine the case where the ratio of the armlengths is a variable parameter , and compute the averaged gravitational wave transfer function as a function of that parameter . example sensitivity curve calculations are presented for the expected design parameters of the proposed lisa interferometer , comparing it to a similar instrument with one arm shortened by a factor of 100 , showing how the ratio of the armlengths will affect the overall sensitivity of the instrument .
consider the following generative model for independent component analysis ( ica ) where the elements of the non - gaussian source vector are mutually independent with zero mean , is an unknown nonsingular mixing matrix , is an observable random vector ( signal ) , and is a shift parameter . let be the whitened data of , where .an equivalent expression of model ( [ ica ] ) in -scale is where is the mixing matrix in -scale .it is reported in literature that prewhitening the data can make the ica inference procedure more stable . in the rest of the discussion, we will work with model ( [ ica.z ] ) in estimating the mixing matrix based on the prewhitened .it is easy to transform back to the original -scale via .note that both and are unknown , and there exists the identifiability problem .this can be seen from the fact that for any nonsingular diagonal matrix . to make identifiable , we assume the following conventional conditions for : where is the identity matrix .it then implies that and which means that the mixing matrix in -scale is orthogonal .we will use notation to denote the space of orthogonal matrices in . note that, if is a parameter of model ( [ ica.z ] ) , so is .thus , to fix one direction , we consider , where consists of orthogonal matrices with determinant one .this set is called the special orthogonal group .the main purpose of ica is to estimate the orthogonal based on the whitened data , or equivalently , to look for a recovering matrix so that components in have the maximum degree of independence . in the latter case, provides an estimate of .we first briefly review some existing methods for ica .one idea is to estimate via _ minimizing the mutual information_. let be the joint probability density function of , and be the marginal probability density function of . the mutual information , denoted by , among random variables , is defined to be where and are the shannon entropy . ideally , if is properly chosen so that has independent components , then and , hence , .thus , via minimizing with respect to , it leads to an estimate of .another method is to estimate via _ maximizing the negentropy _ , which is equivalent to minimizing mutual information as described below .the negentropy of is defined to be where is a gaussian random vector having the same covariance matrix as ( hyvrinen and oja , 2000 ) .it can be deduced that where the second equality holds since , by , . moreover , as with , we have , which does not depend on .that is , the negentropy is invariant under orthogonal transformation .thus , minimizing the mutual information is equivalent to maximizing the negentropy .the negentropy , however , involves the unknown density . to avoid nonparametric estimation of , one can use the following approximation ( hyvrinen , 1998 ) via a non - quadratic contrast function , ^ 2,\label{ne_approx}\end{aligned}\ ] ] where is a random variable having the standard normal distribution . here can be treated as a measure of non - gaussianity , and minimizing the sample analogue of to search corresponds to the fast - ica ( hyvrinen , 1999 ) . another widely used estimation criterion for is via _ maximizing the likelihood_. 
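Before turning to the likelihood approach, the whitening step and the negentropy approximation of eq. (ne_approx) can be illustrated with a small simulation. The sketch below whitens a two-dimensional mixture and scans the contrast (E[G(w'z)] - E[G(nu)])^2 over candidate directions with G(u) = log cosh u, one common non-quadratic choice (the paper's specific contrast is not reproduced here); the contrast peaks near a separating direction.

```python
import numpy as np

# Sketch (assumed contrast G(u) = log cosh u): whitening followed by the
# negentropy approximation J(w) ~ (E[G(w'z)] - E[G(nu)])^2 for a 2-D mixture.
rng = np.random.default_rng(3)
n = 20_000
s = np.vstack([rng.laplace(size=n),                              # super-Gaussian source
               rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), n)])     # sub-Gaussian source
A = np.array([[2.0, 1.0], [1.0, 1.5]])                           # mixing matrix (arbitrary)
x = A @ s

# Whitening: z = Cov(x)^(-1/2) (x - mean)
xc = x - x.mean(axis=1, keepdims=True)
vals, vecs = np.linalg.eigh(xc @ xc.T / n)
z = vecs @ np.diag(vals ** -0.5) @ vecs.T @ xc

G = lambda u: np.log(np.cosh(u))
EG_gauss = np.mean(G(rng.normal(size=200_000)))                  # E[G(nu)], nu ~ N(0, 1)

def contrast(theta):
    w = np.array([np.cos(theta), np.sin(theta)])                 # unit vector
    return (np.mean(G(w @ z)) - EG_gauss) ** 2

thetas = np.linspace(0.0, np.pi, 361)
J = np.array([contrast(t) for t in thetas])
best = thetas[J.argmax()]
print(f"contrast peaks at {np.degrees(best):.1f} deg: J = {J.max():.4f}; "
      f"45 deg away: J = {contrast(best + np.pi / 4):.4f}")
```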
under model ( [ ica.z ] ) and by modeling with some known probability density function , the density function of takes the form since and hence .the mle - ica then searches the optimum via where is the kullback - leibler divergence ( kl - divergence ) , and is the empirical distribution of .possible choices of include for sub - gaussian models , and for super - gaussian models , where and are constants so that is a probability density function .it can be seen from ( [ likelihood.z ] ) that , for any row permutation matrix , we have .that is , we can estimate and identify only up to its row - permutation . as will become clear later that the above mentioned methods are all related to _ minimizing the kl - divergence _ , which is not robust in the presence of outliers .outliers , however , frequently appear in real data analysis , and a robust ica inference procedure becomes necessary .for the purpose of robustness , instead of the kl - divergence , mihoko and eguchi ( 2002 ) considers the _minimum -divergence _ estimation for ( -ica ) .the issues of consistency and robustness of -ica are discussed therein . on the other hand , the -divergence , which can be induced from -divergence ,is shown to be super robust ( fujisawa and eguchi , 2008 ) against data contamination .it is our aim in this paper to propose a unified ica inference procedure by minimum divergence estimation .moreover , due to the property of super robustness , we will focus on the case of -divergence and propose a robust ica procedure , called -ica .hyvrinen , karhnen and oja ( 2001 ) have provided a sufficient condition to ensure the validity of mle - ica under the orthogonality constraint of , in the sense of being able to recover all independent components .amari , chen , and cichocki ( 1997 ) studied necessary and sufficient conditions for consistency under a different constraint of , and this consistency result is further extended by mihoko and eguchi ( 2002 ) to the case of -ica . in this work , we also derive necessary and sufficient conditions for the consistency of -ica . in the limiting case ,our necessary and sufficient condition for the consistency of mle - ica is weaker than the condition stated in hyvrinen , karhnen and oja ( 2001 ) . to the best of our knowledge, this result is not explored in existing literature .some notation is defined here for the convenience of reference . for any ,let be the commutation matrix such that ; ( resp . ) means is strictly positive ( resp .negative ) definite ; and is the matrix exponential . note that for any nonsingular square matrix . for a lower triangular matrix with 0 diagonals, stacks the nonzero elements of the columns of into a vector with length .there exist matrices and such that and .each column vector of is of the form , , where is a vector with a one in the -th position and 0 elsewhere , and is the kronecker product . is the identity matrix and is the -vector of ones .the rest of this paper is organized as follows .a unified framework for ica estimation by minimum divergence is introduced in section 2 .a robust -ica procedure is developed in section 3 , wherein the related statistical properties are studied . a geometrical implementation algorithm for -icais further illustrated in section 4 . 
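For reference, the display equations referred to above as (likelihood.z) and (mle_ica_z) are not preserved in this extraction; in the standard orthogonal-ICA formulation they take the following form (a reconstruction from the surrounding text, so notation and additive constants may differ from the source):

```latex
% Standard orthogonal-ICA working density and MLE criterion, reconstructed from
% the surrounding text; notation may differ from the paper's equations.
\begin{aligned}
p(z; W) &= |\det W| \prod_{j=1}^{p} f_j(w_j^\top z)
          = \prod_{j=1}^{p} f_j(w_j^\top z),
          \qquad W = [w_1,\dots,w_p] \in \mathcal{SO}(p),\\
\widehat{W}_{\mathrm{mle}} &= \operatorname*{arg\,max}_{W \in \mathcal{SO}(p)}
      \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{p} \ln f_j(w_j^\top z_i)
   = \operatorname*{arg\,min}_{W \in \mathcal{SO}(p)}
      D_{\mathrm{KL}}\bigl(\widehat{p},\, p(\cdot\,; W)\bigr),
\end{aligned}
```

where the last equality holds up to a term that does not depend on W.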
in section 5 ,the issue of selecting value is discussed .numerical studies are conducted in section 6 to demonstrate the robustness of -ica .the paper is ended with a conclusion in section 7 .all the proofs are placed in appendix .in this section we introduce a general framework for ica by means of a minimum -divergence , which covers the existing methods reviewed in section 1 .the aim of ica is to search a matrix so that the joint probability density function for is as close to marginal product as possible .this aim then motivates estimating by minimizing a distance metric between and .a general estimation scheme for can be formulated through the following minimization problem where is a divergence function .different choices of will lead to different estimation criteria for ica . herewe will consider a general class of divergence functions , the -divergence ( murata et al . , 2004 ; eguchi , 2009 ) , as described below .the -divergence is a very general class of divergence functions .consider a strictly convex function defined on , or on some interval of where is well - defined .let be the inverse function of .consider which defines a mapping from to , where .define the -cross entropy by and the -entropy by .then the -divergence can be written as in the subsequent subsections , we will introduce some special cases of -divergence , which will lead to specific methods of ica . by taking the pair corresponding -divergence is equivalent to the kl - divergence . in this case, it can be deduced that where is the mutual information defined in ( [ mi ] ) .as described in section 1 that we conclude that the following criteria , minimum mutual information , maximum negentropy , and fast - ica , are all special cases of ( [ div_ica ] ) . on the other hand , observe that where is the joint probability density function of .if we consider the model , and if we estimate by its empirical probability mass function , minimizing ( [ mutual_mle ] ) is equivalent to mle - ica in ( [ mle_ica_z ] ) . in summary , choosing the kl - divergence covers minimum mutual information , maximum negentropy , fast - ica , and mle - ica .consider the convex set .take the pair the resulting -divergence defined on is calculated to be which is called -divergence ( mihoko and eguchi , 2002 ) , or density power divergence ( basu et al . , 1998 ) .note that if and only if for some . in the limiting case ,it gives the kl - divergence .if we replace in ( [ mle_ica_z ] ) by , it gives the -ica of mihoko and eguchi ( 2002 ) .the -divergence can be obtained from -divergence through a -volume normalization , where is defined the same way as ( [ beta_div ] ) with the plug - in , and where is some normalizing constant .here we adopt the following normalization , called the volume - mass - one normalization , it leads to .then , it can be seen that -divergence is scale invariant .moreover , if and only if for some .the -divergence , indexed by a power parameter , is a generalization of kl - divergence . in the limiting case , it gives the kl - divergence .it is well known that mle ( based on minimum kl - divergence ) is not robust to outliers . 
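The beta- and gamma-divergences introduced above are likewise missing their display equations here; their standard forms in the literature (Mihoko and Eguchi 2002; Fujisawa and Eguchi 2008) are reproduced below for reference, with the caveat that the paper's normalization may differ:

```latex
% Standard forms of the beta- and gamma-divergences from the literature; the
% paper's normalization may differ, since its display equations are not
% preserved here.
\begin{aligned}
D_\beta(g,f) &= \frac{1}{\beta}\int \bigl\{ g(x)^{\beta} - f(x)^{\beta} \bigr\} g(x)\,dx
              - \frac{1}{\beta+1}\int \bigl\{ g(x)^{\beta+1} - f(x)^{\beta+1} \bigr\}\,dx, \\
D_\gamma(g,f) &= \frac{1}{\gamma(1+\gamma)} \ln\!\int g(x)^{1+\gamma}\,dx
               - \frac{1}{\gamma} \ln\!\int g(x) f(x)^{\gamma}\,dx
               + \frac{1}{1+\gamma} \ln\!\int f(x)^{1+\gamma}\,dx.
\end{aligned}
```

In this form D_gamma(g, cf) = D_gamma(g, f) for any c > 0, which is the scale invariance exploited by the volume-mass-one normalization mentioned above, and both divergences reduce to the KL-divergence as the power parameter tends to zero.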
on the other hand ,the minimum -divergence estimation is shown to be super robust ( fujisawa and eguchi , 2008 ) against data contamination .hence , we will adopt -divergence to propose our robust -ica procedure .in particular , the main idea of -ica is to replace in ( [ mle_ica_z ] ) by .though the idea is straightforward , there are many issues need to be studied .detailed inference procedure and statistical properties of -ica are discussed in section 3 .the ica is actually a two - stage process .first , we need to whiten the data .the whitened data are then used for the recovery of independent sources . since the main purpose of this study is to develop a robust ica inference procedure ,the robustness for both data prewhitening and independent source recovery should be guaranteed . herewe will utilize the -divergence to introduce a robust prewhitening method called -prewhitening , followed by illustrating -ica based on the prewhitened data . in practice , the value for -divergence should also be determined . in the rest of discussion, we will assume is given , and leave the discussion of its selection to section [ sec.select_gamma ] .although prewhitening is always possible by a straightforward standardization of , there exists the issue of robustness of such a whitening procedure .it is well known that empirical moment estimates of are very sensitive to outliers . in mollah ,eguchi and minami ( 2007 ) , the authors proposed a robust -prewhitening procedure .in particular , let be the probability density function of -variate normal distribution with mean and covariance , and let be the empirical distribution based on data . with a given , mollah et al .( 2007 ) proposed the following minimum -divergence estimators and then suggested to use for whitening the data .interestingly , can also be derived from the minimum -divergence as at the stationarity of ( [ gamma.prewiten.obj ] ) , the solutions will satisfy where the robustness property of can be found in mollah et al .we call the prewhitening procedure the -prewhitening .the whitened data then enter the -ica estimation procedure .we are now in the position to develop our -ica based on the -prewhitened data . as discussed in section [ sec.gamma.div ] , the estimator is derived from where and is the working model for . since , which does not involve .thus , can be equivalently obtained via finally , the mixing matrix is estimated by .let and ^\top,\;\ ; { \rm where}\;\;\phi_j(y)=\frac{d}{dy}\ln f_j(y).\ ] ] we have the following proposition .[ prop.stationarity ] at the stationarity , the maximizer defined in ( [ gamma_ica_z0 ] ) will satisfy ^\top-\phi(\widehat w^\top z_i)\left[\widehat w^\top z_i\right]^\top\right\}=0.\end{aligned}\ ] ] from proposition [ prop.stationarity ] , it can be easily seen the robustness nature of -ica : the stationary equation is a weighted sum with the weight function .when , an outlier with extreme value will contribute less to the stationary equation . 
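A minimal numerical illustration of the down-weighting described in proposition (prop.stationarity) is given below. The working density (a sech-type super-Gaussian) and the exact weight form, taken here as the product of f_j(w_j' z_i) over j raised to the power gamma, are assumptions for the sketch rather than the paper's exact specification; the gamma = 0 row reproduces the uniform weighting of MLE-ICA discussed immediately below.

```python
import numpy as np

# Sketch of the down-weighting behind gamma-ICA's robustness.  Each observation
# is assumed to enter the stationary equation with weight prod_j f_j(y_ij)^gamma,
# evaluated at the recovered components y_i = W'z_i; the sech-type working
# density and the weight form are illustrative assumptions.
rng = np.random.default_rng(4)

def logf(y):                          # log of a sech-type density, up to a constant
    return -np.logaddexp(y, -y)       # = log(1 / (e^y + e^-y))

y_clean = rng.laplace(size=(1000, 2))      # recovered components, 1000 clean observations
y_out = np.array([[25.0, -30.0]])          # one gross outlier
Y = np.vstack([y_clean, y_out])

for gamma in (0.0, 0.2, 0.5):
    w = np.exp(gamma * logf(Y).sum(axis=1))
    w = w / w.sum()
    print(f"gamma = {gamma:.1f}: outlier weight / average clean weight = {w[-1] / w[:-1].mean():.2e}")
# gamma = 0 gives uniform weights (mle-ica); gamma > 0 suppresses the outlier.
```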
in the limiting case of , which corresponds to mle - ica , the weight becomes uniform and , hence , is not robust .a critical point to the likelihood - based ica method is to specify a working model for .a sufficient condition to ensure the consistency of mle - ica can be found in hyvrinen , karhnen and oja ( 2001 ) .here the ica consistency means recovery consistency .that is , an ica procedure is said to be recovery consistent if it is able to recover all the independent components .note that the consistency of mle - ica does not rely on the correct specification of working model , but only on the positivity of ] , where is some constant .we will deduce necessary and sufficient conditions such that -ica is recovery consistent .the main result is summarized below .[ thm.consistency ] assume the ica model ( [ ica.z ] ) .assume the existence of for ] for all and all .then , for , the associated -ica is recovery consistent if and only if where , , ] , and ] such that 1 .=0 ] for , for all pairs , .then , for every , the associated -ica can recover all independent components . to understand the meaning of condition ( b ), we first consider an implication of corollary [ cor.consistency ] in the limiting case of , which corresponds to the mle - ica . in this case ,condition ( a ) becomes , which is automatically true by the model assumption of .moreover , since , condition ( b ) becomes + e[\phi_k(s_k)s_k-\phi_k'(s_k)]>0,\quad \mbox{for all pairs , }.\label{consistency.mle}\end{aligned}\ ] ] a sufficient condition to ensure the validity of ( [ consistency.mle ] ) is >0,\quad\forall j , \label{consistency.mle2}\ ] ] which is the same condition given in theorem 9.1 of hyvrinen , karhnen and oja ( 2001 ) for the consistency of mle - ica .we should note that ( [ consistency.mle ] ) is a weaker condition than ( [ consistency.mle2 ] ) .in fact , from the proof of theorem [ thm.consistency ] , ( [ consistency.mle ] ) is also a necessary condition .one implication of ( [ consistency.mle ] ) is that , we can have at most one to be wrongly specified or at most one gaussian component involved , and mle - ica is still able to recover all independent components .this can also be intuitively understood that once we have determined directions in , the last direction is automatically determined . however, this fact can not be observed from ( [ consistency.mle2 ] ) which requires all to be correctly specified .we summarize the result for mle - ica below .assume the ica model ( [ ica.z ] ) .then , mle - ica is recovery consistent if and only if + e[\phi_k(s_k)s_k-\phi_k'(s_k)]>0 ] is a good candidate for possible values .we then apply the cross - validation method developed in section 5 to determine the optimal .the estimated values of are plotted in figure [ fig.cv ] , from which we select for -prewhitening and for -ica .the recovered pictures are placed in figures [ fig.lena_gamma_recover]-[fig.lena_fast_recover ] , where for each figure the first row is for scenario-1 and the second row is for scenario-2 . 
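The gamma-ICA fits reported in these experiments are computed, per section 4, by a gradient flow on the special orthogonal group. The sketch below shows a generic update of that type, not the paper's exact algorithm: the Euclidean gradient of the objective is projected onto the skew-symmetric tangent directions at W and the iterate moves along the geodesic W exp(eta A). For concreteness the objective is the orthogonal-ICA log-likelihood with an assumed sech-type working density (the gamma = 0 case).

```python
import numpy as np
from scipy.linalg import expm

# Generic sketch of gradient ascent along geodesics of SO(p) -- the type of
# update used by the geometrical algorithm referred to above, though not the
# paper's exact scheme.  Objective: orthogonal-ICA log-likelihood with a
# sech-type working density, so phi(y) = -tanh(y).
rng = np.random.default_rng(5)
p, n, eta, iters = 2, 5000, 1.0, 300

S = np.vstack([rng.laplace(size=n), rng.laplace(size=n)])     # independent sources
S = S / S.std(axis=1, keepdims=True)                          # unit variance
Q, _ = np.linalg.qr(rng.normal(size=(p, p)))                  # true orthogonal mixing matrix
Z = Q @ S                                                     # (approximately) whitened data

def avg_logdensity(W):
    Y = W.T @ Z
    return np.mean(-np.logaddexp(Y, -Y))                      # log f(y) up to a constant

W = np.eye(p)
for _ in range(iters):
    Y = W.T @ Z
    G = Z @ (-np.tanh(Y)).T / n        # Euclidean gradient of the objective w.r.t. W
    A = 0.5 * (W.T @ G - G.T @ W)      # projection onto skew-symmetric (tangent) directions
    W = W @ expm(eta * A)              # geodesic step on SO(p)

print("objective at the optimum :", round(avg_logdensity(W), 4))
print("W remains orthogonal     :", np.allclose(W.T @ W, np.eye(p)))
print("W^T Q (close to a signed permutation):\n", np.round(W.T @ Q, 2))
```

Because the step is a right-multiplication by the exponential of a skew-symmetric matrix, the iterate stays exactly on the special orthogonal group, with no re-orthogonalization needed.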
it can be seen that γ-ica is the best performer under both scenarios , while mle-ica and fast-ica cannot recover the source images well when the data are contaminated . this also demonstrates the applicability of the proposed γ-selection procedure . we find that mle-ica and fast-ica perform better when using filtered images , but they still do not reconstruct the images as well as γ-ica does . interestingly , γ-ica shows the reverse behaviour : its best reconstructed images come from the original images rather than the filtered ones . the filtering process , which aims to achieve robustness , replaces the original pixel value by the median of the pixel values over its neighborhood . therefore , while the filtering process alleviates the influence of outliers , it can also lose useful information : a pixel that is not contaminated will still be replaced by a certain median value during the filtering process . γ-ica , however , works on the original data , which retain all the information available , and then weights each pixel according to its observed value to achieve robustness . hence , a better performance for γ-ica based on the original images is reasonably expected .

in this paper , we introduce a unified estimation framework by means of minimum u-divergence . for reasons of robustness , we further focus on the specific choice of γ-divergence , which gives the proposed γ-ica inference procedure . statistical properties are rigorously investigated . a geometrical algorithm based on gradient flows on the special orthogonal group is introduced to implement our γ-ica . the performance of γ-ica is evaluated through synthetic and real data examples . there are still many important issues that are not covered by this work . for example , we only consider the full ica problem , i.e. , simultaneous extraction of all independent components , which is impractical when the dimension is large . it is of interest to extend our current γ-ica to a partial γ-ica . another issue of interest is related to the large-dimension , small-sample-size scenario . in this work , data have to be prewhitened before entering the γ-ica procedure , and prewhitening can be very unstable when the dimension is large ; how to avoid this difficulty is an interesting and challenging issue . tensor data analysis is now becoming popular and attracts the attention of many researchers . many statistical methods , including ica , have been extended to deal with such data structures by means of multilinear algebra techniques . an extension of γ-ica to a multilinear setting to cover tensor data analysis is also of great interest for future study .

eguchi , s. ( 2009 ) . information divergence geometry and the application to statistical machine learning . in _ information theory and statistical learning _ , f. emmert - streib and m. dehmer ( eds . ) , 309 - 332 . springer , berlin .

parmar , s. d. and unhelkar , b. ( 2009 ) . performance analysis of ica algorithms against multiple - sources interference in biomedical systems . _ international journal of recent trends in engineering _ , 2 , 19 - 21 .

[ figure : scatter plots for the true sources ; in each plot , the red dots are observations without contamination and the blue pluses are contaminated ones . ( a)-(e ) : uniform source ( scenario-1 ) ; ( f)-(j ) : scenario-2 source . ]

[ figure fig.cv : estimated cross - validation values as a function of γ for ( a ) γ-prewhitening and ( b ) γ-ica in the lena data analysis ; the red dot indicates the place where the minimum value is attained . ]
independent component analysis ( ica ) has been shown to be useful in many applications . however , most ica methods are sensitive to data contamination and outliers . in this article we introduce a general minimum -divergence framework for ica , which covers some standard ica methods as special cases . within the -family we further focus on the -divergence due to its desirable property of super robustness , which gives the proposed method -ica . statistical properties and technical conditions for the consistency of -ica are rigorously studied . in the limiting case , it leads to a necessary and sufficient condition for the consistency of mle - ica . this necessary and sufficient condition is weaker than the condition known in the literature . since the parameter of interest in ica is an orthogonal matrix , a geometrical algorithm based on gradient flows on special orthogonal group is introduced to implement -ica . furthermore , a data - driven selection for the value , which is critical to the achievement of -ica , is developed . the performance , especially the robustness , of -ica in comparison with standard ica methods is demonstrated through experimental studies using simulated data and image data . + * key words and phrases * : -divergence ; -divergence ; geodesic ; independent component analysis ; minimum divergence ; robust statistics ; special orthogonal group .
let be a random vector whose index set is . throughout the paper, we use the convention that when is a set of indices .suppose that are distributed independently according to the poisson distribution and consider the distribution of when is given .that is , follows the multinomial distribution with probability and total sum : let be subsets of satisfying .the main technical result of the paper is to provide an algorithm to evaluate the conditional expectation ,\ ] ] where is an arbitrary function of .we also consider a generalization where are distributed as a class of distributions including the multinomial distribution .this problem arises from the evaluation of the multiplicity - adjusted -value of scan statistics .we begin with stating the scan statistics in spatial epidemiology , which is a typical example in this framework . in a certain country, there are districts .let be the set of districts . for each district , we suppose that the number for event we are focusing on ( e.g. , number of patients with some disease ) as well as its expected frequency estimated from historical data are available . is assumed to be distributed according to the poisson distribution with parameter , , independently for .the parameter is known as the standardized mortality ratio ( smr ) .figure [ fig : yamagata - smr ] depicts an example of a choropleth map of smrs . for 44 districts ,we indicate the values of the smrs with different colors . a set of adjacent districts with smrs higher than other areas is called a _disease cluster_. the detection of such disease clusters is a major interest in spatial epidemiology . to detect a disease cluster ,we settle a family of subsets as candidates of a disease cluster , and define a scan statistic for each . is called the _scan window_. the choice of the scan windows is an important research topic in spatial epidemiology .when is larger than a threshold , say , we declare that is a disease cluster .as such a scan statistic , proposed the use of the likelihood ratio test ( lrt ) statistic for the null hypothesis ( constant ) against the alternative under the conditional distribution with given . the conditional inference ( inference under the conditional distribution ) leads to a similar test independent of the nuisance parameter .the expression of is given in section [ subsec : scan ] .when the hypothesis holds , the disease cluster is called a hotspot .this is a typical problem of multiple comparisons .the -value to assess the significance should be adjusted to incorporate the multiplicity effect .one method is to define the -value from the distribution of the maximum under : , \label{conditional}\ ] ] where .this is of the form of ( [ expectation ] ) .note that when is given , the distribution of is , where . in spatial epidemiology , the -valueis usually estimated using monte carlo simulation .although this is convenient and practical in most cases , when the true -value is very small , it is difficult to obtain a precise value even when the number of random numbers in the monte carlo is large .therefore , we have good reason to conduct exact computation according to the definition ( [ conditional ] ) by enumerating all possibilities .however , this is generally difficult because of the computational complexity . 
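To make the Monte Carlo route mentioned above concrete, the sketch below estimates the multiplicity-adjusted p-value P(max_C lambda_C >= lambda_obs) under the conditional multinomial null for a toy problem. The statistic lambda_C is written in the standard Kulldorff log-likelihood-ratio form (the paper's own expression for it is not reproduced here); the districts, expected frequencies, scan windows and observed counts are all hypothetical.

```python
import numpy as np

# Sketch: Monte Carlo estimate of the multiplicity-adjusted p-value
# P( max_C lambda_C >= lambda_obs ) under the conditional multinomial null.
# lambda_C uses the standard Kulldorff log-likelihood-ratio form; all inputs
# below are hypothetical.
rng = np.random.default_rng(6)

xi = np.array([4.0, 6.0, 3.0, 5.0, 7.0, 5.0])          # null expected frequencies
windows = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 1, 2), (3, 4, 5)]
x_obs = np.array([5, 12, 4, 3, 6, 4])                  # observed counts

N = x_obs.sum()
q = xi / xi.sum()                                      # multinomial cell probabilities under H0

def scan_stat(x):
    n_tot, best = x.sum(), 0.0
    for C in windows:
        idx = list(C)
        n_c, e_c = x[idx].sum(), n_tot * q[idx].sum()
        if n_c > e_c:
            llr = n_c * np.log(n_c / e_c)
            if n_tot > n_c:
                llr += (n_tot - n_c) * np.log((n_tot - n_c) / (n_tot - e_c))
            best = max(best, llr)
    return best

lam_obs = scan_stat(x_obs)
sims = rng.multinomial(N, q, size=20_000)
p_mc = np.mean([scan_stat(x) >= lam_obs for x in sims])
print(f"observed max LLR = {lam_obs:.3f},  Monte Carlo adjusted p-value ~ {p_mc:.4f}")
```

When the true p-value is very small, the number of replications needed for a precise Monte Carlo estimate becomes prohibitive, which is exactly the situation the exact recursion of the next sections is designed for.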
in the area of multiple testing comparisons ,many techniques to reduce computational time have been proposed .for example , demonstrated that in a change point problem , the computational time for the distribution of the maximum could be reduced using the markov property among statistics .see also , and references therein . in this paper, we develop a similar computation technique by taking advantage of the markov structure among scan statistics .the proposed method is based on the theory of a chordal graph , which is the foundation of the theory of graphical models .the chordal graph theory provides rich tools , in not only statistics but also many fields of mathematical science .in particular , in numerical analysis , this is a major tool used to conduct large - scale matrix computation ( e.g. , , ) .we also apply the chordal graph theory to retrieve the markov structure to reduce the computational time by using the recursive summation / integration technique .our technique is then similar to those used in the efficient computation of maximum likelihood estimator of graphical models ( e.g. , , , , , ) .the remainder of the paper is organized as follows .section [ sec : recursive ] provides the recurrence computational formula in the multinomial distribution under the assumption that the running intersection property holds .we evaluate the computational complexity , and show that the recurrence computation methodology works for a class of distributions including the poisson distribution ( that is , the conditional distribution is the multinomial ) .section [ sec : markov ] proposes a method to detect the markov property , and section [ sec : examples ] presents illustrative real data analyses to detect temporal and spatial clustering .section [ sec : summary ] briefly summarizes our results and discusses further research .in this section , we provide an algorithm to compute the expectation ( [ expectation ] ) when the sequence of subsets of has a nice property , which is called the running intersection property given in definition [ def : rip ] .we will consider the general case in the next section . in this section ,we use the symbol instead of .we start with case . for ,let and .suppose that is a random vector distributed according to the multinomial distribution with summation and probability .we consider the evaluation of the expectation \label{naive}\ ] ] exactly , where and are arbitrary functions .obviously , and are not independent ; there is an overlap unless is empty .moreover , there is a restriction that instead of the problem of evaluating the expectation , by a change of viewpoint , we first consider the problem of generating random numbers , where , . can be generated according to the following three steps : where and are independent given .correspondingly , we divide the expectation in ( [ naive ] ) into three parts as & = e^{(m_2,m_1)|n}\bigl [ e^{x_{r_2}|m_2}\bigl [ e^{x_{r_1}|m_1}[\chi_1(x_{b_1 } ) \chi_2(x_{b_2 } ) ] \bigr]\bigr ] \nonumber \\ & = e^{(m_2,m_1)|n}\bigl [ e^{x_{r_2}|m_2}\bigl [ \chi_2(x_{b_2 } ) e^{x_{r_1}|m_1}[\chi_1(x_{b_1 } ) ] \bigr]\bigr ] \nonumber \\ & = e^{(m_2,m_1)|n}\bigl [ e^{x_{r_2}|m_2 } [ \chi_2(x_{b_2 } ) \xi_1(m_1,x_{c_1 } ) ] \bigr ] , \label{e}\end{aligned}\ ] ] where \nonumber \\ & = e^{x_{r_1}|m_1}[\chi_1(x_{c_1},x_{r_1 } ) ] .\label{xi}\end{aligned}\ ] ] the procedure for the numerical computation of ( [ e ] ) is as follows .( i ) for possible values of and , compute in ( [ xi ] ) . 
here, the expectation is taken over according to ( [ xr1 ] ) with fixed .the results are stored in memory as a tabulation .( ii ) compute ] in ( [ xi ] ) , the variable runs over all nonnegative vectors whose sum is .because we see that the number of summations to prepare the table is in the expectations ] with by updating the tables , .we can evaluate the number of required summations as before .the result is summarized below without proof .[ thm : no_sum ] the number of summations required in the algorithm of theorem [ thm : recursive ] is where note that when , the value is needed for . whereas , when , only the value for is needed .this is the reason why the case is exceptional in ( [ order ] ) .the number of summations ( [ order ] ) is smaller than in the absent of this recursive computation technique : as shown , the proposed algorithm has an advantage in time complexity .whereas , it requires memory space to restore the tables . since in ( [ xi0 ] ) , and the elements of are arbitrary nonnegative integers such that , the size of the table for is however , in the process of computation of ( [ recursive ] ) for , the table can be deleted ( i.e. , the memory space can be released ) once is computed .therefore , the problem of space complexity does not matter in practice .although theorem [ thm : recursive ] is stated for the multinomial distribution , it also works for a class of distributions including the multinomial distribution .let and be index sets such that , and let , and again .suppose that is distributed independently for according to a certain distribution . under the conditional distribution where is given , if we pose additional conditions that is given , then and become independent . therefore , the three steps ( [ m2m1])([xr1 ] ) for generating random numbers , and the corresponding decomposition of the expectation continue to hold for an arbitrary distribution of . if explicit expressions for probability density functions of the conditional distributions , , , and are available , we have the the computation formula of the type ( [ e ] ). in general , if some explicit formula is available for the probability density function of the conditional distribution where the are subsets of such that , we can construct the recurrence computation formula of the type of theorem [ thm : recursive ] .the class of distributions having the explicit conditional density function of ( [ generalcond ] ) includes the normal distribution , the gamma distribution , the binomial distribution and the negative binomial distribution .the conditional distributions of ( [ generalcond ] ) corresponding to the above four distributions are the ( degenerate ) normal distribution , the dirichlet distribution , the multivariate hypergeometric distribution , and the dirichlet - multinomial distribution , respectively .for these distributions , the recurrence computation formula of theorem [ thm : recursive ] works by replacing summations with integrations when the distribution is continuous .as shown in the previous section , when the sequence of subsets of has the running intersection property , the expectation ( [ expectation ] ) can be evaluated efficiently with the recursive summation / integration technique proved in theorem [ thm : recursive ] . however , in general , does not have this property . 
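As a quick check of one entry in the list above: conditioning independent binomials with a common success probability on their total yields a (multivariate) hypergeometric law, which for two components is the ordinary hypergeometric distribution. The simulation below verifies this with arbitrary parameters.

```python
import numpy as np
from scipy.stats import hypergeom

# Sketch: if X1 ~ Bin(n1, p) and X2 ~ Bin(n2, p) independently, then given
# X1 + X2 = t the first component follows Hypergeometric(n1 + n2, n1, t),
# whatever p is -- the explicit conditional law cited above for the binomial case.
rng = np.random.default_rng(7)
n1, n2, p, t = 8, 12, 0.3, 6

x1 = rng.binomial(n1, p, size=1_000_000)
x2 = rng.binomial(n2, p, size=1_000_000)
kept = x1[(x1 + x2) == t]                              # condition on the total

support = np.arange(0, min(n1, t) + 1)
empirical = np.array([(kept == k).mean() for k in support])
exact = hypergeom.pmf(support, n1 + n2, n1, t)         # population n1+n2, n1 successes, t draws
print(np.round(np.vstack([empirical, exact]), 4))
```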
to apply this technique to general cases ,one method is to prepare another sequence having the running intersection property such that each is included into at least one of .if we have such , by defining a function so that , the expectation ( [ expectation ] ) is written as , \quad \chi_i(x_{b_i } ) : = { \prod}_{j\in \tau^{-1}(i)}\chi_j(x_{z_j}),\ ] ] which can be dealt with by theorem [ thm : recursive ] .such a sequence , , is known to be obtained through a chordal extension in algorithm [ alg : chordal_extension ] . here , we summarize some notions and basic facts on chordal graphs .let be a connected undirected graph , where is a set of vertices and is a set of edges . is called complete if every pair of vertices is joined by an edge . for a subset ,let denote a subgraph induced by .that is , when is complete , is called a clique .a clique that is maximal with respect to inclusion is called a maximal clique . is called chordal if every cycle with length greater than three has a chord .let be a sequence of all maximal cliques of satisfying the running intersection property ( [ rip ] ) in definition [ def : rip ] .the sequence is called perfect if for every , is a clique of .denote by the set of maximal cliques of .a vertex is called simplicial if its adjacent vertices form a clique in .a perfect elimination ordering of is an ordering of vertices of such that for every , is a simplicial in .let denote the set of simplicial vertices of in .then is called a boundary clique if there exists , , such that the following proposition on the property of chordal graphs is well known ( e.g. , and ) .let be an undirected graph .the four statements below are equivalent .\(i ) is chordal .\(ii ) has a perfect sequence of the maximal cliques .\(iii ) vertices can be ordered to have a perfect elimination ordering .we generate a perfect sequence from according to the following procedure .[ alg : chordal_extension ] * define an undirected graph with vertices and edges where . *add edges of to so that the extended graph is a chordal graph .* identify the perfect sequence of the maximal cliques of .this sequence has the running intersection property and . the procedure for step 1 is referred to as the _ chordal extension_. constructing the chordal extension suchthat the maximum size of the maximal clique is minimum is known to be a np - hard problem . in this section ,we propose a heuristic method to construct an approximately best chordal extension . for step 2 ,we provide a method proved in theorem [ thm : hara ] . the other method for the same purpose based on the maximum cardinality search procedureis also known .we now explain steps 02 in detail using a small example .suppose that the vertex set is , and the family of subsets of is given by the associated undirected graph is shown in figure [ fig : g ] ( left ) .step 1 is divided into three parts . in this substep , we renumber the vertices . consider the following procedure to make all vertices removed from the graph sequentially : first , find a vertex , say , that has the minimum number of adjacent edges in the graph .then , remove as well as its adjacent edges from the graph . then , find a vertex , say , that has the minimal number of adjacent edges in the graph with vertices .continuing with this procedure , we have a sequence of vertices .this renumbering is called the minimum degree ( md ) ordering .the matlab function symamd is available to obtain an md ordering . 
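A compact end-to-end sketch of steps 0-2, run on a small hypothetical family of windows with networkx rather than the authors' MATLAB code, is given below. The completion routine used by networkx need not coincide with the minimum-degree heuristic described above, but any chordal completion is acceptable for this purpose, and the clique ordering used here (a rooted maximum-weight clique tree, rather than the paper's theorem-based method) is one standard way to obtain a perfect sequence; the script checks the running intersection property explicitly.

```python
import itertools
import networkx as nx

# Sketch of steps 0-2 above on a hypothetical family of scan windows.
windows = [{1, 2}, {2, 3}, {3, 4}, {4, 5, 6}, {6, 7}, {1, 7}]

# Step 0: undirected graph in which every window forms a clique.
G = nx.Graph()
G.add_nodes_from(range(1, 8))
for Z in windows:
    G.add_edges_from(itertools.combinations(sorted(Z), 2))

# Step 1: chordal extension (any chordal completion will do here).
H, _ = nx.complete_to_chordal_graph(G)

# Step 2: maximal cliques, ordered along a maximum-weight clique tree; a rooted
# ordering of such a tree is a perfect sequence.
cliques = [frozenset(c) for c in nx.chordal_graph_cliques(H)]
CG = nx.Graph()
CG.add_nodes_from(range(len(cliques)))
for i, j in itertools.combinations(range(len(cliques)), 2):
    w = len(cliques[i] & cliques[j])
    if w > 0:
        CG.add_edge(i, j, weight=w)
tree = nx.maximum_spanning_tree(CG)
order = [0] + [v for _, v in nx.bfs_edges(tree, 0)]
B = [cliques[i] for i in order]

# Every window sits inside some maximal clique, and the sequence satisfies RIP.
assert all(any(set(Z) <= c for c in cliques) for Z in windows)
for k in range(1, len(B)):
    sep = B[k] & set().union(*B[:k])
    assert any(sep <= B[j] for j in range(k)), "running intersection property violated"
print("perfect sequence:", [sorted(c) for c in B])
```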
in our example , we illustrate the resulting renumbered graph in figure [ fig : g ] ( right ) .[ cols= " < , < , < " , ] [ fig : part ]in this paper , we proposed a recursive summation method to evaluate a class of expectation in multinomial distribution , and applied it to the evaluation of the -values of temporal and spatial scan statistics .this approach enabled us to evaluate the exact multiplicity - adjusted -values .our proposal has an advantage where the true -value is too small and is barely estimated precisely by monte carlo simulations .the proposed algorithm is easily modified to a class of distributions including the normal distribution , the dirichlet distribution , the multivariate hypergeometric distribution , and the dirichlet - multinomial distribution by replacing recursive summations with recursive numerical integrations if necessary . on the other hand ,our proposed method has a limitation that it only works when the total data count and the window size are not large .the limitation is due to , ( the sizes of the maximum clique ) , and . as shown in section [ subsec : yamagata ] , when the size of the scan windows is large , we can divide the whole scan windows into a number of groups , then compute the -value for each group , and sum them .we have implemented the proposed algorithms .however , they are still under development .our algorithm requires for loop calculations , where the numbers of the nests and the ranges of running variables depend on the input ( scan windows ) .the dimensions of the arrays also depend on the data .these make source coding complicated , and the resulting code inefficient .one approach to overcome this difficulty may be the use of a preprocessor to generate the c program .this approach has been successfully used in computer algebra ( e.g. , page 674 of ) .the authors are grateful to takashi tsuchiya , anthony j. hayter , nobuki takayama , and yi - ching yao for their helpful comments .this work was supported by jsps kakenhi grant numbers 21500288 and 24500356 .blair , j.r.s . andpeyton , b.w .an introduction to chordal graphs and clique trees , in `` graph theory and sparse matrix computation , '' a. george , j.r .gilbert and j.w.h.liu , eds . ,pp.129 , springer , new york .buneman , p. ( 1974 ) . a characterization on rigid circuit graphs , _ discrete mathematics _ , * 9 * ( 3 ) , 205212 . fukuda , m. , kojima , m. , murota , k. and nakata , k. ( 2000 ) . exploiting sparsity in semidefinite programming via matrix completion i :general framework , _ siam journal on optimizations _ , * 11 * ( 3 ) , 647674 .hara , h. and takemura , a. ( 2006 ) .boundary cliques , clique trees and perfect sequences of maximal cliques of a chordal graph , arxiv : cs/0607055 .hara , h. and takemura , a. ( 2010 ) . a localization approach to improve iterative proportional scaling in gaussian graphical models , _ communications in statistics : theory and methods _ , * 39 * ( 8 - 9 ) , 16431654 .hirotsu , c. and marumo , k. ( 2002 ) .changepoint analysis as a method for isotonic inference , _ scandinavian journal of statistics _ , * 29 * ( 1 ) , 125138 .koyama , t. , nakayama , h. , nishiyama , k. and takayama , n. ( 2014 ) .holonomic gradient descent for the fisher - bingham distribution on the -dimensional sphere , _ computational statistics _ , * 29 * ( 3 - 4 ) , 661683 .kulldorff , m. ( 1997 ) . a spatial scan statistic , _ communications in statistics : theory and methods _ , * 26 * ( 6 ) , 14811496 .kulldorff , m. 
( 2006 ) .tests of spatial randomness adjusted for an inhomogeneity : a general framework , _ journal of the american statistical association _ , * 101 * ( 475 ) , 12891305 . neff , n.d . and naus , j.i .( 1980 ) . _selected tables in mathematical statistics , vol.vi : the distribution of the size of the maximum cluster of points on a line _ , american mathematical society , providence , rhode island .worsley , k.j .confidence regions and tests for a change - point in a sequence of exponential family random variables , _ biometrika _ , * 73 * ( 1 ) , 91104 ., guo , j.h . andhe , x. ( 2011 ) . an improved iterative proportional scaling procedure for gaussian graphical models , _ journal of computational and graphical statistics _ , * 20 * ( 2 ) , 417431 .xu , p.f . , guo , j.h . ,tang , m.l .( 2014 ) . a localized implementation of the iterative proportional scaling procedure for gaussian graphical models , _ journal of computational and graphical statistics _ ,* 24 * ( 1 ) , 205229 .
let be a finite set of indices , and let , , be subsets of such that . let , , be independent random variables , and let . in this paper , we propose a recursive computation method to calculate the conditional expectation $ ] with given , where is an arbitrary function . our method is based on the recursive summation / integration technique using the markov property in statistics . to extract the markov property , we define an undirected graph whose cliques are , and obtain its chordal extension , from which we present the expressions of the recursive formula . this methodology works for a class of distributions including the poisson distribution ( that is , the conditional distribution is the multinomial ) . this problem is motivated from the evaluation of the multiplicity - adjusted -value of scan statistics in spatial epidemiology . as an illustration of the approach , we present the real data analyses to detect temporal and spatial clustering . _ keywords and phrases : _ change point analysis , chordal graph , graphical model , markov property , spatial epidemiology .
the strong sensitivity of certain quantum states to small variations of external parameters opens up great opportunities for devising high - precision measurements , e.g. of length and time , with unprecedented accuracy .a particularly important physical measurement technique is interferometry .its numerous variations include ramsey spectroscopy in atomic physics , optical interferometry in gravitational wave detectors , laser gyroscopes and optical imaging to name but a few .understanding limits on its performance in realistic situations under given resources is therefore of fundamental importance to metrology . in this paperwe examine the fundamental limits of the precision of optical interferometry in the presence of photon losses for quantum states of light with definite photon number .optical interferometry aims to estimate the relative phase of two modes , or two `` arms '' , of the interferometer .this estimation process requires a certain amount of resources which is typically identified to be the number of photons , , used for the measurement . the best precision which can be obtained using classical states of light scales like , the so - called standard quantum limit ( sql ) . using non - classical states of light this precisioncan be greatly improved , ideally leading to heisenberg - limited scaling , .indeed , recent years have seen many experimental proof - of - principle demonstrations of beating the sql using quantum strategies in various interferometric setups . unfortunately , highly non - classical states of light which potentially lead to heisenberg - limited sensitivity are very fragile with respect to unwanted but unavoidable noise in experiments . in quantum - enhanced optical interferometry ,the loss of photons is the most common and potentially the most devastating type of noise that one encounters .in particular it was noted that highly entangled quantum states , optimal for interferometry in the lossless case n00n states are extremely fragile . even for moderate lossesthey are outperformed by purely classical states .a different approach has been taken in , where the noise arising from imperfect preparation of a state has been investigated . 
in ,the first systematic approach was taken in order to determine the structure of optical states optimal for interferometry in the presence of losses .the best possible precision using input states with definite photon number was given .in this paper we elaborate and extend the ideas presented in .our treatment is based on general quantum measurement theory .the quantity of interest is the lowest possible uncertainty attainable in parameter estimation , inversely proportional to the square root of the quantum fisher information .the quantum fisher information depends only on the state of the system and not on the measurement procedure .we show that the optimization of the quantum fisher information can be done effectively over the class of input states which have a definite photon number .since these input states are subject to unavoidable photon losses they will degrade into mixed states and their suitability for phase estimation is compromised .our optimization takes this into account yielding the most suitable input states in the presence of photon losses leading to the highest possible quantum fisher information , and hence to the best possible precision .we give a detailed description of the noise model and calculate an analytic expression for the quantum fisher information .the latter is shown to be a concave function on a convex set and therefore suited for efficient convex optimization methods .we numerically determine the optimal input states , compare them to alternative quantum and classical strategies , and show that they can beat the sql .we note that the corresponding precision , which lies between the sql and the heisenberg limit ( depending on the loss rates ) , defines the best possible precision for optical two - mode interferometry .in addition to this we discuss a measurement procedure that allows one to achieve the optimal precision , in terms of a positive operator - valued measure ( povm ) , and the possibility of using states with indefinite photon number or distinguishable photons .we show that neither of these generalizations improves the estimation precision , and consequently the state with definite photon number and indistinguishable photons are optimal .the paper is organized as follows . in sec .[ sec : phasestimation ] a general scheme of quantum phase estimation is presented , and the notion of optimality is defined . in sec .[ sec : fisher ] we introduce the quantum fisher information and discuss its most important properties . in sec .[ sec : interferometry ] we derive an explicit formula for the quantum fisher information and prove that it is a concave function of input state parameters . in sec .[ sec : optimalstates ] we discuss the structure of the optimal states for interferometry and compare them to alternative strategies and states . in sec .[ sec : measurement ] we discuss a measurement with which it is possible to achieve optimal precision . 
finally , in sec .[ sec : generalstates ] we discuss possible generalizations of the considered quantum states , particularly states with indefinite photon number and the case when photons are distinguishable .acquires a phase relative to channel .measurements are performed on the output state yielding an estimated value , , of the phase .the beam splitters symbolize photon losses.,width=302 ] we consider a general interferometer with two arms as shown in fig .a pure input state is fed into the interferometer and acquires a phase in the channel relative to the channel .both channels , or `` arms '' , of the interferometer are subject to photon losses which can be modelled by fictitious beam splitters inserted at arbitrary locations in both channels .the output of the interferometer therefore needs to be described in general by a mixed state . a measurement , represented by a positive operator valued measure ( povm ) which defines a probability distribution for the measurement outcomes , ,\ ] ] is subsequently performed on the output state .an estimated value of the true phase is obtained by applying an estimator that assigns to a particular measurement result an estimated value .we aim to estimate the phase as precisely as possible .two more elements need to specified in order to make the problem of finding the optimal phase estimation strategy well defined : an a priori knowledge on the phase distribution and a cost function which can be seen as a measure for the uncertainty in the estimated phase . for phase estimation ,the optimal choice of the input state , measurement and estimator is the one that minimizes the average cost function at this point two different approaches are most often pursued . in the _ global approach _one assumes initial ignorance about the actual value of , which corresponds to the choice .the solution of the problem then yields an estimation strategy which performs equally well irrespectively of the actual value of the estimated phase . in the _ local approach _, on the other hand , the assumption is that the value of the actual phase lies in the vicinity of a known phase .more precisely , the a priori probability is chosen to be , while the estimator is required to be _ _ locally unbiased _ _ the above condition is equivalent to a statement that the estimator will on average yield the true value of up to the first order in .notice that without local unbiasedness the estimation problem would be trivial ( and also useless ) since in order to minimize in eq .( [ eq : cost ] ) , with , the optimal choice for the estimator would be simply , and the choice of the measurement would be irrelevant .the local approach is useful when we are interested in small deviations of the phase from a known one .a significant advantage of the local approach over the global one is that for a natural choice of a quadratic cost function , there exist explicit lower bounds on based on the fisher information ( see sec .[ sec : fisher ] ) . in many practical situationsthese bounds are tight and the optimization over the measurement and the estimator can be avoided. 
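As a concrete illustration of the Fisher-information bounds used in the local approach, consider a toy model (not taken from the paper): a single photon leaves the interferometer through one of two ports with probabilities cos²(φ/2) and sin²(φ/2). The snippet below computes the classical Fisher information of this measurement, the Cramér–Rao bound for ν independent repetitions, and a maximum-likelihood estimate from simulated detection events.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_click(phi):
    """Toy interferometric model: photon detected in port 0 with
    probability cos^2(phi/2), in port 1 otherwise."""
    return np.array([np.cos(phi / 2) ** 2, np.sin(phi / 2) ** 2])

def fisher_information(phi, dphi=1e-6):
    """F = sum_x (dp_x/dphi)^2 / p_x, here evaluated by finite differences."""
    p = p_click(phi)
    dp = (p_click(phi + dphi) - p_click(phi - dphi)) / (2 * dphi)
    return np.sum(dp ** 2 / p)            # equals 1 for this model

phi0, nu = 0.7, 10_000                    # true phase, number of repetitions
F = fisher_information(phi0)
print("Cramer-Rao bound on the std dev:", 1 / np.sqrt(nu * F))

# maximum-likelihood estimate from nu simulated detection events
clicks = rng.random(nu) < p_click(phi0)[1]          # True -> port 1
n1 = clicks.sum()
phis = np.linspace(0.01, np.pi - 0.01, 2000)
loglik = n1 * np.log(np.sin(phis / 2) ** 2) + (nu - n1) * np.log(np.cos(phis / 2) ** 2)
print("ML estimate of the phase:", phis[np.argmax(loglik)])
```

Here F = 1 per photon, so ν detected photons give Δφ ≥ 1/√ν, i.e. exactly the standard-quantum-limit scaling; repeating the simulation many times shows the spread of the ML estimates approaching this bound, in line with the asymptotic achievability of the Cramér–Rao bound.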
moreover , the local approach may also be useful in situations when there is no a priori knowledge on the phase .if many copies of a state are given , one can first perform a rough measurement ( even not optimal ) on a small fraction of copies in order to narrow down the range of potential values of so that they lie in a vicinity of a known phase , and then perform the optimal estimation using the local approach .this strategy will yield a high accuracy estimation for the global approach , without the need of optimizing the measurement and the estimator , since the rough measurement performed on small fraction of copies , even if not optimal , will not significantly influence the final accuracy .in what follows we take the local approach . since in this casewe deal with small deviations of estimated phases from the true one , it is natural to choose as the cost function .our goal is to find the optimal state , measurement and locally unbiased estimator minimizing the expression ^ 2.\ ] ] to minimize the standard deviation given by the above formula we use an upper bound on based on the fisher information . for a given povm and state defining the probabilities ,the cramr - rao inequality bounds the variance that can be obtained using any locally unbiased estimator , where the fisher information is given by if an experiment is repeated times the bound reads for large the cramr - rao bound is asymptotically achieved by the maximum likelihood estimator .optimization over the measurements yields the quantum cramr - rao bound where the quantum fisher information is given by .\ ] ] the hermitian operator is called the `` symmetric logarithmic derivative '' ( sld ) and is implicitly defined via the relation .\ ] ] in the eigenbasis of , is given by {ij},\ ] ] where and the are the eigenvalues of ( whenever we set ) .it has been shown that a measurement saturating the quantum cramr - rao bound exists and is given by a projective measurement on the eigenbasis of .for the sake of completeness we state some important properties of ( see e.g. ) : \(i ) let , be two density matrices supported on orthogonal subspaces , , which do not cease to be orthogonal for an infinitesimal change of , i.e. , then is linear on the direct sum \nonumber\\ & = pf_q[\rho(\varphi ) ] + ( 1-p)f_q[\sigma(\varphi)].\end{aligned}\ ] ] ( ii ) is convex \nonumber\\ & \leq pf_q[\rho(\varphi ) ] + ( 1-p)f_q[\sigma(\varphi)]\end{aligned}\ ] ] ( iii ) for pure states , reads ,\ ] ] where .property ( i ) is due to the fact that the sld for is a direct product of slds for and , respectively .property ( ii ) is a consequence of the fact that by ( i ) the right hand side of ( ii ) can be viewed as the quantum fisher information of the state , where , are orthogonal ancillary states , while the left hand side is of the state after tracing out the ancillary system .furthermore , is non - increasing under stochastic operations ( tracing out the ancilla is an example ) . property ( iii ) is a consequence of eqs .( [ eq : qcr],[eq : sld ] ) , since for a pure state the measurement saturating the quantum cramr - rao bound in this case is a von neumann measurement projecting on any orthonormal basis containing two vectors where is the normalized vector orthogonal to lying in the space spanned by and .assuming that we have photons at our disposal , we aim to find the input state that allows performing phase estimation with the best precision possible , i.e. 
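The eigenbasis expression for the quantum Fisher information can be evaluated numerically for any parametrized density matrix. The sketch below does so with a finite-difference derivative and checks the pure-state property (iii) on a single-qubit example; the variable names, step size, and tolerance are implementation choices of this sketch.

```python
import numpy as np

def quantum_fisher_information(rho_of_phi, phi, dphi=1e-6, tol=1e-12):
    """Numerical QFI from the eigen-decomposition formula
    F_Q = sum over i,j with l_i + l_j > 0 of 2 |<i| drho/dphi |j>|^2 / (l_i + l_j)."""
    rho = rho_of_phi(phi)
    drho = (rho_of_phi(phi + dphi) - rho_of_phi(phi - dphi)) / (2 * dphi)
    lam, U = np.linalg.eigh(rho)
    drho_eig = U.conj().T @ drho @ U        # derivative expressed in the eigenbasis of rho
    F = 0.0
    for i in range(len(lam)):
        for j in range(len(lam)):
            s = lam[i] + lam[j]
            if s > tol:
                F += 2 * abs(drho_eig[i, j]) ** 2 / s
    return F

# sanity check on a pure qubit state |psi> = (|0> + e^{i phi}|1>)/sqrt(2):
# property (iii) gives F_Q = 4(<psi'|psi'> - |<psi|psi'>|^2) = 1
def rho_pure(phi):
    psi = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
    return np.outer(psi, psi.conj())

print(quantum_fisher_information(rho_pure, 0.3))    # ~ 1.0
```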
yielding the highest value of the quantum fisher information .in particular we consider the most general pure two - mode input state with definite photon number , where abbreviates the fock state .this class of states includes the n00n state which , in the absence of losses , leads to heisenberg limited precision , but is very fragile in the presence of noise .we are therefore looking for states which lead possibly to a lower precision than the n00n state , but which are more robust with respect to photon losses .moreover , although states of the form ( [ eq : input ] ) seem to be a restriction , we show in sec .[ sec : generalstates ] that our treatment effectively includes states with indefinite photon number . in the following subsectionswe show how states of the form ( [ eq : input ] ) are influenced by photon losses , calculate its quantum fisher information and show that the latter can be maximized ( thus minimizing ) by means of convex optimization methods .losses are modeled by fictitious beam splitters of transmissivity , in channels and respectively , and cause a fock state to evolve into where represents the state of two ancillary modes carrying and photons lost from modes and respectively , while including the phase accumulation and tracing out the ancillary modes results in the output density matrix where is the conditional pure state corresponding to the event when and photons are lost in modes and respectively , and is the normalization factor corresponding to the probability of that event .equivalently , the loss process can be described by a master equation for two independently damped harmonic oscillators with loss rates , where is time , the solution of which is given by with kraus operators where is the annihilation operator for mode , and analogously for mode .this state acquires a phase through the transformation .notice that thanks to the relation we can commute the phase operator with the kraus operators since the phase terms cancels out .it is therefore irrelevant if photons are lost before , during or after channel acquires its relative phase with respect to . using eq .( [ eq : rhoout ] ) for the output state , one can calculate with the help of eqs .( [ eq : qcr ] ) and ( [ eq : sld ] ) .this requires diagonalization of which can be carried out in the case of one - arm losses . in the more general case of losses in both arms an analytic calculation of out to be infeasible . nevertheless , we are able to determine an upper bound to the quantum fisher information which , although not strictly tight for general input states , is very close to for the states we consider in sec . [sec : optimalstates ] .we consider first the case , , i.e. when losses are present in only one arm .as can be seen from eq .( [ eq : rhoout ] ) , in this case only states with contribute to . moreover , we have ,hence we can write the output state as a direct sum making use of eqs .( [ eq : lindir ] ) and ( [ eq : fishpure ] ) we get a formula for with explicit dependence on the input state parameters , where . in a more compact way the above formula can be rewritten as where is a vector containing variables , while the elements of the vector and the matrix are given by if losses are present in both arms , then , in the most general case , all contribute to .states with different total number of lost photons , , are still orthogonal . 
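Under one-arm losses the output state is a direct sum of conditional pure states, so by property (i) the quantum Fisher information reduces to a p_l-weighted sum of pure-state contributions, each equal to four times the photon-number variance in the lossy arm for the corresponding conditional state. The sketch below implements this decomposition for an input Σ_k x_k |k, N−k⟩ with transmissivity η in arm a and no loss in arm b (variable names are mine); for η = 1 it reproduces the lossless N00N value F_Q = N².

```python
import numpy as np
from math import comb

def qfi_one_arm_loss(x, eta):
    """QFI of sum_k x_k |k, N-k> when arm a (first index) has transmissivity eta
    and arm b is lossless, via the direct-sum (pure-state) decomposition."""
    x = np.asarray(x, dtype=float)
    N = len(x) - 1
    F = 0.0
    for l in range(N + 1):                          # l photons lost from arm a
        k = np.arange(l, N + 1)
        w = (x[k] ** 2
             * np.array([comb(int(kk), l) for kk in k])
             * eta ** (k - l) * (1 - eta) ** l)     # |c_k^{(l)}|^2, unnormalised
        p_l = w.sum()
        if p_l <= 0:
            continue
        q = w / p_l                                 # conditional distribution of k
        mean = (q * k).sum()
        var = (q * k ** 2).sum() - mean ** 2
        F += p_l * 4.0 * var                        # pure-state QFI = 4 Var(k)
    return F

# N00N state (|N,0> + |0,N>)/sqrt(2)
N = 4
x_noon = np.zeros(N + 1)
x_noon[0] = x_noon[N] = 1 / np.sqrt(2)
print(qfi_one_arm_loss(x_noon, 1.0))   # 16.0 = N^2 (lossless Heisenberg limit)
print(qfi_one_arm_loss(x_noon, 0.9))   # well below 16: the N00N state degrades quickly
```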
using eq .( [ eq : lindir ] ) we can therefore write ,\ ] ] where ] , maximization of this expression over yields where the optimal choice of for the three regimes indicated above is , ( i.e. the solution of ) and .we see that for very small transmissivities the precision is equal to the sil , specified in eq .( [ eq : sil ] ) . for higher transmissivities the chopping strategy beats the sil , although only by a constant factor rather than in terms of scaling . for very high transmissivities the best strategy is to use un - chopped n00n states .we note that a similar strategy for multipartite qubit states has been devised in .as mentioned above the n00n state ceases to be optimal below a threshold in which case the coefficient obtains a nonzero value . by adding an infinitesimal change to at the expense of ( which we assume are kept in the proportion which is optimal in the absence of ), we can calculate the corresponding change in the quantum fisher information .the change of by increases by , but at the same time due to decreasing weight of the other coefficients decreases it by . on the wholethe change of reads : substituting from eq .( [ eq : fisher1mode ] ) at , , ( ) and calculating we determine the value below which is positive .this implies that for an increase in will increase .writing explicitly we get : .\end{gathered}\ ] ] the roots of the expression in square brackets can be found numerically .there is one real root in the interval ] defining .since the polynomial is rather cumbersome we do not quote it here , however , the relevant root is again given in very good approximation by where is obtained by a fit to the roots between and .for example for ( corresponding to figs .[ fig : optstates2modes10 ] and [ fig : optfisher2modes10 ] ) we get .analogous to the case of losses in only one arm we can examine a n00n chopping strategy by sending `` smaller '' n00n states through the interferometer .the corresponding precision reads figure [ fig : optstates2modes10 ] shows the estimation precision for the optimal state ( blue line ; here we use ) , the n00n state ( red line ) , the n00n chopping strategy ( dark green line ) , and the state ( [ eq : stateoptglobal ] ) ( bright green line ) . in the latter casethe precision is indeed quite high in the regime of high losses , outperforming the n00n chopping strategy but for small losses it is significantly worse than both the optimal and the n00n chopping strategies . ) , outperforms even the optimal states we have presented , arise from taking a different perspective in quantifying the resources .see sec .[ sec : indefiniten ] for more discussion . ]as discussed in sec . [ sec : fisher ] , a measurement saturating the quantum cramr - rao bound always exists . for the cases whenthe states introduced in sec .[ sec : interferometry ] are orthogonal for different ( i.e. when ) we can derive an explicit povm saturating the bound . to this endwe first assume that we perform a measurement determining the total number of photons lost , , which projects the system onto a particular pure state .subsequently , we perform the measurement saturating the cramr - rao bound on pure states ( see sec . [sec : fisher ] ) , i.e. a projection on a basis containing the vectors where , are the mean photon number and the variance of the photon number in mode for the state .the above povm depends , in general , on the phase .there are cases in which a single povm saturating the cramr - rao bound for all values of can be found . 
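The coefficients maximizing the quantum Fisher information can also be found numerically. The sketch below is restricted to real, non-negative amplitudes and uses a generic constrained solver rather than the dedicated convex-optimization routine of the paper; since F_Q is concave in the weights p_k = x_k², a local optimum found this way is also global. It compares the resulting precision with the N00N state at a moderate loss level.

```python
import numpy as np
from math import comb
from scipy.optimize import minimize

def qfi_one_arm_loss(x, eta):
    # same direct-sum formula as in the previous sketch
    x = np.asarray(x, dtype=float)
    N = len(x) - 1
    F = 0.0
    for l in range(N + 1):
        k = np.arange(l, N + 1)
        w = (x[k] ** 2 * np.array([comb(int(kk), l) for kk in k])
             * eta ** (k - l) * (1 - eta) ** l)
        if w.sum() <= 0:
            continue
        q = w / w.sum()
        F += w.sum() * 4.0 * ((q * k ** 2).sum() - (q * k).sum() ** 2)
    return F

def optimal_state(N, eta):
    """Maximise F_Q over the weights p_k >= 0, sum p_k = 1 (x_k = sqrt(p_k))."""
    p0 = np.full(N + 1, 1.0 / (N + 1))              # start from flat weights
    res = minimize(lambda p: -qfi_one_arm_loss(np.sqrt(np.abs(p)), eta), p0,
                   bounds=[(0, 1)] * (N + 1),
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1}])
    p = np.abs(res.x) / np.abs(res.x).sum()
    return np.sqrt(p), -res.fun

N, eta = 6, 0.7
x_opt, F_opt = optimal_state(N, eta)
x_noon = np.zeros(N + 1)
x_noon[[0, N]] = 1 / np.sqrt(2)
print("optimal state  dphi =", 1 / np.sqrt(F_opt))
print("N00N state     dphi =", 1 / np.sqrt(qfi_one_arm_loss(x_noon, eta)))
print("optimal weights p_k =", np.round(x_opt ** 2, 3))
```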
in a single photon case ( ) ,states of the form , allow for a -independent povm , whereas states with unbalanced weights of , and terms lead to optimal povms which are necessarily -dependent .moreover , for an photon state , a single , -independent povm saturating the cramr - rao bound can be found provided that the state enjoys the path - symmetry property : , where is a fixed phase the same for all components . in general , due to losses the conditional pure states that appear in our problem lack the above mentioned symmetry and consequently the optimal povm varies with the change of .one could argue that the photon state given by eq .( [ eq : input ] ) does not represent the most general way of using photons to determine the relative phase in an interferometer : instead of considering a state with a _definite _ photon number , one could consider a state which is a superposition of terms with different total photon number , and only imply a constraint that the _ mean _ photon number is .furthermore , by using states of the form ( [ eq : input ] ) it is assumed that all photons in an arm of the interferometer are indistinguishable , i.e. occupy a common spatio - temporal mode .this greatly limits the class of considered states since the hilbert space dimension of states sent into the interferometer would be of the order if the photons were distinguishable , while for indistinguishable photons it is . in the next two subsectionswe show that none of these generalizations improves the phase estimation precision , and consequently it is sufficient to consider the states of the form given by eq .( [ eq : input ] ) .we emphasize that in this paper we consider closed systems , i.e. there are no additional reference beams , neither classical nor quantum , since any additional beam would contain photons and should therefore be explicitly taken into account for the determination of the required resources .consider a superposition of input states of the interferometer with different , but definite photon numbers , where the mean total photon number is , i.e. terms with different evolve with different frequencies . in the absence of an additional reference beam which allows for clock synchronization between the sender and the receiver the relative phases between terms with different become unobservable and the state given by eq .( [ eq : superpos ] ) is physically equivalent to a mixture , moreover , since the quantum fisher information is convex ( see sec .[ sec : fisher ] ) , we have consequently it is always better to send a state with a fixed number of photons , with probability , rather than to use a superposition ( which is effectively a mixture ) .the analysis can thus be restricted to states with definite photon number without compromising optimality . in this subsectionwe consider general photon input states where the photons are distinguishable which is the case , e.g. , if they are sent in different time bins .a state of this type can be written as where represents a binary sequence of length , and , .summation from a sequence to a sequence should be understood using partial ordering between sequences , i.e. . therefore the summation is performed over all binary sequences .furthermore denotes a state of photons , where means that the photon sent in the -th time bin propagates in arm ( ) of the interferometer .mathematically , we deal here with an arbitrary -qubit state , which lives in a dimensional space . 
notice , that the indistinguishable case is recovered once we consider only the fully symmetric ( bosonic ) subspace of qubits . can be calculated in a similar way as in sec .[ sec : calcfisher ] and reads where boldface symbols denote binary strings of length , , denotes the number of in the sequence , and again provided that knowledge from which mode a photon was lost is not relevant or self - evident , e.g. when .let us now symmetrize the state of photons .this corresponds to replacing with , where the summation is performed over all permutations of an element set .notice that the first term within the parenthesis in eq .( [ eq : fishersm ] ) is not affected by symmetrization .let us denote and .then the subtracted term in eq .( [ eq : fishersm ] ) reads .performing symmetrization corresponds to replacing and with and respectively .since are positive , the following inequality holds ^ 2}{1/2(b_{\boldsymbol{l}_a\boldsymbol{l}_b}+ b_{\boldsymbol{l}^\prime_a\boldsymbol{l}^\prime_b})},\ ] ] which shows that symmetrizing can only decrease the subtracted fraction , hence increase the fisher information .this proves that the optimal states are symmetric states living in the bosonic subspace .consequently , using indistinguishable photons is sufficient to obtain the optimal fisher information .a physical intuition behind the above derivation is the following .notice that if there are no losses , the optimal qubit state has the form it is the n00n state which lives in the fully symmetric subspace . in this casethere is no advantage in using distinguishable photons .if there are losses , things get even worse for distinguishable photons .there is still no advantage in terms of phase sensitivity , yet when a photon is lost , the _ knowledge of which photon was lost _ additionally harms the quantum superposition .hence , it is optimal to use states from a fully symmetric subspace , where it is not possible to tell which photon was actually lost .we have analyzed the optimal way of using -photon states for phase interferometry in the presence of losses .we have derived an explicit formula for the quantum fisher information in the case when losses are present in one arm of the interferometer , and have provided a useful bound for the case of losses in both arms . using the quantum fisher information as a figure of merit , we have found the optimal states , investigated their advantage over various quantum and classical strategies .the optimal measurement saturating the cramr - rao bound has been presented , and it has been proven that in general it varies depending on the phase in the vicinity of which we perform estimation .a close inspection of the properties of the quantum fisher information showed that it is optimal to use a quantum state with a definite number of indistinguishable photons .neither allowing superpositions of different total photon - number terms , nor making photons distinguishable can improve estimation precision .this research was supported by the epsrc ( uk ) through the qip irc ( gr / s82176/01 ) , the afosr through the eoard , the european commission under the integrated project qap ( contract no .015848 ) , the royal society and the polish mnisw ( n n202 1489 33 ) .
we give a detailed discussion of optimal quantum states for optical two - mode interferometry in the presence of photon losses . we derive analytical formulae for the precision of phase estimation obtainable using quantum states of light with a definite photon number and prove that maximization of the precision is a convex optimization problem . the corresponding optimal precision , i.e. the lowest possible uncertainty , is shown to beat the standard quantum limit thus outperforming classical interferometry . furthermore , we discuss more general inputs : states with indefinite photon number and states with photons distributed between distinguishable time bins . we prove that neither of these is helpful in improving phase estimation precision .
worldwide there is a great interest of governments and organizations involved in public safety and security ( pss ) towards the evolution of existing wireless systems for critical communications based on professional mobile radio ( pmr ) technologies such as terrestrial trunked radio ( tetra ) , tetra for police ( tetrapol ) , or association of public - safety communications officials - project 25 ( apco p25 ) .it is widely recognized that efficient communications are of paramount importance to deal with emergency or disaster situations .the possibility of pss operators to use a wide range of data - centric services , such as video sharing , files transmission ( e.g. , maps , databases , pictures ) , ubiquitous internet and intranet access , has a strong impact on the efficiency and the responsiveness of the emergency services . however , pmr systems are based currently on 2 g networks , thus offering a wide range of voice services , but having a limited possibility to support data services . even if some efforts have been done to enhance pmr systems and to offer higher communication capacity , achievements are still behind those made in the commercial world that recently has developed the 3gpp long term evolution ( lte ) technology .hence , there is a great consensus in adopting the commercial lte framework to answer to the pss communication needs . a common standard for commercial and pss environments can open the door to new opportunities and synergies , offering advantages to both worlds .however , the lte technology needs some specific enhancements to be fully compliant with the mission and safety critical requirements .indeed , pmr networks must be reliable , secure and resilient , guaranteeing service accessibility and wide coverage as well .in addition pss operators need some specific applications and functionalities such as push - to - talk , dispatch services , priority management , group communications and direct communications .therefore , adopting lte as pmr broadband technology needs that these features are included in the future releases of the 3gpp standard also guaranteeing interoperability with actual narrowband pmr systems . in this paper ,[ req ] analyses the main requirements of pmr communications .the use of lte for critical communications and the description of critical services currently not supported by the lte standard is given in sec .[ lte - pss ] underling the need of further research activity .[ ap ] proposes an evolution path of the pmr lte - based network architecture .finally the attention is devoted to group communications in sec .[ gc ] , where the recommendations that are under evaluation by the 3gpp are analysed and on their basis our proposal is detailed .conclusions are drawn in sec .mission critical communications are characterized by different and more severe requirements respect to commercial communications . in particular, we can distinguish between typical narrowband pmr and new broadband requirements and functionalities : * * pmr narrowband requirements and functionalities * * * _ high reliability and availability ._ the system shall be available for 99% of time in 24 hours and 99.9% of time in a year .it shall cover 96% of the area in outdoor environment and 65% in indoor .* * _ half - duplex and full - duplex voice calls_. * * _ fast call setup time _ lower than 300 . * * _ push to talk _ management in half - duplex communications . * * _ call priority and preemption_. 
the system shall assign different levels of priority to calls and interrupt low priority calls on arrival of high priority calls that do not find available resources . * * _ direct mode communications _ between terminals without the support of network infrastructure . * * _ text message service _ , e.g. , the short data service of tetra system . * * _ network interoperability_. communications with users located on _ external networks _ or pstn ( public switched telephone network ) . * * _ emergency calls_. * * _ group calls_. gcs allow a user to talk simultaneously with several users belonging to a group that can be predefined or formed on - demand. the network shall be able to permit the coexistence of many active groups at the same time .a real life scenario comprises an average of 36 voice groups corresponding at least to 2000 users in an area .it is expected that up to 500 users can participate in a group . moreover ,each user can be registered to many groups at same time . * * pmr broadband requirements and functionalities * * * _ data communications _ for fax and image transfer . * * _ synchronous video transmission _ that consists of a bidirectional communication composed of different data , audio and signaling flows at 256 kbps . in order to fulfil all the above requirements and functionalitiesit has been evaluated that a bandwidth of 10mhz is needed . as a consequenceone open point is to identify an harmonised frequency band allocation for the new pmr systems that takes into account different national allocation policies , interferences towards other existing systems and economic convenience .this section describes the main communication features that shall be gradually introduced in the future lte standard releases in order to satisfy the typical needs of public safety organization .indeed , currently the lte system does not provide services considered vital for the pss context .direct connection among devices ( direct mode operation - dmo ) is a mandatory feature for pmr systems .it allows pss users to communicate without the involvement of the network infrastructure , e.g. , if the network is not available due to a failure or lack of coverage .current pmr systems foresee the use of dmo for direct connection between user terminals ( back - to - back ) , but also between the users and special terminals that operate as relay and/or gateway towards the trunked mo ( tmo ) network .the 3gpp is working to introduce in the lte standard the direct communication as a new service named _proximity service _ ( prose ) .in particular the technical report deals with the definition of an advanced network architecture able to support prose service either for commercial or pss applications .the service definition is on going and several proposals are under investigation .in general we can state that the prose foresees two different operative modes : * network assisted * not network assisted in the first case the network assistance is required to authenticate terminals , allocate resources and manage the security .the network provides to terminals the set of parameters needed for the call management . in the second case ,the connection is activated directly by the terminals without any network involvement , using parameters already known by the user equipment ( ue ) and pre - allocated resources . 
for commercial applicationsonly the network assisted mode is considered , while for pss applications both modes are allowed because in this case the prose shall guarantee ues in proximity to communicate under any network condition . in network assisted solutions we can also distinguish between : * _ full network control _ :the direct link among the ues is handled by the network including control ( connection set - up , maintenance ) and data planes .the communications occur on licensed bands and the resources can be allocated dynamically or in a semi - static mode by the network ; * _ partial network control _ :the network is involved only in the authentication and authorization phase , but the direct connections between the ues are initialized in autonomous way .usually the resources are pre - allocated to this kind of service .even if broadband services are gaining importance in emergency and rescue operations , voice call still remain a fundamental service even for future pmr systems. however , this appears to be in conflict with the lte architecture that is full - ip based and mainly developed for data communications , hence , unable to support the circuit switching mode typical for classical voice calls . in ltethe implementation of voice service is following an evolution path starting from circuit switching fall back approach that relies on legacy networks . in pmr environmentthis implies a transition phase during which voice service will be supported by the existing pmr systems . at a later stage, voice calls will rely on voice over lte ( volte ) , an advanced profile defined by global system for mobile communications association ( gsma ) , based on ip multimedia subsystem architecture ( ims ) .volte supplies basic functions as call establishment and termination using session initiation protocol ( sip ) , call forwarding , calling i d presentation and restriction , call - waiting and multiparty conference .however , volte does not support primary functions of mission critical communications , i.e. , group and direct communications .although ims supports open mobile alliance ( oma ) push - to - talk over cellular ( poc ) , which allows up to 1000 multicast groups and 200 users in one group , it is not suitable for critical communications because of the high call set - up time that reaches 4000 in case of pre - established session and 7000 if the session has to be created . thus the adoption of poc needs some specific network solutions .another important aspect of volte is the lack of a suitable security level due to absence of end - to - end encryption mechanism typical of pmr communications .communication security is of paramount importance in pss networks whose main aspects are encryption , authentication , provisioning and user management. however , the details concerning security are out of the scope of this paper .the interested reader can refer to for some examples of promising approaches on how to face the main security issues in a public safety network .the group call ( gc ) represents one of the most important and indispensable service of pmr networks .it enables an efficient management of the rescue teams and permits to send commands to all the pss operators in a disaster area and share information . 
for the above reasons ,the remaining part of this paper will be devoted on the discussion and proposal of approaches on how to manage gcs in a public safety lte - based network .the communications addressed to more than one user can be distributed with a point - to - point ( p2p ) approach , using one _ unicast _ transmission for each involved user , or with point - to - multipoint ( p2 m ) flows .+ in lte the distribution of _ multicast _ communications is demanded to the evolved multimedia broadcast multicast service ( embms ) , introduced in the 3gpp standard starting from release 10 . in particular , it t has been mainly developed for multicast and broadcast distribution of multimedia data , such as video streaming provided by external sources that act as service providers . for this reason only a downlink channel is considered .providing mission critical services in a public safety lte network involves a proper network architecture solution in order to achieve and maintain the required performance and reliability levels .hence , it easy to foresee that several research efforts and further releases of the lte standard will be needed before this task will be fully accomplished . as a consequence , in the short termthe simplest way for enabling high data rate services for pmr networks based on lte , is the use of already deployed commercial networks .3gpp provides recommendations and technical specifications for the sharing of network devices and modules between operators , commonly used in commercial networks .lte network is composed by several modules , which can be collected in different logical domains : * the _ services domain _ , managing the contents and the services to be provided to ues , injecting the traffic in the network through a service delivery platform ; * the _ evolved packet core _ ( epc ) network , mainly responsible of the control functions ; * the _ radio access network _ ( ran ) , composed by the enodebs ; * the _ users equipment _ domain .while the service domain shall be dedicated to pmr communications , due the specific services to be provided , the core network and the radio access network can be fully or partially shared between the professional and the public networks , as shown in fig .[ network_evol ] . in an initial phase the pmr network can rely completely on a commercial network , limiting to provide non mission critical professional services .this is mainly due to the fact that the public safety agencies can not control the fulfilment of the quality of service requirements , as well as the reliability and availability of the network , that are fully managed by carrier operators . from the network architectural point of view , in the first two configurations represented in the figure , the pmr operator acts as a mobile virtual network operator ( mvno ) , in core network sharing or full ran sharing configuration , respectively .the two networks partially sharing their epc can follow the gateway core network ( gwcn ) specifications of 3gpp . 
in the third configurationthe professional network expands to the eutran , with a partial coverage of the territory .the deep diffusion of the network on the area of interest is one of the major obstacle on the upgrade of current pmr networks over new and more efficient communication technologies .lte allows the sharing of part of the eutran between operators , following the multiple operator core network ( mocn ) 3gpp specification .the last presented configuration represents an autonomous lte pmr network , with full radio coverage of the territory . in this casethe pmr operator is the owner of the eutran , but some forms of passive sharing , such as mast sharing or site sharing , can be anyway implemented for reducing costs and environment impacts of the new network deployment .as stated before future lte - based pmr networks are expected to support bandwidth consuming applications such as video , imaging and data communications .in addition , multiple gcs shall be able to handle a great number of ues at the same time and in geographical areas served by several neighbour enbs ( i.e. , contiguous cells ) . in this sectionwe analyse recommendations provided by 3gpp , for the deployment of the gc service in lte networks .these focus mainly on multicast transmission mode , in particular on the lte embms framework , even if unicast transmission is taken also into account .the advantage of the unicast transmission scheme for the delivery of a gc service is that each unicast link can be tailored to the propagation conditions experienced by the served ue through the channel state information feedback and/or by acknowledge ( ack ) messaging .conversely , in the multicast transmission the most adverse propagation conditions must be always considered in order to meet specific service requirements . however , the use of the unicast mode is not recommended in scenarios involving the delivery of multimedia contents to large groups of users , as the ones expected for future pmr networks . in this case , the use of the multicast mode provides a more efficient and reliable solution .indeed , the main benefit of using p2 m flows is that the same content can be received by many users at the same time with a bandwidth and power consumption not dependent on the number of simultaneous users .+ the embms service combines the advantages of p2 m transmissions with geographical and temporal flexibility .indeed , the multicast distribution over embms is not extended necessarily on the entire network , but its scope can be limited to a small geographical area , such as a city center or a stadium , as well as large areas or regions . + in particular , the embms framework provides two kinds of transmission schemes , namely single cell ( sc ) and single frequency network ( sfn ) . in sc - embms mode , each enb delivers the mbms data flow independently from the others . on the other hand , in sfn - embms mode multiple enbsare synchronized in order to transmit the same physical signal at the same time .the enbs involved in the distribution of the same multicast / broadcast flow form a _mbms area_. 
one enb can belong to more than one mbms area simultaneously .the sfn - embms transmission mode leads to a significant improvement in terms of cell coverage and spectral efficiency , since that at the ue receiving side the signals coming from multiple enbs are combined resulting in a higher useful power .[ fig.rate_gc ] shows a performance comparison between sc and sfn ( with 4 enbs ) embms transmission schemes in terms of system throughput as a function of the number of active group calls .performance has been derived for two types of service ( video flow @ kbps at the application layer and voice call @ kbps ) and bandwidths .we can see that the throughput increases with the number of active gcs up to a maximum value .then it decreases rapidly due to a saturation of the system .the benefits of using sfn are evident .another advantage of the sfn - embms scheme for pmr networks is that the mbms area can be adapted in order to cover the whole critical area involved in pss operations .the lte standard schedules unicast and multicast / broadcast services in separated subframes . in particular , in each lte radio frame up to 6 out of 10 subframes can be reserved for the delivery of embms services .each embms service is mapped into a mbms bearer service and into a logical _ multicast traffic channel _ ( mtch ) .then the mtchs are multiplexed with a logical multicast control channel ( mcch ) and mapped into a physical multicast channel ( mch ) . from an architectural point of view, the embms framework introduces new modules for the set - up and management of multicast and broadcast contents .in particular , the _ broadcast - multicast service center _ ( bm - sc ) represents the interface between the core network ( epc ) and the _ content provider _ , and it is responsible for the scheduling of mbms services over the lte network .the _ mbms gateway _ ( mbms - gw ) is a logical node that transmits the multicast flow towards the enbs that belongs to the mbms area . finally ,the _ multicell coordination entity _( mce ) is a logical node that ensures that all the enbs of the mbms area use the same radio resource configuration .the embms framework in the lte standard is designed only for downlink distribution of multimedia data flows .hence , it has to be modified in order to support different services such as the gc .to this goal 3gpp proposes to introduce a new logical interface , called _ group communications system enabler _ ( gcse ) , which is in charge of the creation , management and deletion of gcs .the proposals under evaluation are mainly based on the multicast communication approach , the unicast mode is enabled whenever a multicast transmission can not be accomplished . 
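As a rough capacity illustration, the snippet below estimates how many simultaneous group calls fit into the radio resources reserved for eMBMS. Except for the 6-out-of-10 subframe limit and the 256 kbps video flow quoted in the requirements section, all numbers (cell throughput, voice codec rate) are assumptions of this sketch, not figures from the paper.

```python
# Illustrative eMBMS capacity budget (assumed numbers flagged below).
mbsfn_subframes = 6          # at most 6 of the 10 subframes per radio frame (standard limit)
total_subframes = 10
cell_rate_mbps = 20.0        # ASSUMED total downlink cell throughput for the chosen bandwidth
video_gc_kbps = 256          # synchronous video flow rate from the requirements section
voice_gc_kbps = 12           # ASSUMED narrowband voice codec rate

mbms_rate_kbps = cell_rate_mbps * 1000 * mbsfn_subframes / total_subframes
print("eMBMS capacity:             %.0f kbps" % mbms_rate_kbps)
print("max simultaneous video GCs:", int(mbms_rate_kbps // video_gc_kbps))
print("max simultaneous voice GCs:", int(mbms_rate_kbps // voice_gc_kbps))
```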
in both casesall the group members belonging to a gc perform uplink communications through the establishment of unicast bearers .the gc management by the gcse must ensure also the meeting of the pmr network requirements such as short call latencies , service continuity for ues moving from one cell to another , security and mechanisms for priority and pre - emption .assuming the embms framework and gcse as basis , 3gpp indicates three different architectural approaches that could be adopted .the difference among these depends on the network element that manage the gcs .the first defines that the epc handles the gcs , while the second approach specifies a decentralized solution where the management of gcs is in charge of the enb .finally , the third approach is based on ims .in accordance with the indications of the 3gpp , we outline here a promising solution for gcs in the future lte - based pmr networks and support our vision by providing suitable performance evaluations and comparisons .the envisaged solution is based on embms framework and gcse as previously described , and assumes a centralized approach for the call management ( i.e. , performed by the epc ) that leads to benefits in terms of coverage and reliability . in particular, we will discuss herein two possible approaches .the first is fully based on multicast transmissions , named _static embms activation _, where the unicast mode is considered only as backup solution when the multicast transmission is not available , e.g. , due to bad propagation conditions or because the ue moves out of a mbms area .unicast mode is activated only to provide service continuity when the ue detects an increasing packet loss .a second , more advanced solution , named _ dynamic embms activation _, is considered in order to increase the system flexibility and efficiency . _ dynamic embms activation _ manages group communications over both unicast and multicast transmissions .the idea is based on the ability to perform a counting procedure for the gcse , with the aim to transmit downlink media contents through multiple unicast bearers until a certain number of users requires the same gc service , hence to activate a multicast bearer .the advantage is that the gc is never broadcast in a cell with no active group members , and , hence , the spectrum efficiency is maximized .for both solutions , the network architecture is represented in fig .[ fig.network ] , where the gcse acts as an _ application server _( gcse as ) providing the functionality for the management of gcs .in addition , the gcse uses the existing lte standardized interfaces in order to communicate with the epc and selects the proper transmission scheme for the delivery of gcs .finally , multicast bearers are considered for downlink communications whenever possible . on the other hand , uplink trafficis always sent via unicast bearers .gcse provides a multicast to unicast switching mechanism depending on the selected approach .hence , the gcse is connected to both the pgw ( for the distribution of unicast traffic ) and to the bm - sc ( for the multicast traffic ) through the sgi and gc2 interfaces , respectively . in the _static embms activation _solution , we assume that the gcs are always managed by resorting to multicast transmission except when it is not available .this approach exploits the basic structure of the embms , for which it is needed to verify the satisfaction of the pss requirements . 
from fig .[ fig.rate_gc ] it is possible to see that this is true for the expected requirements in terms of number of gcs supported for each type of service .however , the most critical requirement is represented by the gc set - up time that must be lower than 300 ( sec .[ req ] ) .this depends on the time requested to establish the embms bearer and to start - up the call .two options have been analysed : * _ pre - established embms bearer_. the network establishes the embms bearer over preconfigured mbms areas before the gc starts .this implies that the bm - sc pre - establishes in advance all the information related to the gc , such as the _ temporary multicast group identifier _ ( tmgi ) , qos class and the enbs belonging to the mbms area .in particular , the gcse as requests the creation of an embms bearer to the bm - sc by means of the pcrf interface , which is in charge of the exchange of the information related to the embms session .as soon as a ue requests a gc , the downlink traffic is transmitted using one of the pre - established embms bearer .this solution provides a fast total set - up time for the gc service that depends only on the time to start - up the call .3gpp estimates that it is nearly 220 - 250 , which is consistent with the requirement for the pmr voice communication ( see sec .[ req ] ) . *_ dynamic bearer setup at group call start - up_. in this case the embms bearer is established only when needed .it means that in addition to the 220 - 250 for the call set - up , also the delay for the downlink bearer establishment shall be considered for evaluating the user experience .the additional latency is assessed on the order of 115 , taking into account 10 for radio interface delay , 5 for network interface delay ad 5 of request processing delay .this additional latency is not negligible and the total delay experienced by users , for call set - up and downlink bearer set - up , also exceeds 300 .[ bearer_gr ] shows the delay contributions composing the overall end - to - end latency at call start - up , for both options . even if dynamic bearer set - up is more flexible and permits an optimization of the user resources , pre - established bearer is the option selected in our solution , because it permits to satisfy the gc set - up time requirement .however , it could be not sufficient because when the mbms service is mapped on the embms bearer and on the logical mtch channel , it has to be multiplexed with the control information ( mcch ) and then sent on the physical mch to be transmitted . in the current lte standard, the mcch can be updated with a minimum period of 5.12 ( called _ mcch modification period _ ) , it means that the gc should wait also this time before to be transmitted exceeding the set - up delay requirement . hence , we propose also a shorter mcch modification period that should be about 50 ( once every 5 lte frames ) . in this casethe use of unicast and multicast is more flexible and also the gc set - up time requirement can be relaxed . 
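The trade-off between the two options can be summarized with the delay figures quoted above; the worst case for the multicast path is that the group call must also wait one full MCCH modification period before the MTCH mapping becomes effective.

```python
# Group-call set-up budget against the 300 ms PMR requirement,
# using the delay figures quoted in the text (worst-case values).
REQ_MS = 300
call_startup_ms = 250                  # 3GPP estimate: 220-250 ms
bearer_setup_ms = 115                  # extra delay for dynamic eMBMS bearer establishment

scenarios = {
    "pre-established bearer": call_startup_ms,
    "pre-established bearer + MCCH wait (5.12 s period)": call_startup_ms + 5120,
    "pre-established bearer + MCCH wait (50 ms period)": call_startup_ms + 50,
    "dynamic bearer set-up at call start-up": call_startup_ms + bearer_setup_ms,
}
for name, total in scenarios.items():
    verdict = "meets" if total <= REQ_MS else "exceeds"
    print(f"{name:55s} {total:5d} ms  ({verdict} the {REQ_MS} ms target)")
```

Only the pre-established bearer with a shortened MCCH modification period stays within the 300 ms target in the worst case, which is why both modifications are proposed together for the static eMBMS activation.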
indeed in this solution uesare always served through unicast transmissions at the communication start - up , and the use of pre - established unicast bearer shall be considered in order to accomplish the required end - to - end call set - up time .on the other end the activation of embms bearer can be dynamic , since users are already receiving the service and will be able to switch on multicast reception as soon as embms has been completely activated , as a consequence the mcch modification period can be neglected and unmodified . the number of users associated to a gc is provided on a per - cell basis by the pgw by means of the _ user location information _ ( uli ) procedure .consequently , the gcse selects the most efficient transmission scheme .an alternative to the uli procedure is represented by the ue that sends a message to the gcse whenever moves from one enb to another one .this solution implies a higher data transmission over the core network , but at the same time allows the ue to switch to idle mode in order to save battery life delivering high burst of data in short period of time . in the proposed_ dynamic embms activation _solution the gcse exploits the information regarding the ues associated to a gc service , to detect the most efficient solutions .in particular , the total amount of resources requested by the ues if they would be served with unicast mode is evaluated and compared with the resources reserved for the multicast mode . as an example fig .[ tx_eff ] shows the spectral efficiency in terms of throughput of a group member , normalized to the resources allocated to the multimedia gc service when the number of group(s ) members increases .we can see that the multicast solution is independent on the number of group members , the resource usage depends only on the number of subframes allocated to the multicast service .conversely , the unicast spectral efficiency decreases with the number of served ues . withthe proposed _ dynamic embms activation _procedure the system is able to switch from unicast to multicast when the number of ues overcomes a given value represented by the intersection of the curves , hence the spectral efficiency is always the maximum value ( i.e. , the envelope of the unicast and multicast curves ) .the development of the lte technology offers an excellent opportunity to improve both performance and capabilities of the actual pmr communication systems .however , towards this goal , substantial research efforts are needed mainly to support the specific critical features of pss systems currently not available in the lte standard . after a critical review of the state - of - the - art concerning group call standardization proposals in 3gpp , the paper outlined a viable architecture solution and validated its efficiency by providing performance evaluations .k. zhang , x. liang , x. shen , and r. lu , `` exploiting multimedia services in mobile social networks from security and privacy perspectives , '' _ communications magazine , ieee _ , vol .52 , no . 3 , pp .5865 , march 2014 .p. stavroulakis , _ terrestrial trunked radio - tetra : a global security tool _ , ser . 
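The unicast/multicast trade-off behind dynamic eMBMS activation can be illustrated with a toy resource model: the unicast cost grows linearly with the number of group members in a cell, while the eMBMS bearer has a fixed cost. The resource values r_u and r_m below are arbitrary placeholders standing in for the channel- and MCS-dependent quantities plotted in fig. [tx_eff]; the crossover point is the switching threshold used by the GCSE.

```python
import math

def resources_needed(n_members: int, r_unicast: float, r_multicast: float) -> dict:
    """Radio resources for serving one group call in a cell under each scheme.

    r_unicast   : resources needed per group member served with a dedicated bearer.
    r_multicast : resources reserved for a single eMBMS bearer, independent of
                  the group size (hypothetical inputs in this sketch).
    """
    return {"unicast": n_members * r_unicast, "multicast": r_multicast}

def switch_threshold(r_unicast: float, r_multicast: float) -> int:
    """Smallest group size for which multicast is at least as cheap as unicast."""
    return math.ceil(r_multicast / r_unicast)

r_u, r_m = 2.0, 9.0
print("switch to multicast from", switch_threshold(r_u, r_m), "members on")
for n in range(1, 8):
    res = resources_needed(n, r_u, r_m)
    best = min(res, key=res.get)
    print(f"{n} members: unicast={res['unicast']:.0f}, multicast={res['multicast']:.0f} -> {best}")
```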
signals and communication technology.1em plus 0.5em minus 0.4emspringer , 2007 .[ online ] .available : http://books.google.it/books?id=zgb721nzzg4c , `` tr 23.979 3rd generation partnership project ; technical specification group services and system aspects ; 3gpp enablers for open mobile alliance ( oma ) ; push - to - talk over cellular ( poc ) services ; stage 2 ( release 11 ) , '' tech .rep . , 09 2012 .lorenzo carl ( s13 ) received a m.sc . in telecommunications engineering from the university of florence in 2013 , and is currently a ph.d .student the department of information engineering , university of florence .his research interest are mainly focused on radio resource allocation strategy for broadband wireless networks and digital network coding .he is an author of technical papers published in international conferences .romano fantacci ( m84 , sm90 , f05 ) received the phd degree in telecommunications in 1987 from the university of florence where he works as full professor since 1999 .romano has been involved in several research projects and is author of more than 300 papers .he has had an active role in ieee for several years .he was associate editor for several journals and funder area editor for ieee transactions on wireless communications .francesco gei received the master s degree in telecommunications engineering in 2008 from the university of florence , where he currently collaborate as researcher .after initial studies on uwb ( ultra wide band ) communication systems ( 2005 - 2008 ) , francesco has been involved in industrial research project on sdr ( software defined radios ) and mesh networks for military and professional purpose ( 2008 - 2013 ) . in last years , he focuses his activity on pmr ( professional mobile radio ) evolution towards 4 g and portability of pmr services over lte ( long term evolution ) mobile networks .dania marabissi ( m00 , sm13 ) received the phd degree in informatics and telecommunications engineering in 2004 from the university of florence where she works as an assistant professor .she has been involved in several national and european research projects and is author of technical papers published in international journals and conferences .she is principal investigator of the firb project `` held '' .she was associate editor for ieee transaction on vehicular technology .luigia micciullo received her degree in telecommunication engineering from the university of florence in 2007 . in 2007she joined the department of electronics and telecommunications of the university of florence as a research assistant .her research interests include physical layer for wireless communications , ofdm , intra - systems coexistence , interference management , and multi - tiered networks , broadband professional mobile radio evolution .
Current public safety and security communication systems rely on reliable and secure professional mobile radio (PMR) networks that are mainly devoted to providing voice services. However, the evolution trend for PMR networks is towards the provision of new value-added multimedia services, such as video streaming, in order to improve situational awareness and enhance life-saving operations. The challenge is to exploit future commercial broadband networks to deliver voice and multimedia services while satisfying the PMR service requirements. In particular, the most viable solution to date appears to be adapting the Long Term Evolution (LTE) technology to provide IP-based broadband services with the security and reliability typical of PMR networks. This paper outlines different alternatives to achieve this goal and, in particular, proposes a suitable solution for providing multimedia services with PMR standards over commercial LTE networks. Professional mobile radio, LTE, group call, critical communications.
this article is a companion article for . in that article we discussed intermittent dynamics associated with boundary crisis ( homoclinic ) bifurcations in families of unimodal maps . in the present work we treat saddle node bifurcations from the same perspective . by jakobsons celebrated work , the logistic family , ] , see theorem [ existence ] below .in contrast to the misiurewicz bifurcation values , the saddle - node bifurcation value is not a full density point of this parameter set .this is because it is known that the parameter set for which the map has a periodic attractor has positive density at any saddle - node .following the construction of a.c.i.m.s , we continue with a detailed discussion of the intermittency that occurs due to the saddle - node bifurcation .that saddle - node bifurcations can give rise to intermittency is known since , who called intermittency associated with a saddle - node bifurcation type i intermittency .pomeau and manneville studied type i intermittency in connection with the lorenz model . in the model ,simplifying ( hyperbolicity ) assumptions on the dynamics outside a neighborhood of the saddle - node periodic orbit are made . in perhaps the most basic example of type i intermittency , in families of unimodal maps , such simplifications are not justified .this is due to the presence of a critical point .our discussion of absolutely continuous invariant measures allows us to give a rigorous treatment of intermittent time series , where we explain and prove quantitative aspects earlier discussed numerically in , see theorems [ intermittency ] and [ saddlenode ] below .diaz et . studied the unfoldings of saddle - node bifurcations in higher dimensional diffeomorphisms and their results imply in the present context that there exists a subset which has positive density at , such that for each , has an absolutely continuous invariant measure .the measures produced there are supported on small periodic domains on which the map renormalizes to a hnon - like family . in the present contextthis corresponds to parameter values inside periodic windows , for which is renormalizable .however , the invariant measures produced in jakobson s work are supported on the maximal possible interval , ] . while editing the final draft of this paper , we learned of the results of maria joo costa , corresponding to part of her 1998 thesis , on related work in a similar context .she focused on the sink - horseshoe bifurcation in which a sink and a horseshoe collapse , see , and studied families of unimodal interval maps to describe bifurcations .the class of families of interval maps studied in differs from ours ; it consists of unfoldings of unimodal maps \to [ 0,1] ] , with critical point at .suppose that each is at least smooth and that , , and , are w.r.t . .suppose that each has negative schwarzian derivative ( see ) and that .further , suppose that and that the fixed point at 0 is hyperbolic repelling .we say that _ unfolds a ( quadratic ) saddle - node _ if , * there is a -periodic point , with , and , * at . for the sake of clarity, we will assume that with this convention , for the saddle - node point disappears and complicated dynamics may occur . 
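As a purely numerical companion to the analysis that follows, the sketch below reproduces type I intermittency in the most familiar example, the logistic family f_a(x) = a x (1 - x) near the period-3 saddle-node at a_c = 1 + sqrt(8), with a = a_c - gamma playing the role of the unfolding parameter. It estimates (i) the fraction of "laminar" iterates, i.e. the relative frequency of visits to a neighbourhood of the disappeared periodic orbit, and (ii) the mean laminar-phase length, which for type I intermittency is expected to grow roughly like gamma**(-1/2). Tolerances, initial conditions and iteration counts are arbitrary illustrative choices, and nothing here substitutes for the measure-theoretic statements proved in the paper.

```python
import math

A_C = 1.0 + math.sqrt(8.0)          # period-3 saddle-node of the logistic family

def logistic(a, x):
    return a * x * (1.0 - x)

def laminar_statistics(gamma, n_iter=1_000_000, tol=1e-3, burn_in=1_000):
    """Fraction of laminar iterates and mean laminar-phase length at a = A_C - gamma.

    An iterate is called laminar when the third iterate nearly returns the point,
    i.e. the orbit is close to the ghost of the period-3 orbit.
    """
    a = A_C - gamma
    x = 0.35
    laminar, lengths, current = 0, [], 0
    for i in range(n_iter + burn_in):
        x3 = logistic(a, logistic(a, logistic(a, x)))
        if i >= burn_in:
            if abs(x3 - x) < tol:
                laminar += 1
                current += 1
            elif current > 0:
                lengths.append(current)
                current = 0
        x = logistic(a, x)
    mean_len = sum(lengths) / len(lengths) if lengths else float("nan")
    return laminar / n_iter, mean_len

for gamma in (1e-3, 1e-4, 1e-5):
    frac, mean_len = laminar_statistics(gamma)
    print(f"gamma = {gamma:.0e}: laminar fraction ~ {frac:.3f}, "
          f"mean laminar length ~ {mean_len:.0f} "
          f"(gamma**-0.5 = {gamma ** -0.5:.0f})")
```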
for a set of parameter values , let be its lebesgue measure .a unimodal map is called _ renormalizable _ if there is a proper subinterval ] .the set has positive density at : it does not have full density : careful numerical studies of the quadratic family , , predict that between the period doubling limit and , excluding the period three window , less then about 15% of the parameter values correspond to the periodic windows .numerical simulations also suggest that near a saddle - node bifurcation , there is a large set of parameter values for which the a.c.i.m.s are supported on the maximal possible interval .this is illustrated in figure [ fig : quad ] .existence of the parameter set follows from a similar approach as used by benedicks and carleson in their treatment of jakobson s result .below we comment in more detail on the construction of the parameter set .we now first consider intermittency that results from the saddle - node bifurcation and introduce our second main result .intermittent dynamics manifests itself by alternating phases with different characteristics . in one phase ,referred to as the laminar phase , the dynamics appear to be nearly periodic . while in the other phase , the relaminarization phase , the orbit makes large , seemingly chaotic excursions away from the periodic region .these excursions are called chaotic bursts .let be a neighborhood of the orbit , of , not containing a critical point of .let be defined as whenever the limit exists , where is the usual indicator function of the set .that is , is the relative frequency with which the orbit visits ( for those for which the limit exists ) .the following theorem discusses for near 0 .denote by the atomic measure supported on the orbit of , given by we will denote the usual weak convergence of measures by the symbol .[ intermittency ] let and be as in theorem [ existence ] . there exist sets of parameter values with positive density at , so that restricting to , is a constant , , almost everywhere on ] and depends continuously on at .there exists so that different sets lead to different limit values in theorem [ saddlenode ] . in fact , the proof of theorem [ saddlenode ] makes clear that arbitrary large numbers occur as the limit values .this fact , together with theorems [ existence ] and [ saddlenode ] lead us to conjecture that there is a parameter set which has as a lebesgue density point , so that it was shown in that such a limit can not hold without restricting the parameter set .denote by a small neighborhood of on which is invertible .let and denote the usual local stable and local unstable sets for .[ embedding ] let be a family of , , maps unfolding a saddle - node .then there exists a family of flows , , on such that for each .further , in the topology on and in the topology on compact intervals away from the fixed point . the flow is uniquely determined by . proof .the version of this theorem is due to takens .the result follows from part 2 of .the case follows from appendix 3 of .the case and the convergences as follow from theorem iv.2.5 and lemma iv.2.7 of the same . [ il ] this resultis known as the takens embedding theorem . 
a version of it appears in .they proved that one may obtain which depends smoothly on both and , even at the fixed point , if one requires that be smooth , where may be larger than .proposition [ embedding ] allows for our weaker hypotheses and its implications are sufficient for our purposes .choose and let .\ ] ] also , choose a point so that \subset e.\ ] ] for the sake of convenience we restrict to be the interval ,\ ] ] so that and ] . given and ,define to be the unique number for which for and , let be defined by it follows from the smoothness of that for each , the functions are diffeomorphisms from to [ 0,1 ] .we will use as coordinates on . in the following , we will associate ] , be the reparameterization map defined by we have that and .we may invert , for each , to obtain maps \rightarrow [ 0,1] ] , proof .diaz et .al . proved this result under the hypothesis that is , using ilyashenko and li s embedding result ( remark [ il ] ) .a proof of this result under the current hypotheses appears in based on . + we remark that converges as ( see ) , so that as .this fact , together with proposition [ distortion ] imply the next proposition .[ density ] let be a measurable subset of and denote ] where denotes a rigid rotation by angle . proof .this follows from proposition [ embedding ] and the definitions of and as the time variables for the embedding flow for . + in addition to ] for .furthermore , we will denote wherever is defined .similarly , is given by let , .it follows from the assumptions that there is an integer such that . by choosing so that is in the interior of we have that for all small .denote this point by .it then follows that for any , either denote this intersection of with by .note that for a fixed the function will have a jump discontinuity at which the value will jump from one endpoint of to the other .denote by the point at which the discontinuity takes place .there exists a limit as the sequence of maps converges in the topology on compact sets not containing . proof .this easily follows from proposition [ embedding ] and the definition of . + specifically , we will make use of the implication that the derivatives of with respect to converge uniformly as for in compact intervals away from the discontinuity . later, this will allow us to make estimates of derivatives along which are uniform in .in this section , we apply the local analysis near the saddle - node from the previous section to obtain expressions for global return maps. we study the occurrence of misiurewicz maps for small parameter values .we begin with a simple but , useful lemma .[ lem_over ] ] , then for some , is disjoint from .this contradicts the assumption that is once renormalizable . + we will use the freedom in the choice of and take so that is not in . by this choicewe have that will be in the interior of for some . for ,let be the first hit map from to .define \rightarrow { \mbox{\bb r}} ] by . by identifying the endpoints of ] before eventually being trapped in and the function \rightarrow { \mbox{\bb z}}^- ] , we may consider as a map on the circle .[ approx ] given any , for each . 
proof .denote and .let denote the global ( first hit ) map from to by .let denote the local map .note that is not equal to since some points in will return to before hitting ( by landing in ) .however , the two maps do agree when restricted to since those points will in fact hit before returning to .since we are only considering a finite number of iterations from to , it follows from the construction of that converges to on the restricted set .proposition [ localmap ] then implies that , converges to on . in this section we will identify parameter values for which maps the critical point onto some repelling hyperbolic periodic point and thus does not return to a neighborhood of itself , i.e. satisfies the _misiurewicz condition_. there are two ways that hyperbolic periodic points can occur : as periodic points whose orbits pass through , or as continuations of periodic points for ( outside of . in the former case ,the periodic orbits exist for parameter values within subintervals of ] is a hyperbolic set ( see theorem iii.5.1 in ) .if a point is mapped onto then will have a discontinuity at . by lemma [ lem_over ] , for any is a point which is mapped onto by .recall that is the first point in the orbit of that hits .let .[ mis2 ] suppose that is a periodic point contained in the nonwandering set of \setminus \bar{e} ] , such that is mapped onto by iterates of .the point is in for all small , and , approaches as .thus , .let be the point in which is mapped onto by an iterate of .if , then approaches as and ( [ partialmb ] ) implies the result .if then it follows there is an such that for all .applying proposition [ approx ] to and then implies that is mapped by the -return map into arbitrarily close the -th preimage of for large .the result then follows from ( [ partialmb ] ) . + the following corollary provides the parameter values which will be used in the proof of theorem [ existence ] .[ oml ] given any , there exists integers and and a sequence of parameter values ] , we reparameterize and use the parameter ] .for , denote and let be the component of adjacent to and the component of adjacent to .this yields a partition of ( elements of the partition can be empty ) .if the leftmost or rightmost nonempty intervals of this partition do not contain a fundamental domain , , join them to the adjacent intervals .note that if is partitioned into two elements neither of which is a fundamental domain , this leaves a choice in coding the resulting interval after or . this way an interval that covers one or more fundamental domains is partitioned into subintervals which are at least as large as a fundamental domain .given , define note that as long as does not cover a fundamental domain in , equals some fixed iterate of .also note that consists of at most two components .we further remark that , if maps homeomorphically onto , then there is a fixed number of iterates after which contains in its interior .define maps and by we would like to consider images .as for single maps we come across the difficulty that is discontinuous along the backward orbit of in .we consider the set . by a fundamental strip we mean a set , .consider a curve that projects injectively to ] .let be a small positive number .given , write and .we can suppose that and are integers .let subdividing each interval into subintervals , , of equal length provides partitions of and of .given , write ( or ) and let . 
define the _ binding period _ of as .suppose and define the binding period associated with as for a fixed , let be a subinterval of .denote where is the projection .if intersects , , then is called a _ return time _ for .define the binding period associated with a return time of a parameter interval as for each let be the trivial partition of the parameter interval .inductively we will define parameter sets and partitions thereof . in order to define given , we first construct a refinement of .we say that a return of at time is a _ bound return _ if there is a return time of and .let .* chopping times . *: : we say that is a chopping time for if + 1 . intersects in at least three elements of , and 2 . is not a bound return for .* non - chopping times . *: : we say that is a non - chopping time for in all other cases , that is if one or more of the following occurs : + 1 . . 2 . is a bound return of . intersects no more than two elements of the partition of . in case a non - chopping time for , we let .if is a chopping time for we partition as follows .write , so that each fully contains one and at most one element of . if contains an interval of length less then , we include this interval in the adjacent interval of .otherwise an interval of is an element of the partition of .write the resulting partition of as .there is a corresponding partition of , given by .let each element of this partition be an element of .note that an element of partitioning need not be connected , but can consist of several intervals .this may happen if intersects in at least two fundamental domains , for some .let and consider .we speak of a bound , essential or inessential return time or an escape time for in the following situations .+ _ bound return time : _ the interval intersects and is a bound return time for .+ _ inessential return time : _ the interval intersects and is a non - chopping time for that is not a bound return time , but intersects at most two elements of the partition .+ _ essential return time : _ the interval intersects and is a chopping time for . + _ escape time : _ the return time is a chopping time for , but does not intersect . in this casewe call an escape component of . + any interval belongs to a unique nested sequence of intervals where for . if is a chopping time for , then is strictly contained in . chopping times are either escape times or essential return times .the return depth of at time is defined if intersects , as define functions and which associate to the sum of the return depths and the sum of the essential return depths , over the first iterates .define and the sets will be shown to satisfy the stated properties in proposition [ jakobson ] . hereexpansion properties of the maps , and are discussed .we relate expansion properties of these maps . in section [ sec_mane ]we prove a ma type result for , that is , we show that there is expansion along orbits outside a neighborhood of .the relation between expansion along orbits of , and is discussed in the next two lemma s .[ lemma_withouttilde ] if there exist , such that for all , then there are and , so that , for all .write with minimal such nonnegative integer .compute now implies that the piece of orbit is in . since , the term is bounded below in , is bounded above by since any point in is mapped outside of in or fewer iterations .therefore the quantity is bounded from below by a constant .thus , . 
we can let .since there is a minimum number of iterations of needed for an orbit to enter after leaving , and the number of consecutive iterations in is bounded above by , it follows that the fraction is bounded below by a constant .hence , is strictly larger than some number . + similarly one derives the following lemma relating expansion of to expansion of .[ breve - tilde ] if for some , , then there are constants and so that .the converse statement holds as well .given a subinterval ] .if does not vanish on , then .the following lemma is similar to theorem iii.6.2 in .so is its proof .[ lemma_size ] there are constants , , so that for all large enough the following holds .let be a maximal interval with a homeomorphism .then proof .denote .the number of elements in is fixed .there is also a minimum distance between any two points in , uniformly in .thus we may let be a neighborhood of such that for all .similarly , is a finite set .let be a maximal interval on which is a homeomorphism , but not .let be the subintervals of on which is a homeomorphism .the boundary points of are contained in . since and finite , all intervals have lengths which are bounded below uniformly in . applying the koebe principle ( to )one checks that there is a constant with further , since and are each finite it is clear that is not a homeomorphism for some uniformly bounded .the result follows and it is clear that the constants can be chosen uniformly in . + the next proposition discusses expansion properties of , for .[ prop_ga ] for any small enough neighborhood of , there are constants and , so that the following holds for all sufficiently large . if for , then if , then without any condition , proof .let be a neighborhood of , small enough so that for .we claim that there are , so that for all large enough integers , if for , then .it suffices to show that there exists with ( compare the proof of theorem iii.3.3 in ) .assume there exist points ] , then lemma [ lemma_size ] yields that , so that now ( [ largerbound ] ) and ( [ smallerbound ] ) contradict each other , proving the claim .observe that and hence the constants and can be chosen uniformly in .this proves the first estimate .next , suppose .let be the maximal interval containing such that is a homeomorphism on .because the orbit of is finite , the interval extends a positive distance away from to both sides .koebe s principle implies for some which gives the second estimate . finally , not assuming any condition , split the iterates into a part that ends in , one iterate starting in , and a part that stays outside . combining the first two estimates for the first and last part ,proves the last estimate . + proof of proposition [ prop_exp ] .as in the proof of theorem iii.6.4 in . + [ prop_parameterstate ] there are constants so that for all large enough the following holds . if , and , proof .let be the continuation of for near given by ( where is the hyperbolic periodic point from the definition of ) .then proposition [ localmap ] implies that for some constant .writing , the chain rule gives now is arbitrarily small for sufficiently large . by ( [ estder ] ) and the exponential growth of , the statement of the proposition holds for . by the chain rule , it follows that by assumption , for . for each positive integer , there are constants so that if . hence , for .the constant is small if is large .the proposition follows . 
+ the next proposition is also used in section [ sec_measure ] to show the existence of absolutely continuous invariant measures of for .the binding period of is defined by ( [ binding1 ] ) , ( [ binding2 ] ) . [ lem_binding ] there exist constants and such that the following holds .let , and suppose that is an essential return time for with return depth . then the binding period satisfies .we have furthermore , , or , and .the next lemma implies bounded distortion of iterates of on during the binding period .the lemma is an ingredient for the proof of proposition [ lem_binding ] , as in .the proof of the lemma differs from ; distortion estimates for a passage through have to be treated separately .the remainder of the proof of proposition [ lem_binding ] is as in . for , for and for some .write and . by the chain rule , lies outside , then is bounded by a constant and if , we can write . thus . using this , it follows that again for some .hence , we proceed to estimate . by the definition of binding period , .write . if is small enough, we have ( see section [ sec_relate ] ) .we may hence assume that .it follows from this and ( [ etaj ] ) that .further for some , so that for some . combining ( [ etaj ] ) and ( [ zj ] ) shows that is bounded , thus proving the lemma .we remark that the distortion bound is close to 1 if is small .this follows from the observation that is outside a neighborhood of , where it undergoes exponential expansion , for a large number of iterates . the main results from the previous sections are propositions [ prop_exp ] , [ prop_parameterstate ] , [ lem_binding ] .these results have their counterparts in proofs of the work of benedicks and carleson . from this point on , we can follow closely .for completeness we sketch the remaining steps leading to the proof of proposition [ jakobson ] in the next two sections . in the inductive constructions ,the following two propositions are shown to hold .the proofs are as in , relying on propositions [ prop_exp ] , [ prop_parameterstate ] , and [ lem_binding ] .* [ bounded recurrence ] * each point in satisfies .one shows that in fact , , if .that is , a substantial proportion of the returns are chopping times . by assumption ,hence , . as in shows that this bound implies . *[ bounded distortion ] * restricted to a connected component of , the map is a diffeomorphism with uniformly bounded distortion for all where is the last essential or inessential return time of and is the associated binding period . if then the same statement holds for all restricted to any subinterval such that .for the proof of proposition [ jakobson ] , combinatorial properties of are studied .escape times play a central role in this study .the combinatorial properties described next are used at the end of the section to prove proposition [ jakobson ] . to each associated a sequence , of escape times and a corresponding sequence of escaping components with .let for and for .this defines for each .observe that for and , the sets and are either disjoint or coincide .define and let be the natural partition of into sets of the form .observe that and .for , , let denote by the partition restricted to .define by and for .let [ comb ] for some .one can take , which is negative for small enough .the proof divides into two parts .one bounds the cardinality of by and one shows that for any , , and , one has .combining the two statements proves the proposition . for the proofs one can follow . 
+ [ lem_estcale ] proof .the equality follows immediately from the definitions . for the inequality , let , and write proposition [ comb ] ( with the remark from its proof that with ) implies so that assuming that has been chosen small enough and large enough .since and is constant on elements of , we have applying ( [ repeatedly ] ) repeatedly gives + observe chebyshev s inequality and lemma [ lem_estcale ] yield if is large enough .this implies write . for small, there exists so that for all . hence noting that , for all , it follows that a uniform lower bound for exists .this concludes the proof of proposition [ jakobson ] .since goes to as , we have also shown that is a lebesgue density point of . in the previous sectionsit was proved that has bounded recurrence for a set of parameter values with measure bounded from below , uniformly in . by proposition [ density ] , this implies that has positive measure and has positive density at .the following proposition implies that , , has exponential expansion along the orbit of .the proof is as in .[ prop_br > exp ] if is close to and satisfies , then for some , .combining proposition [ prop_br > exp ] with lemma [ lemma_withouttilde ] , gives [ prop_coleck ] for each , there are , , so that thus is a collet - eckmann map if .collet - eckmann maps are known to admit absolutely continuous invariant measures , see theorem v.4.6 in .this concludes the proof of theorem [ existence ] , except for the conclusion that ] .outside the domain of definition of , let .define the return map on ] be the function where is continuous and elsewhere .one shows that the variation of is bounded , uniformly in and .this relies on the negative schwarzian derivative ( see ) and the analysis of the local saddle - node bifurcation in section [ sec_snlocal ] ( see proposition [ embedding ] ) . by induction the variation of bounded .using the uniform bound on the variation of , one bounds the variation of for a density with bounded variation .it follows that has a fixed point with uniformly bounded variation .if denotes the measure whose density is the fixed point of , then is obtained by pushing forward , where is the set on which .the uniform bound for follows from the properties of , as in , see also .that the support of equals all of ] , there exists so that \subset f_{\gamma}^n(i) ] equals \backslash { { \tilde{e}}}_{\gamma } ) ) ] ,let be the number of iterations under required for to enter .then there is so that for all , } \tilde{l}_{v } d\tilde{\nu}_{\gamma}/ \tilde{\nu}_{\gamma}([0,1]\backslash v ) & \le & l.\end{aligned}\ ] ] corresponding statements for the study of the boundary crisis bifurcation are contained in .note that need not contain ; a similar statement where is replaced by is therefore untrue .however , proposition [ prop_outside ] has the following corollary which deals with . paraphrasing , it shows that a typical ( with respect to the invariant measure ) point ] , let be the number of iterations under required for to enter .then there is so that for all , } l_{{{\tilde{e}}}_{\gamma } } d\nu_{\gamma}/ \nu_{\gamma}([0,1]\backslash { { \tilde{e}}}_{\gamma } ) & \le & l.\end{aligned}\ ] ] proof . at , is renormalizable : there is an interval containing in its interior and in its boundary , so that . by slightly extending , we may assume that it contains for small values of .the result follows from proposition [ prop_outside ] by noting that outside , equals . 
+ we will now show how theorem [ intermittency ] is proved by combining propositions [ prop_tildeacip ] , [ prop_outside ] and [ prop_relaminar ] .+ proof of theorem [ intermittency ] .let be as constructed in section [ sec_br ] and take .let be the absolutely continuous invariant measure for , obtained in lemma [ prop_mb ] .the measure is ergodic , so that by birkhoff s ergodic theorem , for any borel set ] .it follows that also hence , for almost all ] is obviously bounded , as .because this holds for any neighborhood , this shows that as . purpose of this section is to indicate a proof of proposition [ prop_outside ] . to introduce the reasoning, we start with an alternative proof for proposition [ prop_relaminar ] , which does not derive it as a corollary to proposition [ prop_outside ] . + proof of proposition [ prop_relaminar ] .we claim that \backslash { { \tilde{e}}}_{\gamma}) ] and \backslash { { \tilde{e}}}_{\gamma} ] . by -invariance of ,the measures of both sets are the same , and , since they add up to at least 1 , are bounded from below by . the normalized measure\backslash { { \tilde{e}}}_{\gamma}) ] outside .therefore , applying proposition [ prop_tildeacip ] , it follows that a uniform bound of the form \backslash { { \tilde{e}}}_{\gamma } ) & \le & k \sqrt { m(a)}\end{aligned}\ ] ] holds for borel sets \backslash { { \tilde{e}}}_{\gamma}) ]a uniform bound of the form \backslash o ) \le k \sqrt { m(a)} ] denote the restriction of to \backslash o ] . by proposition [ prop_exp ] , some iterate of is expanding .observe that the number of branches of is constant for ] as in the above indicated alternative proof of proposition [ relaminar ] .to obtain uniform bounds in , we must investigate properties of conditionally invariant measures for .following , there is a conditionally invariant measure for ; is characterized by \backslash o).\ ] ] the proof of proposition [ prop_outside ] is identical to the above proof of proposition [ prop_relaminar ] , once lemma [ lemma ] is proved . + proof . by proposition [ prop_exp ] ,an iterate of is an expansion .hence , the existence of the conditionally invariant measure follows from . to derive bounds on its density, we must examine the existence proof .the conditionally invariant measure is constructed by finding its density as a fixed point of a perron - frobenius operator .write ) ] with } g dm = 1 ] , has a number of inverse functions .define a perron - frobenius operator on ) ] , the fixed point is the density of the measure .for a lipschitz density , let the regularity of be given by , g'(x)\mbox { is defined } , g(x)>0 \}.\ ] ] we will show that for some independent of .we remark that from one concludes that such a bound holds when is restricted to an interval ] with . then , for ] , there is a component of \backslash o ] for all small .hence , for each $ ] , there exists with .this gives latexmath:[\[\begin{aligned } p^n_{\gamma}g ( x ) & \ge & |\psi_{i_0 } '' ( x)| g\circ \psi_{i_0 } ( x ) , \\ & \ge & , we may take so that is bounded from below . therefore , for some which is independent of and . by ( [ converges ] ) , this proves the lemma .
We discuss one-parameter families of unimodal maps, with negative Schwarzian derivative, unfolding a saddle-node bifurcation. It was previously shown that for a parameter set of positive Lebesgue density at the bifurcation, the maps possess attracting periodic orbits of high period. We show that there is also a parameter set of positive density at the bifurcation for which the maps exhibit absolutely continuous invariant measures which are supported on the largest possible interval. We prove that these measures converge weakly to an atomic measure supported on the orbit of the saddle-node point. Using these measures we analyze the intermittent time series that result from the destruction of the periodic attractor in the saddle-node bifurcation and prove asymptotic formulae for the frequency with which orbits visit the region previously occupied by the periodic attractor.
centrality measures the degree to which network structure contributes to the importance , or status , of a node in a network .over the years many different centrality metrics have been defined .one of the more popular metrics , betweenness centrality , measures the fraction of all shortest paths in a network that pass through a given node .other centrality metrics include those based on random walks and path - based metrics .the simplest path - based metric , degree centrality , measures the number of edges that connect a node to others in a network . according to this measure ,the most important nodes are those that have the most connections . however , a node s centrality depends not only on how many others it is connected to but also on the centralities of those nodes .this measure is captured by the total number of paths linking a node to other nodes in a network .one such metric , -centrality , measures the total number of paths from a node , exponentially attenuated by their length . the attenuation parameter sets the length scale of interactions . unlike other centrality metrics , which do not distinguish between local and global structure, a parameterized centrality metric can differentiate between locally connected nodes , i.e. , nodes that are linked to other nodes which are themselves interconnected , and globally connected nodes that link and mediate communication between poorly connected groups of nodes .studies of human and animal populations suggest that such ` bridges ' or ` brokers ' play a crucial role in the information flow and cohesiveness of the entire group .one difficulty in applying -centrality in network analysis is that its key parameter is bounded by the spectrum of the corresponding adjacency matrix of the network . as a result ,the metric diverges for larger values of this parameter .we address this problem by defining _normalized -centrality_. we show that the new metric avoids the problem of bounded parameters while retaining the desirable characteristics of -centrality , namely its ability to differentiate between local and global structures .in addition to ranking nodes , parameterized centrality can be used to identify communities within a network . in this paper, we generalize modularity maximization - based approach to use normalized -centrality . rather than find regions of the network that have greater than expected number of edges connecting nodes , our approach looks for regions that have greater than expected number of weighted paths connecting nodes .one advantage of this method is that the attenuation parameter can be varied to identify local vs. global communities .normalized -centrality is a powerful tool for network analysis . by differentiating between locally and globally connected nodes , it provides a simple alternative to previous attempts to quantify fine - grained structure of complex networks , such as the motif - based and role - based descriptions .the former measures the relative abundance of subgraphs of a certain type , while latter classifies nodes according to their connectivity within and outside of their community . applying either of these descriptions to real networks is computationally expensive : role - based analysis , for example , requires the network to be decomposed into distinct communities first .normalized -centrality , on the other hand , measures node connectivity at different length scales , allowing us to resolve network structure in a computationally efficient manner. 
we use normalized -centrality to study the structure of several benchmark networks , as well as a real - world online social network .we show that this parameterized centrality metric can identify locally and globally important nodes and communities , leading to a more nuanced understanding of network structure .bonacich defined _ -centrality _ as the total number of attenuated paths between nodes and , with and giving the attenuation factors . in communication and information networkswe are considering , . ] along direct edges ( from ) and indirect edges ( from intermediate nodes ) in the path from to , respectively , and is the length of the longest path . given the adjacency matrix of the network , -centrality matrix is defined as follows : the first term gives the number of paths of length one ( edges ) from to , the second gives the number of paths of length two , etc .although along different edges in a path could in principle be different , for simplicity , we take them all to be equal : . in this case , the series converges to , which holds while , where is the largest characteristic root of .the computation of is difficult , especially for large networks , which include most complex real - world networks . to get around this difficulty, we define _ normalized -centrality _ matrix as : as we show in the appendix , in contrast to -centrality , normalized -centrality is not bounded by .also , we prove that , assuming is strictly greater than any other eigenvalue , exists ; and as is increased , converges to this value and is finite for . just like the original -centrality , normalized -centrality contains a tunable parameter that sets the length scale of interactions . for , ( normalized) -centrality takes into account direct edges only . as increases, becomes a more global measure , taking into account ever larger network components .the expected length of a path , the radius of centrality , is .much of the analysis done by social scientists considered local structure , i.e. , the number and nature of an individual s ties . by focusing on local structure , however , traditional theories fail to take into account the macroscopic structure of the network .many metrics proposed and studied over the years deal with this shortcoming , including pagerank and random walk centrality .these metrics aim to identify nodes that are ` close ' in some sense to other nodes in the network , and are therefore , more important .pagerank , for example , gives the probability that a random walk initiated at node will reach , while random - walk centrality computes the number of times a node will be visited by walks from all pairs of nodes in the network .normalized -centrality , , also measures how ` close ' node is to other nodes in a network and can be used to rank the nodes accordingly .the presence of a tunable parameter turns normalized -centrality into a powerful tool for studying network structure and allows us to seamlessly connect the rankings produced by well - known local and global centrality metrics . for ,normalized -centrality takes into account local interactions that are mediated by direct edges only , and therefore , reduces to _ degree centrality_. as increases and longer range interactions become more important , nodes that are connected by longer paths grow in importance . 
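A small numerical sketch may help fix the definitions. The series reconstructed from the text is C(α) = A + αA² + α²A³ + … = A(I − αA)⁻¹ for α < 1/λ1, so that at α = 0 the row sums reduce to node degrees. The precise normalization used for normalized α-centrality is lost in this extraction; the code below simply rescales the centrality vector to unit sum, which preserves the induced ranking and keeps the α → 1/λ1 limit finite, but it should be read as an assumed stand-in rather than the paper's exact formula. The toy graph and the function names are likewise illustrative.

```python
import numpy as np

def alpha_centrality(A: np.ndarray, alpha: float) -> np.ndarray:
    """Row sums of C(alpha) = A (I - alpha*A)^(-1), valid for alpha < 1/lambda_1."""
    n = A.shape[0]
    C = A @ np.linalg.inv(np.eye(n) - alpha * A)
    return C.sum(axis=1)

def normalized_alpha_centrality(A: np.ndarray, alpha: float) -> np.ndarray:
    """Assumed normalization: rescale the centrality vector to unit sum."""
    c = alpha_centrality(A, alpha)
    return c / c.sum()

# small undirected test graph: a 4-node clique attached to a 3-node path
A = np.zeros((7, 7))
for i, j in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6)]:
    A[i, j] = A[j, i] = 1.0

lam1 = max(np.linalg.eigvalsh(A))
for alpha in (0.0, 0.5 / lam1, 0.99 / lam1):
    c = normalized_alpha_centrality(A, alpha)
    print(f"alpha = {alpha:.3f}: ranking = {np.argsort(-c).tolist()}")
```

At α = 0 the ranking coincides with degree centrality; as α grows toward 1/λ1 it interpolates toward the more global orderings discussed below.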
for ,the rankings produced by normalized -centrality are equivalent to those produced by -centrality .also as shown in the appendix , for symmetric matrices , as , normalized -centrality converges to _ eigenvector centrality _ .the rankings no longer change as increases further , since has reached some fundamental length scale of the network .girvan & newman proposed _ modularity _ as a metric for evaluating community structure of a network .the modularity - optimization class of community detection algorithms finds a network division that maximizes the modularity , which is defined as ( connectivity within community)-(expected connectivity ) , where connectivity is measured by the density of edges .we extend this definition to use normalized -centrality as the measure of network connectivity . according to this definition , in the best division of a network ,there are more weighted paths connecting nodes to others within their own community than to nodes in other communities .modularity can , therefore , be written as : \delta(s_i , s_j)}\ ] ] is given by eq .( [ eq : inf1 ] ) . since factors out of modularity , without loss of generality we take . can be varied from 0 to 1 . is the expected normalized -centrality , and is the index of the community belongs to , with if ; otherwise , .we round the values of to the nearest integer .to compute , we consider a graph , referred to as the null model , which has the same number of nodes and edges as the original graph , but in which the edges are placed at random . to make the derivation below more intuitive , instead of normalized -centrality , we talk of the number of attenuated paths . in normalized -centrality, the number of attenuated paths is scaled by a constant , hence the derivation below holds true .when all the nodes are placed in a single group , then axiomatically , .therefore = 0 ] is defined as : the _normalized -centrality matrix _ is then given by : the normalized -centrality vector is where is a unit vector and is the number of nodes in the network . if is an _ eigenvalue _ of , then invertibility of would lead to the trivial solution of eigenvector ( ) .hence for computation of eigenvalues and eigenvectors , we require that no inverse of should exist , i.e. equation ( [ eq : char_eq ] ) is called the _ characteristic equation _ , solving which gives the _ eigenvalues _ and _ eigenvectors _ of the adjacency matrix .the adjacency matrix can be written as : where is a matrix whose columns are the eigenvectors of , and is a diagonal matrix whose diagonal elements are the eigenvalues of , , arranged according to the ordering of the eigenvectors in . without loss of generality we assume that . the matrices can be determined from the product where is the _ selection matrix _ having zeros everywhere except for element .adjacency matrix raised to the power is then given by using equation ( [ eq : lambda_k ] ) , [ eq : bc_eq ] reduces to where if , and if .for the equations [ eq : bc_eq ] and [ eq : cm ] to hold non - trivially , .1 . : if , ( and ) would be independent of , since 2 . : the sequence of matrices _ converges _ to as if all the sequences for every fixed and converge to . if , _ converges _ to . 3 . and , dominates in the equation ( [ eq : cm ] ) . since centrality score due to -centrality is and that due to normalized -centrality is , from equations [ eq : b_alpha1 ] and [ eq : b_alpha ] , the induced ordering of nodes due to -centrality ( ) would be equal to induced ordering of nodes due to normalized -centrality ( ) . 
under the assumption that is strictly greater than any eigenvalue , as , equation ( [ eq : b_alpha ] ) reduces to is because all other eigenvectors shrink in importance as .therefore , as , we have under the assumption that is strictly greater than any other eigenvalue , dominates in the equation ( [ eq : cm ] ) , [ eq : maxa ] . for symmetric matrices therefore equation [ eq : z ] reduces to where is the column of representing the eigenvector corresponding to . hence , in case of symmetric matrices : where and . since corresponds to the eigenvector centrality vector ,hence for symmetric matrices , the induced ordering of nodes given by eigenvector centrality is equivalent to the induced ordering of nodes given by normalized centrality .
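The limiting behaviour derived above can be checked directly: for a symmetric adjacency matrix the (unit-sum) normalized α-centrality vector should approach the principal eigenvector, i.e. eigenvector centrality, as α → 1/λ1. The snippet below repeats the toy graph and the unit-sum normalization assumed in the earlier sketch and reports the L1 distance to the eigenvector-centrality vector as α approaches the spectral bound.

```python
import numpy as np

# same toy graph as in the previous sketch
A = np.zeros((7, 7))
for i, j in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6)]:
    A[i, j] = A[j, i] = 1.0

eigvals, eigvecs = np.linalg.eigh(A)
lam1 = eigvals.max()
v1 = np.abs(eigvecs[:, np.argmax(eigvals)])
v1 /= v1.sum()                                   # eigenvector centrality, unit sum

for frac in (0.5, 0.9, 0.99, 0.999):
    alpha = frac / lam1
    c = A @ np.linalg.inv(np.eye(7) - alpha * A) @ np.ones(7)
    c /= c.sum()                                 # assumed unit-sum normalization
    print(f"alpha = {frac:.3f}/lambda_1: ||c - v1||_1 = {np.abs(c - v1).sum():.4f}")
```

The distance shrinks as α approaches 1/λ1, illustrating that the induced ordering converges to the eigenvector-centrality ordering for symmetric matrices.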
A variety of metrics have been proposed to measure the relative importance of nodes in a network. One of these, α-centrality, measures the number of attenuated paths that exist between nodes. We introduce a normalized version of this metric and use it to study network structure, specifically to rank nodes and to find the community structure of the network. To this end, we extend the modularity-maximization method for community detection to use this metric as the measure of node connectivity. Normalized α-centrality is a powerful tool for network analysis, since it contains a tunable parameter that sets the length scale of interactions. Studying how rankings and discovered communities change as this parameter is varied allows us to identify locally and globally important nodes and structures. We apply the proposed method to several benchmark networks and show that it leads to better insight into network structure than alternative methods.
halting an epidemic outbreak in its early stages requires a detailed understanding of the progression of the number of new infections ( incidence ) .current mathematical models predict that the incidence grows exponentially during the initial phase of an epidemic outbreak . within this exponential growth scenarioinfectious diseases are characterized by the average reproductive number , giving the number of secondary infections generated by a primary case , and the average generation time , giving the average time elapse between the infection of a primary case and its secondary cases . in turn ,vaccination strategies are designed in order to modify the reproductive number and the generation time .i have recently shown , however , that this picture dramatically changes when the graph underlying the spreading dynamics is characterized by a power law degree distribution , where the degree of a node is defined as the number of its connections .the significant abundance of high degree nodes ( hubs ) carry as a consequence that most nodes are infected in a time scale of the order of the disease generation time .furthermore , the initial incidence growth is no longer exponential but it follows a power law growth , where is the characteristic distance between nodes on the graph . yet, these predictions are limited to uncorrelated graphs and the susceptible - infected ( si ) model . in this worki extend the theory of age - dependent branching processes to consider the topological properties of real networks .first , i generalize my previous study to include degree correlations .this is a fundamental advance since real networks are characterized by degree correlations that may significantly affect the system s behavior .second , i consider the susceptible - infected - removed ( sir ) model that provides a more realistic description of real epidemic outbreaks , allowing us to obtain conclusions about the impact of patient isolation and immunization strategies on the final outbreak size. 
finally , i survey our current knowledge about different networks underlying the spreading of infectious diseases and computer malwares and discuss the impact of their topology on the spreading dynamics .consider a population of susceptible agents ( humans , computers , etc ) and an infectious disease ( human disease , computer malware , etc ) spreading among them .the potential disease transmission channels are represented by an undirected graph , where nodes represent susceptible agents and edges represent disease transmission channels .for example , when analyzing the spreading of sexually transmitted diseases the relevant graph is the web of sexual contacts , where nodes represent sexually active individuals and edges represent sexual relationships .the degree of a node is the number of edges connecting this node to other nodes ( neighbors ) in the graph .given the finite size of the population there is a maximum degree , where is at most .i denote by the probability distribution that a node has degree .the results obtained in this work are valid for arbitrary degree distributions .nevertheless , recent studies have shown that several real networks are characterized by the power law degree distribution with .therefore , i focus the discussion on this particular case .real networks are characterized by degree correlations between connected nodes as well .networks representing technological and biological systems exhibit disassortative ( negative ) correlations with a tendency to have connections between nodes with dissimilar degrees .in contrast , social networks are characterized by assortative ( positive ) degree correlations with a tendency to have connections among nodes with similar degrees . to characterize the degree correlations i consider the probability distribution that a neighbor of a node with degree has degree .it is important to note that the probability distributions and are related to each other by the detailed balance condition although contains all the information necessary to characterize the degree correlations it is difficult to analyze . a more intuitive measure which often appears in the analytical calculations is the average neighbors excess degree empirical data indicates that where is obtained from the detailed balance condition ( [ db ] ) , resulting in when the degree correlations are disassortative the nearest neighbors of a low / high degree node tend to have larger / smaller degree . in this case decreases with increasing .in contrast , when the degree correlations are assortative the nearest neighbors of a low / high degree node tend to have proportional degrees . in this case increases with increasing . therefore ,disassortative and assortative correlations are characterized by and , respectively .real networks also exhibit the small - world property , meaning that the average distance between two nodes in the graph is small or it grows at most as .for instance , social experiments such as the kevin bacon and erds numbers or the milgram experiment reveals that social actors are separated by a small number of acquaintances . this property is enhanced on graphs with a power law degree distribution ( [ pkpl ] ) with . in this casethe average distance between two nodes grows as , receiving the name of ultra small - world .given the graph underlying the spreading of an infectious disease , let us consider an epidemic outbreak starting from a single node ( the root , patient zero , or index case ) . 
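The degree-correlation measures introduced above are straightforward to estimate from an edge list. The sketch below computes the average degree of the neighbours of nodes of degree k; the average neighbours' excess degree used in the text is the same quantity minus one, so the increasing or decreasing trend that distinguishes assortative from disassortative mixing is unchanged. The toy graph is only meant to exhibit a disassortative trend.

```python
from collections import defaultdict

def average_neighbor_degree(edges):
    """Average neighbour degree as a function of node degree k.

    A curve decreasing in k indicates disassortative mixing, an increasing
    curve assortative mixing, and a flat curve the absence of correlations.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {u: len(nbrs) for u, nbrs in adj.items()}
    by_k = defaultdict(list)
    for u, nbrs in adj.items():
        by_k[degree[u]].append(sum(degree[v] for v in nbrs) / degree[u])
    return {k: sum(vals) / len(vals) for k, vals in sorted(by_k.items())}

# toy hub-and-spokes graph with one triangle: the hub has low-degree neighbours,
# the leaves have a high-degree neighbour -> disassortative trend
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 2)]
for k, knn in average_neighbor_degree(edges).items():
    print(f"k = {k}:  <k_nn> = {knn:.2f}")
```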
in the worst case scenario the disease propagates to all the nodes that could be reached from the root using the graph connections .thus , the outbreak is represented by a spanning or causal tree from the root to all reachable nodes .the generation of a node on the tree corresponds with the topological or hopping distance from the root .this picture motivates the introduction of the following branching process : annealed spanning tree ( ast ) with degree correlations consider a graph with degree probability distribution and average degree , neighbors degree distribution given a node with degree , detailed balance condition ( [ db ] ) , and average distance between nodes .the annealed spanning tree ( ast ) associated with this graph is the branching process satisfying the following properties : 1 .the process starts from a single node , the root , at generation .the root generates sons with probability distribution .2 . each son at generation generates other sons with probability distribution , given its parent node has degree .3 . a son at generation does not generate new sons .[ acausaltree ] the term annealed means that we are not analyzing the true ( quenched ) spanning tree on the graph but a branching process with similar statistical properties . from the mathematical point of viewthe ast is a generalization of the galton - watson branching process to the case where ( i ) the reproductive number of a node depends on the reproductive number of its ancestor and ( ii ) the process is truncated at generation .this mathematical construction has been previously introduced to analyze the percolation properties of graphs with degree correlations .the sharp truncation of the branching process at generation is an approximation . in the original graphthere are some nodes beyond the average distance between nodes and their average degree decreases with increasing generation .therefore , a more realistic description is obtained defining generation dependent and truncating the branching process when the number of generations equals the graph diameter . yet, an analytical treatment of this more realistic model is either unfeasible or results into equations that most be solved numerically , questioning its advantage with respect to direct simulations on the original graph . to allow for an analytical understandingi truncate the branching process at generation , where represents the average distance between nodes in the original graph .furthermore , i assume that is the same for all generations . 
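The AST of definition [acausaltree] can be sampled directly. The sketch below does so for the uncorrelated case, where a neighbour's degree is drawn from q(k) proportional to k p(k) and every internal node produces k - 1 sons; handling degree correlations would only require replacing q with the conditional distribution p(k'|k). The toy degree distribution, the number of runs and the truncation depth D are arbitrary illustrative choices, not quantities taken from the paper.

```python
import random

def sample_from(pmf):
    """Draw one value from a discrete probability mass function given as a dict."""
    r, acc = random.random(), 0.0
    for k, p in pmf.items():
        acc += p
        if r < acc:
            return k
    return k                                   # guard against rounding error

def ast_generation_sizes(pk, D, runs=5_000):
    """Average number of nodes per generation of the (uncorrelated) AST."""
    mean_k = sum(k * p for k, p in pk.items())
    qk = {k: k * p / mean_k for k, p in pk.items()}   # neighbour degree distribution
    totals = [0.0] * (D + 1)
    for _ in range(runs):
        totals[0] += 1                         # the root
        offspring = sample_from(pk)            # root of degree k generates k sons
        for d in range(1, D + 1):
            totals[d] += offspring
            if d == D or offspring == 0:
                break                          # sons at generation D stop branching
            # each son generates (degree - 1) new sons, degree drawn from q(k)
            offspring = sum(sample_from(qk) - 1 for _ in range(offspring))
    return [t / runs for t in totals]

pk = {1: 0.4, 2: 0.3, 3: 0.2, 10: 0.1}         # toy degree distribution with hubs
for d, size in enumerate(ast_generation_sizes(pk, D=4)):
    print(f"generation {d}: average number of nodes = {size:.1f}")
```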
at this point it is worth noticing that all results derived below are exact for the ast but an approximation for the original graph. the ast describes the case where all neighbors of an infected node are infected, and at the same time. more generally a node infects a fraction of its neighbors and these infections take place at variable times. the susceptible-infected-removed (sir) model is an appropriate framework to consider the timing of the infection events. the time scales for the transitions from susceptible to infected and from infected to removed are characterized by the distribution functions of the infection and removal times, respectively. for example, is the probability that the infection time is less than or equal to. consider an infected node and a susceptible neighbor. the probability that is infected by time given was infected at time zero is the combination of two factors. first, the infection time should be smaller than, and second, the removal time of the ancestor should be larger than the infection time. more precisely, this is expressed by equation ( [ bt ] ). from ( [ bt ] ) i obtain the probability that gets infected no matter when, and the distribution function of the generation times. in the original kermack-mckendrick formulation of the sir model the disease spreads at a rate from infected to susceptible nodes and infected nodes are removed at rate. in this case the infection and removal times are exponentially distributed, resulting in ( [ rsir ] )-( [ gtauexp ] ). some of the results obtained in this work are valid for any generation time distribution. we focus, however, on the sir model with constant rate of infection and removal ( [ rsir ] )-( [ gtauexp ] ). at this point we can extend the ast definition to account for the variable infection times: age-dependent ast with degree correlations. the age-dependent ast is an ast where nodes can be in two states, susceptible or infected, and 1. an infected node (primary case) infects each of its neighbors (secondary cases) with probability. 2. the generation times, the times elapsed from the infection of a primary case to the infection of a secondary case, are independent random variables with probability distribution. [ adast ] the age-dependent ast is a generalization of the bellman-harris and crum-mode-jagers age-dependent branching processes. the key new elements are the degree correlations and the truncation at a maximum generation, allowing us to consider the topological properties of real networks. let be the average number of nodes that are infected between time and, given that patient zero (the root) was infected at time zero. this quantity is known in the epidemiology literature as the incidence. consider an age-dependent ast and a constant infection and removal rate ( [ rsir ] )-( [ gtauexp ] ). making use of the tree structure i obtain expression ( [ ntexp ] ) (appendix [ iterapproach ]), where is the average number of nodes at generation, satisfying the normalization condition ( [ no ] ). the interpretation of ( [ ntexp ] ) is the following. if we count the time in units of one then on average new nodes are found at each generation. since the infection times are variable, however, nodes at the same generation may be infected at different times. this contribution is taken into account by the factor between brackets in ( [ ntexp ] ), giving the probability density of the sum of generation times. independent of the particular dependence of, from ( [ ntexp ] ) it follows that the incidence decays exponentially for long times with a decay time. this result is a consequence
of the finite size of the population, i.e. sooner or later most of the nodes are infected and the number of new infections decays. in contrast, the factor remaining after excluding the exponential decay is an increasing function of time and it dominates the spreading dynamics at short and intermediate times. i obtain the following result determining the speed of the initial growth: consider the normalized incidence. if there is some integer and real numbers and such that for all when, then the asymptotic form ( [ rhot ] ) holds, where the characteristic time is defined below. [ nooo ] the symbol indicates that ( [ rhot ] ) is valid asymptotically when, and it represents correction terms of the order of. the demonstration of this result is straightforward. from ( [ cond ] ) it follows that for all the average number of nodes at generation ( [ zd ] ) is of the order of. therefore, in the limit the sums in ( [ ntexp ] ) and ( [ no ] ) are dominated by the term and corrections are given by the ratio between the and terms. [ fig. [ examples ] : plane showing the regions where theorem [ nooo ] is valid (shadowed region) for the case of a power law degree distribution ( [ pkpl ] ) and degree correlations ( [ qk ] ); the text inserts indicate the exponent in ( [ cond ] ). ] the initial dynamics is characterized by a power law growth with an exponent determined by the average distance. the characteristic time marks the time scale when this polynomial growth starts to be manifested. this time is particularly small for graphs with a large maximum degree and that satisfy the small-world property, i.e. is small. for instance, let us consider a power law degree distribution ( [ pkpl ] ) with and degree correlations ( [ qk ] ). the values of and for which the condition ( [ cond ] ) is satisfied are given in fig. [ examples ], together with the exponent. disassortative degree correlations ( ) may invalidate the condition ( [ cond ] ), indicating that strong disassortative correlations may lead to deviations from the theorem [ nooo ] prediction. this observation is in agreement with a previous study focusing on the epidemic threshold. in contrast, for assortative degree correlations ( ) the condition ( [ cond ] ) is satisfied for all. in other words, assortative correlations enhance the degree fluctuations, extending the validity of theorem [ nooo ] to the region. focusing on the final size of the outbreak i obtain the following corollary: consider the average total number of infected nodes. if the conditions of theorem [ nooo ] are satisfied then the asymptotic form ( [ ni ] ) holds. [ coro ]
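before turning to the consequences of corollary [ coro ], the initial power law growth predicted by theorem [ nooo ] can be illustrated numerically: for constant rates the sum of d generation times is gamma distributed, so the incidence is a superposition of gamma densities weighted by the mean number of infections produced at each generation. the weighting z_d * r**d used below (with r the transmission probability) is an assumption about the prefactor stripped from ( [ ntexp ] ), and the z_d values are toy numbers.

```python
import math

def incidence(t, z, lam, mu):
    """Incidence N(t) as a superposition of generation contributions.

    z[d-1] is the mean number of nodes at generation d; with constant rates
    the sum of d generation times follows a Gamma(d, lam+mu) distribution,
    so generation d contributes a gamma density weighted by the mean number
    of infections it produces (here z_d * r**d, with r = lam/(lam+mu) the
    transmission probability -- an assumed form of the stripped prefactor)."""
    rate = lam + mu
    r = lam / (lam + mu)
    total = 0.0
    for d, zd in enumerate(z, start=1):
        gamma_pdf = rate**d * t**(d - 1) * math.exp(-rate * t) / math.factorial(d - 1)
        total += zd * r**d * gamma_pdf
    return total

# beyond a characteristic time the last generation dominates and N(t)
# grows roughly as t**(D-1) before the exponential cut-off sets in
z = [3, 12, 60, 400]          # toy z_d values, D = 4
for t in (0.1, 0.2, 0.4, 0.8):
    print(t, incidence(t, z, lam=0.5, mu=0.25))
```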
from this corollary it follows that by increasing the rate of node removal, because of patient isolation or immunization, we obtain only a gradual decrease of the final outbreak size. this implies that the concept of an epidemic threshold loses its meaning, since the outbreak size remains proportional to the population size for all removal rates. this conclusion is in agreement with previous studies for the case ( ) and ( ). the above corollary extends these studies to the region, demonstrating that when there is no epidemic threshold for any value of. theorem [ nooo ] proposes a new law of spreading dynamics characterized by an initial power law growth. in essence the power law growth is a consequence of the small-world property and the divergence of the average reproductive number. its origin is better understood by analyzing the contribution of nodes at a distance from the root. the distribution of infection times of nodes at generation is given by the distribution of the sum of generation times. for the case of a constant infection rate this distribution is a gamma distribution, which is characterized by an initial power law with exponent. this is the standard result for stochastic processes defined by a sequence of steps happening at a constant rate. the total incidence is then obtained by superimposing the contribution of each generation, weighted by the average number of nodes at that generation. since most nodes are found at generation, the contribution from that generation dominates the incidence progression, resulting in a power law growth with exponent. the small-world property simply implies that is small and the resulting power law growth can be distinguished from an exponential growth. the validity of this regime is restricted to time scales that are large enough such that an appreciable number of nodes at generation are infected, and it is followed by an exponential decay after most nodes at that generation are infected. to understand the relevance of this spreading law for real epidemic outbreaks, in the following i analyze the validity of the conditions of theorem [ nooo ] for real networks underlying the spreading of human infectious diseases and computer malwares. _ sexually transmitted diseases: _ sexual contacts are a dominant transmission mechanism of the hiv virus causing aids. there are several reports indicating that the web of sexual contacts is characterized by a power law degree distribution. the current debate is whether the exponent is smaller or larger than three. in either case, it is known that social networks are characterized by assortative degree correlations, which extends the validity of theorem [ nooo ] to ( see fig. [ examples ] ). there is also empirical evidence indicating that the number of aids infections grows as a power law in time for several populations. when this empirical evidence is put together with that for sexual networks we obtain a strong indication that theorem [ nooo ] provides the right explanation for the observed power law growth. _ airborne diseases: _ airborne diseases require contact or proximity between two individuals for their transmission.
in this case the graph edges represent potential contact or proximity interactions among humans and the degree of an individual is given by the number of people with whom he / she interacts within a certain period of time. recent simulations of the portland urban dynamics show that the number of people an individual contacts within a day follows a wide distribution, up to about 10,000 contacts. a report for the 2002 - 2003 sars epidemic shows a wide distribution as well, in this case for the number of secondary cases generated by a primary sars infection case. although these data are not sufficient to draw a definitive conclusion, they provide a clear indication that the number of proximity contacts a human undergoes within a day is widely distributed. this observation, together with the high degree of assortativity of social networks, opens the possibility that the spread of airborne diseases within a city is described by theorem [ nooo ]. _ computer malwares: _ email worms and other computer malwares such as computer viruses and hoaxes spread through email communications. the email network is actually directed, i.e. the observation that user a has user b in his / her address book does not imply the opposite. this is an important distinction since the detailed balance condition ( [ db ] ) is valid for graphs with undirected edges. there is, however, a significant reciprocity, meaning that if user a has user b in his / her address book then with high probability the opposite takes place as well. thus, in a first approximation we can represent email connections by undirected links or edges and, in such a case, the detailed balance condition ( [ db ] ) holds. recent studies of the email network structure within university environments indicate that they are characterized by a power law degree distribution with. therefore, the spreading of computer malwares represents another scenario for the application of theorem [ nooo ]. further research is required to determine the influence of the lack of reciprocity among some email users. in conclusion, theorem [ nooo ] characterizes the spreading dynamics on complex networks with wide connectivity fluctuations. its corollary [ coro ] determines the region of connectivity fluctuations and degree correlations where there is no epidemic threshold. the empirical data indicate that the theorem conditions are satisfied for several networks underlying the spreading of infectious diseases among humans and computer malwares among computers. therefore, i predict that theorem [ nooo ] is a spreading law of modern epidemic outbreaks. let be the probability distribution of the number of infected nodes at time (including those that have recovered), , on a branch of the ast [ acausaltree ], given that the branch is rooted at a node at generation and it has degree. in particular is the probability distribution of the total number of infected nodes at time, given that patient zero (the root of the tree) became infected at time zero and it has degree. based on the tree structure we can develop an iterative approach to compute recursively. let be a node at generation with degree and let us denote by its neighbors on the next generation, where if, if, and if. then, since node and its neighbors lie on a tree, are uncorrelated random variables. furthermore, all nodes at generation have the same statistical properties, i.e.
are identically distributed random variables .let be the probability distribution of , given node is at generation and its ancestor has degree .with probability the node is not infected at any time and with probability it is not yet infected at time given it will be infected at some later time . thus on the other hand , with probability node will be infected at some time , with distribution function , and continue the spreading dynamics to subsequent generations .once node is infected , the number of infected nodes in the branch rooted at node is a random variable with probability distribution , given node has degree .
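as a complement to the recursive computation of these distributions, the mean incidence can also be estimated by direct monte carlo simulation of the age-dependent ast. the sketch below assumes constant infection and removal rates (so that an edge transmits with probability lam / (lam + mu)) and user-supplied degree samplers for the root and conditional degree distributions; all names are illustrative and not taken from the original paper.

```python
import random

def age_dependent_ast(sample_root_degree, sample_son_degree,
                      max_generation, lam, mu, rng=random):
    """Simulate one realization of the age-dependent AST with constant
    infection rate lam and removal rate mu.  Returns the infection times of
    all infected nodes (the root is infected at time zero); averaging
    histograms of these times over many runs estimates the incidence N(t)."""
    times = [0.0]
    # stack entries: (generation, degree, infection time) of an infected node
    stack = [(0, sample_root_degree(), 0.0)]
    while stack:
        gen, k, t_inf = stack.pop()
        if gen == max_generation:
            continue                            # truncation: no sons beyond D
        t_rec = t_inf + rng.expovariate(mu)     # removal time of this node
        n_sons = k if gen == 0 else k - 1       # one edge leads to the parent
        for _ in range(max(n_sons, 0)):
            t_try = t_inf + rng.expovariate(lam)
            if t_try < t_rec:                   # transmission succeeds
                k_son = sample_son_degree(k)
                times.append(t_try)
                stack.append((gen + 1, k_son, t_try))
    return sorted(times)

# toy usage: any degree samplers will do, e.g. uncorrelated toy degrees
deg = lambda *_: random.choice([2, 3, 4, 8])
times = age_dependent_ast(deg, deg, max_generation=4, lam=0.6, mu=0.3)
print(len(times), "nodes infected; first infection times:", times[:5])
```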
infectious diseases and computer malwares spread among humans and computers through the network of contacts among them . these networks are characterized by wide connectivity fluctuations , connectivity correlations and the small - world property . in a previous work [ a. vazquez , phys . rev . lett . 96 , 038702 ( 2006 ) ] i have shown that the connectivity fluctuations together with the small - world property lead to a novel spreading law , characterized by an initial power law growth with an exponent determined by the average node distance on the network . here i extend these results to consider the influence of connectivity correlations which are generally observed in real networks . i show that assortative and disassortative connectivity correlations enhance and diminish , respectively , the range of validity of this spreading law . as a corollary i obtain the region of connectivity fluctuations and degree correlations characterized by the absence of an epidemic threshold . these results are relevant for the spreading of infectious diseases , rumors , and information among humans and the spreading of computer viruses , email worms and hoaxes among computer users .
binary neutron star mergers ( nsms ) are unique laboratories for the study of astrophysics .the merger involves many elements of the theory of relativistic astrophysics , gravitational wave astronomy , and nuclear astrophysics .furthermore , nsms are thought to be a possible source of detectable gravitational radiation , the _r_-process elements , and gamma ray bursts . in order to develop accurate models of nsms onemust numerically solve the equations describing gas dynamics and the gravitational field arising from the matter .however , the need for accurate numerical models of the inspiraling binary system presents some unique challenges that we address in this paper . in realistic astrophysical situationsthe merger of binary neutron star systems is driven by gravitational radiation losses ( misner et al . 1973 ) .this loss of energy will lead to the inspiral and eventual coalescence of the binary system .the prediction of energy loss by gravitational radiation was confirmed by the observation of psr1913 + 16 , a binary neutron star system ( hulse & taylor 1975 ) . the observed rate of decrease of the orbital period of this system is in good agreement with predictions made by general relativity ( taylor & weisberg 1989 ) .coalescing binary systems are expected to emit tremendous amounts of energy in the form of gravitational waves during the final stages of coalescence , and the gravitational waves produced in these events are expected to be observed by gravitational wave detectors currently under construction .gravitational wave interferometers such as ligo ( abramovici et al .1992 ) and virgo ( bradaschia et al . 1990 ) will soon be operational and will present the first opportunities to study nsms via gravitational waves .theoretical templates of the expected signal are required to extract signal information from the noisy background ( cutler et al .post - newtonian methods ( lincoln & will 1990 ) may be adequate for the prediction of waveforms for the early stage of the inspiral . however , the prediction of waveforms in the later stages of the merger , when tidal effects and neutron star structure become important , requires a full three - dimensional numerical solution of the equations describing the motion of matter and the gravitational field .in addition to gravitational wave astronomy , nsms are of interest for nuclear astrophysical reasons .nsms may yield information about the structure of neutron stars .since the equation of state ( eos ) of neutron star matter is not well constrained , the observation of gravitational wave signals from nsms may provide constraints that could provide information about the dynamics of the merger and in turn the eos of dense matter .additionally , the material ejected during the coalescence of binary neutron stars may be a site of _ r_-process nucleosynthesis ( lattimer et al . 1977 ) .the _ r_-process , which is thought to be responsible for the production of about 50% of elements heavier than iron in the universe , occurs when the capture rate of neutrons bombarding nuclei exceeds the beta decay rate . 
in the material ejected during nsms thereare expected to be regions where the _ r_-process occurs robustly ( meyer 1989 ) .simulations of nsms can allow us to study both the mass ejection and nucleosynthesis that occurs in the ejected material .nsms are a suggested source for the mysterious gamma - ray bursts observed by cgro and other high - energy observational missions .nsms are thought to release energy on the order of their gravitational binding energy erg , which may be larger than estimated gamma - ray burst energies ( quashnock 1996,rees 1997 ) to erg ( woods & loeb 1994 ) .a popular model for bursts at cosmological distances is the relativistic fire - ball ( paczyski 1986 , goodman 1986 , shemi & piran 1990 , paczyski 1990 ) .nsms are likely candidates for the source of relativistic fire - balls , but the mechanism by which the fire - ball develops has yet to be determined .observations in the spring of 1997 of optical and x - ray counterparts to grb 970228 ( costa , et al .1997 , guarnierni 1997 , piro et al . 1997 ) and grb 970508 ( bond 1997 , djorgovski et al .1997 , metzger et al .1997 ) , particularly the measurement of a mg ii absorption line at redshift ( metzger et al . 1997 ) , suggest that the bursts do indeed have a cosmological origin .simulations of nsms can test the consistency of the energetics and time scales with the estimated energies and observed time scales of the observed bursts .one of the major difficulties in carrying out numerical simulations of binary neutron star mergers is developing a numerical algorithm that does not introduce unphysical dynamical effects into the problem . in order to avoid spurious inspiral , boththe total energy and angular momentum must be conserved to sufficient accuracy . in the absence of physical instabilities or dissipative effects , such as gravitational radiation losses, the numerical methods should be capable of maintaining a binary neutron star system in a stable orbit .additionally , when physical instabilities capable of causing coalescence are present , the algorithms must continue to conserve all important physical quantities . without this capabilityit is impossible to develop quantitatively accurate models in situations where radiation losses are present . in this paperwe consider several variations on the popular zeus hydrodynamic algorithm ( stone & norman 1992 ) as applied to nsms .the comparisons that will appear later in this paper examine the numerical effects of the choice of rotating versus inertial frames as well as the choice of several possible schemes for the coupling of gravity to the hydrodynamics .this paper is intended to lay the numerical groundwork for post - newtonian and relativistic studies that will follow in later papers . while realistic models of nsms are clearly relativistic or , at a bare minimum , post - newtonian ( pn ) in nature , the examination of newtonian models of orbiting binaries neutron stars is still of considerable value . many of the lessons learned from newtonian models will provide guidance for pn or gr modeling efforts . indeed ,if a numerical self - gravitating hydrodynamics algorithm is incapable of maintaining stable orbits for binary star systems in the newtonian limit , then it is unlikely to be useful for more complex , and realistic , simulations of nsms . 
in this paperwe concentrate on purely newtonian models of orbiting binary neutron stars in both the stable and unstable regimes .we will consider the evolution of initial configurations that are tidally unstable as well as initial configurations involving both spherical and `` relaxed '' neutron stars .post - newtonian models that make use of our numerical techniques will be considered in a subsequent paper .the newtonian and pn simulations that have been carried out to date can be placed into two categories : those that have employed eulerian hydrodynamic methods ( see bowers & wilson 1991 for a discussion of eulerian methods ) and those that have employed smoothed particle hydrodynamics ( sph ) methods ( see gingold & monaghan 1977 or hernquist & katz 1989 for a discussion of sph methods ) , an inherently lagrangean technique .additionally , these simulations have been carried out in both inertial , i.e. laboratory , frames and in non - inertial , rotating frames .these simulations have also utilized a range of techniques for calculating the gravitational potential .finally , these simulations have employed both spherical stars and equilibrium binary configurations as initial data .these choices can play a critical role in determining the outcome of the simulations .for this reason , in this section we briefly describe existing work on newtonian and pn binary neutron star systems with a focus on the numerical techniques and the initial configurations that have been used .we first consider the eulerian calculations followed by the sph models .the earliest eulerian models of binary neutron star systems were carried out by oohara and nakamura ( 1989 ) .this work and subsequent papers ( nakamura & oohara 1989 , oohara & nakamura 1990 , nakamura & oohara 1990 , nakamura & oohara 1991 , oohara & nakamura 1992 ) made use of purely newtonian hydrodynamics , while later work with shibata included pn effects ( oohara & nakamura 1992 , shibata , nakamura , & oohara 1992,1993 ) .as with all eulerian , i.e. grid - based , hydrodynamics methods the underlying pdes are discretized onto a coordinate mesh .the evolution of the mass distribution occurs as the material flows through the grid zones , and the equations governing the evolution are finite - differenced analogs of the euler equations .there are many approaches to finite - differencing the euler equations , but most modern formulations are at least second order in time and space , and have methods of realistically modeling shocks .the hydrodynamics method employed by the oohara / nakamura / shibata calculations utilizes leblanc s method for transport , making use of a tensor artificial viscosity .a brief description is given in an appendix to oohara and nakamura ( 1989 ) .the earlier calculations were carried out in the laboratory ( fixed ) frame , while later models utilized a rotating frame . 
in all of the calculationsthe gravitational potential was found by a direct solution of the poisson equation .however , none of the papers discuss the boundary conditions for this equation .finally , the papers have considered both spherical stars and equilibrium configurations for initial data although no comparisons of the two types of initial data were offered .ruffert et al .( 1996,1997,1997 ) performed pn simulations of nsms with the prometheus code implementing the piecewise parabolic method ( ppm ) of colella and woodward ( 1984 ) .the ppm is an extension of godunov s method that solves the riemann problem locally for the flow between zone interfaces , and , accordingly , it is well suited to addressing shocks .the calculation of the gravitational potential was accomplished by means of the direct solution of poisson s equation using zero - padding boundary conditions ( hockney 1988 ) .the initial conditions of the simulations were spherical neutron stars in both tidally locked and rotating configurations .the stars were embedded in an atmosphere of / that covered the entire grid , and an artificial smoothing was performed on the surfaces of the stars to soften the edges .the earlier models considered configurations with the realistic equation of state of lattimer and swesty ( 1991 ) while later studies considered models with a much simpler polytropic eos ( ruffert , rampp , & janka 1997 ) .the most recent eulerian simulations of binary neutron star systems were performed by new and tohline ( 1997 ) .their work focused on the evolution of equilibrium sequences of co - rotating , equal mass pairs of polytropes .the equilibrium sequences were constructed with hachisu s self - consistent field ( scf ) technique ( hachisu 1986a , b ) .the dynamical stability of these equilibrium sequences was tested by evolving them with a 2nd order accurate finite - differenced newtonian hydrodynamics code .the gravitational potential was obtained by a direct solution of poisson s equation accomplished by means of the alternating direction implicit method .no description was given of the boundary conditions that were applied to the poisson equation .the calculation was carried out in a rotating frame that was initially co - rotating with the binary system in order to avoid problems with the advection of the stars across the grid . in a stability test ,a comparison of two white dwarf binary system simulations starting from the same initial conditions , one carried out in the inertial reference frame and the other carried out in the initially co - rotating frame , revealed dramatic differences in the dynamics of the binary system .this difference illustrates the need for very careful studies of purely numerical effects on these types of simulations . 
in the stability tests ,new and tohline found no points of instability for polytropic models with fairly soft equations of state ( ) .they did find , however , an instability for the stiffer case indicating that systems with stiffer equations of state are susceptible to tidal instabilities .it is worthwhile to note that the authors state that they may have misidentified some stable systems as unstable had they performed their simulations in the inertial reference frame , and they therefore stress the importance of very careful studies of numerical effects and careful comparison of different numerical methods .the earliest sph simulations of binary neutron star systems began with the work of rasio and shapiro ( 1992,1994,1995 ) .in sph , a distribution of mass in a particular region of space is represented by discreet particles such that the mass density of the particles is proportional to the specified density of the fluid .local calculations of fluid quantities are accomplished by smoothing over the local distribution of particles .the method is inherently lagrangean .this numerical work accompanied semi - analytical work with lai , with the goal of predicting the onset of instabilities in binary systems ( lai , rasio , & shapiro 1993a,1993b,1993c,1994a,1994b , 1994c ) .the calculation of the gravitational potential was accomplished by mapping the particle distribution onto a density distribution on a mesh where the poisson equation was solved by means of fast fourier transform methods . in these calculations , the authors employed several different polytropic equations of state .the equilibrium initial data for the models were obtained by allowing the spherical stars to relax to a steady state solution in a rotating frame of reference .the work investigated sequences of binary star systems with a range of initial separations , and their construction of equilibrium initial configurations for evolutions was critical in determining if and when a dynamical instability forced the merger .the group reported the presence of a dynamical instability at separations that increase with an increase of the polytropic exponent .the drexel group performed newtonian sph simulations of nonaxisymmetric collisions of equal mass neutron stars ( centrella & mcmillan 1993 ) .this effort was followed by calculations of newtonian nsms by zhuge et al .( 1994,1996 ) they employed the treesph implementation of sph developed by hernquist and katz ( 1989 ) in which the gravitational forces are calculated via the hierarchical tree method of barnes and hut ( 1986 ) .the calculations employed a polytropic eos with and .these calculations were carried out in the laboratory frame and the initial data consisted of spherical stars in the non - rotating case or rotating stars that were produced using a self - consistent field method .the work was aimed at studying the gravitational radiation emission from nsms and addressed effects due to the equation of state , spins , and mass ratio of the stars on the gravitational wave energy spectrum .the coalescence in these calculations was driven by a frictional force term added to the hydrodynamic equations that models the effects of gravitational radiation loss .davies , benz , piran , and thielemann ( 1994 ) performed sph simulations of nsms with a focus on the nuclear astrophysical and thermodynamic effects of coalescence .the sph code used for these calculations was described in earlier work on stellar collisions ( benz & hills 1987 ) , and it makes use of a tree algorithm for 
calculating gravitational forces .the calculations were carried out in the inertial frame .the initial data for the neutron stars was modeled as equal mass polytropes , but a more realistic eos was employed for the dynamical calculation .the driving force behind the coalescence was a frictional force model of gravitational radiation loss similar to that of the drexel group .the rates of energy and angular momentum loss were determined by applying the quadrupole approximation to the equivalent point mass system , and the resulting acceleration of each sph particle was determined by expressions derived from these rates .subsequent calculations by rosswog et al .( 1998 ) have focused on _r_-process nucleosynthesis and mass ejection .more recently members of this group have developed a pn extension of the sph algorithm ( ayal et al .1999 ) , which they have applied to nsms to study the dynamics and gravitational wave emission from the merger . in the remainder of this paper we will focus on comparisons of several eulerian numerical methods for modeling newtonian binary neutron star systems as well as comparisons of the effects of spherical versus equilibrium initial data . additionally , we will examine the stability of both the spherical and equilibrium initial data .the subject of gravitational wave signals from nsms will not be considered here , but instead will be the subject of a subsequent paper . in [ sec : hydro ] we explain our numerical methods for evolving the equations of hydrodynamics and solving the poisson equation . in [ sec : eqd ] we delineate our method for obtaining equilibrium data with self - consistent boundary conditions . in [ sec : self ] we compare calculations carried out in both rotating and inertial frames with several different schemes for coupling gravity to matter via the gas momentum equations . in [ sec : stab ] we compare models with both equilibrium and non - equilibrium initial data . in [ sec : conc ] we offer conclusions about this work especially regarding its meaning for post - newtonian and fully general relativistic models of neutron star mergers .as we have mentioned in the previous section , the accuracy of the numerical algorithms is of paramount importance if one is to obtain accurate hydrodynamic models for mergers .previous work on such models have utilized a variety of hydrodynamic schemes in either a fixed ( inertial ) or rotating frame of reference .the latter has been claimed to be more accurate by virtue of is obviation of the difficulties of advection , but no systematic comparison of the two has yet been published . in this sectionwe describe two numerical hydrodynamics schemes that we have employed to produce such a study .we have not attempted an exhaustive study in which we compare the qualitative results of each hydrodynamic scheme that has been employed to date .such studies have been conducted for most of these schemes on a number of problems involving shocks . however , the performance of the hydrodynamic algorithm on shocks is not the only metric by which one needs to measure the quality of the hydrodynamic algorithm .for example , for simulations in which the linear momentum equation has been solved one should examine how well angular momentum is conserved . 
in the long timescale evolutions needed for multiple orbit simulations of orbiting stars, the spurious addition or loss of angular momentum in the calculation could artificially enhance or delay inspiral during the mergers. similar issues apply to linear momentum in simulations where the gas angular momentum equations are solved. in the same vein, if the gas momentum equation is solved then one should monitor how well the total energy is conserved; or, if the total energy equation is solved, one should monitor how well the gas energy equation is satisfied. the 3-d numerical hydrodynamics scheme we describe in this section is similar to the zeus scheme of stone & norman ( 1992 ). however, we have made some fundamental changes to the order of operations in order to improve the numerical accuracy of the scheme on self-gravitating problems. the flow of matter in the neutron stars can be taken to be inviscid. under these circumstances the newtonian description of the matter evolution is given by the continuity equation together with the euler equations ( mihalas & mihalas 1984 , bowers & wilson 1991 ) of compressible inviscid hydrodynamics. in an inertial frame of reference the equations are ( [ eq : cont ] ), ( [ eq : gase ] ), and ( [ eq : rgasm ] ), where the dependent variables are the mass density, the internal energy density, the fluid velocity, the fluid pressure, and the newtonian gravitational potential. the gravitational potential is described by the poisson equation ( [ eq : poisson ] ) in conjunction with boundary conditions that must be specified. in a frame rotating with angular frequency about the center of the fixed (inertial) grid, the gas momentum equation is modified by the addition of the coriolis and centrifugal forces ( chandrasekhar 1969 ), where is the velocity of the rotating frame relative to the lab frame. the notation indicates the component of the vector. in the limit of we recover the inertial frame momentum equation. the continuity, gas energy, and poisson equations are unchanged from the inertial frame case. for the remainder of this paper we will consider the angular frequency vector to be co-aligned with the z axis, and we assume that the z axis passes through the center of the grid at coordinates. under this condition the coriolis and centrifugal force terms take a simple form in cartesian coordinates, where and are the coordinates of the z-axis. the set of hydrodynamic equations must be closed by specifying an equation of state expressing the pressure as a function of local thermodynamic quantities. a standard choice for building the initial neutron star models is the polytropic equation of state, which has the form given here, where is the polytropic exponent, which is related to the polytropic index, , by the usual relationship. this particular type of eos is advantageous in that the gas energy equation becomes linear in, rendering the solution trivial.
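for concreteness, the coriolis and centrifugal source terms introduced above take the following form for a rotation axis aligned with z and passing through (xc, yc). this is a generic sketch of those terms, not an excerpt of the v3d code, and the function name is illustrative.

```python
def rotating_frame_accel(vx, vy, x, y, omega, xc=0.0, yc=0.0):
    """Coriolis and centrifugal accelerations in a frame rotating with angular
    frequency omega about a z-axis through (xc, yc):

        a_coriolis    = -2 * Omega x v        -> ( 2*omega*vy, -2*omega*vx, 0 )
        a_centrifugal = -Omega x (Omega x r)  -> omega**2 * (x - xc, y - yc, 0)

    Works elementwise on scalars or numpy arrays of velocities/coordinates."""
    ax = 2.0 * omega * vy + omega**2 * (x - xc)
    ay = -2.0 * omega * vx + omega**2 * (y - yc)
    return ax, ay

# example: a fluid element at rest in the rotating frame feels only the
# outward centrifugal acceleration
print(rotating_frame_accel(vx=0.0, vy=0.0, x=1.0, y=0.0, omega=0.5))
```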
in isentropic situationsthis eos allows the pressure to be written purely as a function of density in the form where is the pressure , and is the polytropic constant .this form of the eos is used for the construction of the initial models .much of the numerical scheme we employ for the solution of the euler equations is derived from the zeus-2d hydrodynamics scheme invented by stone & norman ( 1992 ) ( hereafter sn ) .in particular the finite - differencing stencils are identical to those of sn , with the exception of the coriolis and centrifugal forces , which are not included in the zeus-2d scheme .however , the method we employ differs in one significant way : the order of solution of the various terms in the euler equations differs from that of the zeus-2d algorithm . as we will show in a subsequent section of this paper ,the order of the solution of these equations is of fundamental importance to the accuracy of the algorithm in the case of self - gravitating hydrodynamics .a final simplification for our algorithm , which we shall henceforth refer to as the v3d algorithm , employs only cartesian coordinates .this simplification significantly increases the computational speed of the v3d code .the finite - differencing algorithm we employ relies on a staggered grid in which the intensive variables , and are defined at cell centers , while the vector variables such as the velocity components are considered to be defined at their respective cell edges .the gravitational potential is also defined at the cell center .the centering of the variables on the grid is depicted in a 2-d plane of the 3-d grid in figure [ fig : grid ] . in our finite - difference notationwe employ superscripts to denote the time at which the variables are defined .the timestep is taken to be and the numerical algorithm advances the solution of the pdes from time , , to the new time , .the explicit finite - difference algorithm employed by sn for the solution of the euler equations decomposes the time integration into two steps by employing operator splitting among the various terms of the equations . in one stepthe density , internal energy density , and velocities are updated by integrating the advective terms . in the nomenclature of snwe refer to this as the _ transport _ step .the remaining terms , i.e. the terms on the right hand sides of equations ( [ eq : cont ] ) , ( [ eq : gase ] ) , and ( [ eq : rgasm ] ) , are integrated forward in time .following sn we refer to this step as the _ source _ step .an additional consideration involves the solution of the poisson equation and how it relates to the solution of the euler equations . in the zeus-2d algorithmthe order of solution of the euler equations is described in the flow chart of figure [ fig : flow]a .in contrast our algorithm , v3d , is described in figure [ fig : flow]b . in the transportstep the following equations are solved : the advection integration is carried out using norman s consistent advection scheme ( norman 1980 ) , which ties the advected internal energy density and advected velocity to the mass flux as described in sn .this concept of tying the energy and momentum fluxes to the mass flux has been shown to posses superior angular momentum conservation properties ( norman 1980 ) .the actual flux limiter employed is the van leer monotonic flux limiter ( van leer 1977 ) that is spatially second order . 
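the transport step described above can be illustrated with a 1-d, periodic, van leer limited upwind update for the density. this is a generic sketch of that class of scheme, not the actual v3d implementation (which works in 3-d and ties the energy and momentum fluxes to the mass flux through consistent advection); all names are illustrative.

```python
import numpy as np

def vanleer_slope(rho):
    """van Leer monotonised slope per cell (zero at local extrema)."""
    d_l = rho - np.roll(rho, 1)        # rho_i - rho_{i-1}
    d_r = np.roll(rho, -1) - rho       # rho_{i+1} - rho_i
    prod = d_l * d_r
    with np.errstate(divide="ignore", invalid="ignore"):
        slope = np.where(prod > 0.0, 2.0 * prod / (d_l + d_r), 0.0)
    return slope

def advect_density(rho, v_face, dt, dx):
    """One second-order, van Leer limited, upwind transport sub-step for the
    density on a 1-D periodic grid; v_face[i] is the velocity at the i+1/2
    interface of the staggered mesh."""
    slope = vanleer_slope(rho)
    rho_r, slope_r = np.roll(rho, -1), np.roll(slope, -1)
    cfl = v_face * dt / dx
    # upwinded, time-centred interface density
    rho_face = np.where(v_face >= 0.0,
                        rho + 0.5 * (1.0 - cfl) * slope,
                        rho_r - 0.5 * (1.0 + cfl) * slope_r)
    flux = rho_face * v_face           # mass flux through each i+1/2 face
    return rho - dt / dx * (flux - np.roll(flux, 1))
```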
in the source step the following equations are updated: the scalar viscous stress is added to the equations in order to allow for viscous dissipation by shocks in the fluid. we employ the standard von neumann-richtmyer prescription for the viscous stress, as described in sn, with a length parameter of. we monitor the viscous dissipation arising from this stress and have found that the total viscous energy generation is negligible for two merging polytropes. the coupling to gravity enters through the gradient of the newtonian gravitational potential in equation ( [ eq : rgasm_s ] ). we wish to note that the continuity equation is not updated during the source step as it possesses no source term. the lack of a source term for the continuity equation means that the density at the new time is known after the transport step is complete. as we will discuss in a subsequent section of this paper, this point is crucial to our preferred method of solution for these equations. the explicit finite-differencing of the euler equations is briefly discussed in appendix a. for the remainder of this section we concentrate on the order of solution of the transport and source steps. in the method of sn the source terms ( equations [ eq : gase_s ] and [ eq : rgasm_s ] ) are integrated forward in time to arrive at an intermediate solution for the new internal energy density and the velocity. the intermediate energy density and velocity are used as initial values for the transport equations ( [ eq : cont_t ] )-( [ eq : rgasm_t ] ), which are then integrated forward in time to find the values of the density, energy density, and velocity at. sn have shown by means of convergence testing that this algorithm is spatially second order accurate. there is no compelling reason to suggest that the order of source and transport updates as presented by sn is preferred. one can easily reverse the order of updates so that the results of the transport step are utilized in the source update. we will henceforth refer to this order of updates as the v3d algorithm, while the opposite order will be referred to as the zeus algorithm. by comparing the two algorithms on a number of standard hydrodynamic test problems we have numerically verified that the reversal of these operations has no significant effect on the overall quality of solutions when self-gravity is not present. however, as we will show in a later section, the v3d algorithm offers significant advantages when modeling orbiting binary stars. an example of the comparable performance of the zeus and v3d algorithms on a standard test problem, a sod-like ( sod 1978 ) shock tube, is shown in figure [ fig : sod ]. the shock tube problem pictured employs a polytropic equation of state. the grid is set up with 100 spatial zones over the range of cm, with the initial contact interface located at. this initial configuration is that of a riemann problem, which results in a shock and a contact discontinuity propagating to the right and a rarefaction propagating to the left. because the exact solution is known ( chorin & marsden 1993 ) we can easily evaluate the numerical results from both algorithms.
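the von neumann-richtmyer viscous stress used in the source step described above can be sketched as follows on a 1-d staggered grid; its gradient enters the momentum source term and the product of the stress with the velocity divergence enters the gas energy source term. the coefficient c_q below is illustrative and not the length parameter used in the paper.

```python
import numpy as np

def vnr_viscous_stress(rho, v_face, c_q=2.0):
    """von Neumann-Richtmyer scalar viscous stress on a 1-D staggered grid
    (density at cell centres, velocity at cell faces):

        q_i = c_q**2 * rho_i * (dv_i)**2   if dv_i = v_{i+1/2} - v_{i-1/2} < 0
        q_i = 0                            otherwise (no stress in expansion)

    c_q plays the role of the length parameter in units of the zone width;
    the value used in the original code is not reproduced here."""
    dv = v_face - np.roll(v_face, 1)       # velocity jump across each cell
    return np.where(dv < 0.0, c_q**2 * rho * dv**2, 0.0)
```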
overall the character of the numerical solution is comparable between the two cases .the values of the variables in both the contact discontinuity and the shock are slightly different as would be expected from different algorithms , but both methods resolve the shock and the contact discontinuity with the same number of zones .the rarefaction is represented nearly identically by both methods .the figure compares the two orders of update .one can visually see that little difference exists between the two solutions .we have verified this on a number of other non - self - gravitating test problems .in contrast , for the case of self - gravitating hydrodynamics , we do find that the v3d algorithm is preferred as we will discuss in section [ sec : self ] . in order to describe self - gravitating phenomena , the gravitational field of the matter distributionmust be found by solving the poisson problem described by equation ( [ eq : poisson ] ) .the poisson equation can be readily solved by a variety of techniques well suited to elliptic equations . in the simulations described in this paperwe have employed both w - cycle multigrid and fast - fourier - transform ( fft ) methods ( press et al .both methods have been extensively tested on matter configurations where the solution is known .because the poisson equation is linear one can easily generate test problems with known answers , but which also posses complex field geometries .for example , by placing polytropes at random points within the computational domain we can create a complex gravitational field configuration .since the gravitational potential for a polytrope is analytically known , the potential for the entire configuration at any point in the computational domain is readily found as a superposition of individual polytropic solutions . using this methodwe have found that both methods give the correct answers to approximately for the grid resolutions employed in this work .the numerical solution of equation ( [ eq : poisson ] ) requires the specification of boundary values along the edge of the computational domain . for the problem of merging neutron starsthese are _ a priori _ unknown .the problem of determining the appropriate boundary conditions has been approached differently by a number of different groups .ruffert et al .have employed zero - padding boundary conditions in conjunction with their fft solution method .the zero padding method has been shown by james ( 1977 ) to be algebraically equivalent to a direct summation by convolution of image charges ( defined on the edge of the grid ) over the green function for the poisson equation .oohara and nakamura ( 1990 ) , rasio and shapiro ( 1992,1994,1995 ) , and new and tohline ( 1997 ) have not specified how boundary conditions on the potential were obtained for their hydrodynamic solutions . 
a number of groups ( davies et al .1994 ; zhuge et al .1994 ; zhuge et al .1996 ) have carried out smoothed particle hydrodynamics ( sph ) simulations in which the field was computed by a tree - code summation obviating the need for the specification of boundary values .the accurate specification of self - consistent boundary conditions for the poisson equation is challenging .the expansion of the potential in terms of multi - poles may require the expansion to be carried out to very high order if one is to obtain accurate values for the potential .this is especially true given that the initial configuration for a neutron star merger simulation consists of two widely separated fluid bodies , which has a very large quadrupole moment .the zero - padding method necessitates a fairly large memory cost .this memory cost can largely be eliminated by use of the james algorithm for the solution of the poisson problem for isolated systems ( james 1977 ) .in fact we employ the james algorithm to obtain equilibrium initial data described in the next section .however , we have found that the james algorithm does not scale well to large numbers of processors on shared - memory parallel computers . in order to obtain an accurate algorithm for the boundary conditions on the potential we have turned to direct integration over the green function for poisson s equation , i.e. because of the large number of grid zones present in the problem , it is computationally intractable to compute this sum directly .if our computational domain is discretized into points in each of the three spatial dimensions then the summation is over zones in order to evaluate the potential at each of the points on the edge of the domain .this implies the total algorithm is order .since we typically employ for our simulations , this renders the direct summation over the entire grid computationally intractable for use in a hydrodynamic simulation .this dilemma is further exacerbated by the need to calculate an inverse square root in order to evaluate the distance from the boundary point to each zone .however , for the problems we are considering the mass is concentrated in only a relatively small region of the computational domain .if we restrict the summation to only those zones in which a significant amount of mass is present , the summation becomes more tractable .accordingly , we have adopted the following algorithm for obtaining the boundary values .we evaluate the amount of mass in eight - zone cubic ( ) `` blocks '' of the grid .if the mass of the zone is greater than a threshold value the total mass of the block and the blocks center of mass coordinates are stored in a list .this operation is order but it is only required once per timestep . 
once the entire mesh has been scanned we obtain a complete list of all the blocks with a significant mass. the size of the list depends on the mass threshold employed. if the mass threshold is chosen too low, the list will become very large and the summation computationally intractable. if the cutoff is chosen too large, the list will not include most of the mass in the domain. we have experimentally found that a value of produces a list which fully represents the mass in the domain. once the list of significant mass blocks has been produced, the boundary values of the potential can be calculated by direct summation, where is the mass of the block in the list, is the distance from the point on the edge of the grid to the center of mass coordinates of the block, and is the length of the list. this operation is order. however, , which renders the summation tractable. furthermore, this algorithm is readily parallelizable on a shared memory parallel computer, thus allowing for a rapid solution. we typically find that calculation of the boundary conditions never exceeds 20% of the overall computational effort. finally, we wish to emphasize that this algorithm will not work efficiently in cases where the mass is more evenly distributed over the entire mesh. the accuracy of this algorithm has been tested by two methods. first, the algorithm was applied to the test problems we mentioned earlier in this section where the analytic answer was known. secondly, the summation algorithm was also tested in more general situations by comparing the boundary values obtained by this method to those obtained by brute force direct summation. in all cases the boundary values agreed to better than; in most cases the agreement was better than. additionally, we track the total mass in the list so that it can be compared to the total mass on the mesh. any significant difference between the two masses will indicate a problem with the summation. in order to accurately model two neutron stars in close orbits with one another, it is important to employ initial conditions that precisely reflect the true configurations of the two fluid bodies. in general, for close binary systems, these equilibrium configurations will not consist of two spherical stars. instead the configuration will contain tidally distorted fluid bodies that are only approximately spherical. in numerical simulations of binary star systems the stability of the orbits can be quite sensitive to the details of the initial configuration. in a subsequent section we compare dynamical models which have employed equilibrium initial conditions with models that have utilized spherical stars. the construction of initial data for these systems is a non-trivial task. in practice each neutron star in a binary system will be non-synchronously rotating around its own axis. several calculations ( bildsten & cutler 1992 ; kochanek 1992 ) have shown that viscous dissipation at the causal limit is insufficient to tidally lock binary neutron star systems during their lifetime. accordingly, the most realistic configurations that one could model would be non-tidally locked. however, there is tremendous difficulty in obtaining equilibrium initial data for such cases. finding initial conditions that correspond to the non-synchronous case would require the solution of the compressible darwin-riemann problem, which is well outside the scope of this paper.
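returning to the boundary value algorithm described above, a compact numpy sketch of its two stages is given below: building the list of significant-mass blocks and summing over it to obtain the boundary potential. the threshold value, the assumption of even grid dimensions, and all names are illustrative, not the values or code used in the paper.

```python
import numpy as np

G = 6.674e-8   # Newton's constant in cgs units, for illustration

def mass_blocks(rho, dx, threshold):
    """Coarsen the density onto 2x2x2 blocks and keep only blocks whose mass
    exceeds `threshold`; returns (masses, centre-of-mass coordinates).
    Grid dimensions are assumed even for simplicity."""
    nx, ny, nz = rho.shape
    cell_vol = dx**3
    masses, coms = [], []
    for i in range(0, nx, 2):
        for j in range(0, ny, 2):
            for k in range(0, nz, 2):
                block = rho[i:i+2, j:j+2, k:k+2]
                m = block.sum() * cell_vol
                if m > threshold:
                    ii, jj, kk = np.mgrid[i:i+2, j:j+2, k:k+2]
                    w = block * cell_vol / m            # mass weights
                    coms.append((((ii + 0.5) * dx * w).sum(),
                                 ((jj + 0.5) * dx * w).sum(),
                                 ((kk + 0.5) * dx * w).sum()))
                    masses.append(m)
    return np.array(masses), np.array(coms)

def boundary_potential(points, masses, coms):
    """Newtonian potential at the listed boundary points by direct summation
    over the significant-mass blocks: phi = -G * sum_j m_j / |x - r_j|."""
    phi = np.empty(len(points))
    for n, p in enumerate(points):
        r = np.linalg.norm(coms - p, axis=1)
        phi[n] = -G * np.sum(masses / r)
    return phi
```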
since the target of this paper is a study of the numerical methods and initial conditions needed for precise simulations of binary neutron star mergers, we restrict ourselves to the tidally locked case. realistic binary systems will also contain unequal mass components. however, in this paper we consider only equal mass systems. our numerical algorithm for obtaining equilibrium initial data is easily extended to the non-equal mass case, which will be considered in a future paper. a number of other research groups ( oohara & nakamura 1990 ; new & tohline 1997 ) have developed methods to obtain equilibrium data for the case of synchronous binary neutron star systems. in both cases the equilibrium models must simultaneously satisfy both the bernoulli and poisson equations. in the case of oohara and nakamura, they have employed a method that is similar to ours in that it iteratively solves the bernoulli and poisson equations on a cartesian grid. however, we have found some problems with this method for obtaining the initial conditions that we seek. new & tohline have found initial conditions using the self-consistent field technique of hachisu ( 1986a , 1986b ), which iteratively solves the bernoulli equation and the integral form of the poisson equation on a spherical polar grid. while this latter method avoids the problems we have found with the oohara method, its use for our case would involve remapping of the data from the polar grid to the cartesian grid we employ for our dynamical simulations. this remapping would introduce small errors that would render the initial conditions on the cartesian grid slightly out of equilibrium. in turn, the deviation from equilibrium can cause spurious hydrodynamic motions away from the initial data once the evolution begins. in order to avoid this we have combined techniques from both the oohara et al. and new & tohline methods. our objective is to develop equilibrium data on the same grid that the hydrodynamic simulation will employ. we first need to identify the equations that describe the equilibrium configuration. these equations result from taking the hydrostatic limit of the euler equations of hydrodynamics together with the poisson equation. in the hydrostatic limit the gas momentum equation ( [ eq : rgasm ] ) collapses to ( [ eq : rhyd1 ] ). if we make the assumption that the equation of state is of the isentropic form given by equation ( [ eq : polyt ] ), then equation ( [ eq : rhyd1 ] ) can be integrated by parts to find the bernoulli equation ( [ eq : bern1 ] ), where c is a constant and where we have assumed that the rotation is about the z-axis. this equation must be satisfied in the interior of the fluid bodies. simultaneously, the equilibrium data must also satisfy the poisson equation ( [ eq : poisson2 ] ). however, two fundamental difficulties occur in the solution of these two equations on cartesian grids containing self-gravitating fluid bodies. first, we do not _ a priori _ know where the boundaries of the fluid bodies lie, and hence we do not know where the bernoulli condition should apply. second, we do not _ a priori _ know the boundary conditions on that must apply to the poisson equation. the boundary conditions on must be determined in a self-consistent fashion using the green function corresponding to the poisson equation. this latter problem is the more difficult of the two to solve.
while oohara et al. have employed a direct solution of the poisson equation for equilibrium data, they have made no mention of what boundary conditions they have employed on equation ( [ eq : poisson2 ] ). we have found that the configuration resulting from the iterative solution of equations ( [ eq : bern1 ] ) and ( [ eq : poisson2 ] ) is quite sensitive to the use of non-self-consistent boundary conditions and we strongly recommend against employing such boundary conditions. in order to minimize the problems with deciding where the boundaries of the fluid bodies are, we have adopted an idea from the scf technique of hachisu et al. we consider equilibrium binary systems in two different topological configurations as depicted in figure [ fig : binary ]. in the first case we consider non-contact binary systems. in the second case we consider contact binary systems. in the first case, during our iterative solution of the combined bernoulli and poisson equations we specify the extremal inner and outer points of the star, as depicted in figure [ fig : binary]a. we define the orientation of our grid so that the x-axis passes through the centers of mass of the two stars. the z-axis passes through the barycenter of the system, thus defining the origin of the grid. by specifying the inner and outer points we seek equilibrium solutions with a certain aspect ratio. by adjusting the locations of the extremal points we can find equilibrium configurations with varying separations between the components and / or their centers of mass. in the case of contact binaries we specify the extremal outer point of the contact system and the extremal outer point of the neck connecting the high density portions of the two fluid bodies. as with the detached case, by varying these two points we can find a sequence of equilibrium configurations with varying separations between their centers of mass. in the contact binary case we define the center of mass of each star by considering only the mass contained within each half of the computational domain, as defined by a plane perpendicular to the line connecting the two highest density zones of the grid. once we have identified the configuration of the system we can then determine where the bernoulli equation can be applied during each iterative step. assuming, for the moment, that we know, , and, and that we possess some iterative estimate of, we can then solve the bernoulli equation for a new estimate of the density, equation ( [ eq : bern2 ] ). obviously, this equation only makes sense where the factor contained within the square brackets of equation ( [ eq : bern2 ] ) is positive. we use this as a criterion to decide where to apply the bernoulli equation. if the factor is positive the density is updated to the value determined by equation ( [ eq : bern2 ] ); otherwise the grid zone is considered to be vacuum and the density is set equal to zero. the determination of self-consistent boundary conditions for the poisson equation is of paramount importance. in order to clarify what we mean by the term `` self-consistent '' we first state the problem. we assume that the computational domain, , contains the self-gravitating fluid bodies, which have compact support within the interior of the domain, i.e. the fluid density vanishes on the boundary of the domain. more simply stated, we assume that the fluid bodies are contained inside the domain.
this type of self - gravitating system has been termed an `` isolated '' system ( james 1977 ) . under these circumstanceswe require that the boundary conditions on satisfy equation ( [ eq : poisson2 ] ) .since the potential on the boundaries depends on the density distribution we can not _ a priori _ self - consistently know the boundary values prior to solving the problem .this problem is not only relevant for initial data but is also relevant for the solution of the poisson problem during the course of a self - gravitating hydrodynamic simulation .various groups modeling equilibrium binary configurations have attempted to avoid this problem .the equilibrium sequence work of nt has utilized the scf method of hachisu , which avoids this problem by employing a multi - pole expansion of the potential in order to estimate the boundary conditions . in the case of isolated systems , james ( 1977 )has shown that the boundary conditions can be obtained exactly by use of fft techniques .we accordingly employ this method for use in our initial data algorithm .furthermore , james has shown that this method is algebraicly equivalent to the `` zero - padding '' technique employed by ruffert et al .the advantage of the james algorithm over the zero - padding technique is that it requires substantially less memory overhead which is a significant advantage in a 3-d simulation .we could also employ this algorithm to compute the self - consistent potential during the course of our hydrodynamic simulations .however , in the hydrodynamic simulations we have found it advantageous to employ equation ( [ eq : green1 ] ) directly to get the boundary conditions for followed by a straightforward poisson solve using either multigrid or fft techniques ( press et al .we have found that this method is more amenable to implementation on the shared memory parallel computing architectures that we employ for our simulations .the complete algorithm for finding the initial data is as follows : 1 .fix inner and outer points of stars ( in the detached binary ) case and outer point and neck width ( in contact binary case ) .denote the distance from the z - axis to these points as and .2 . make initial guess at the density distribution throughout the computational domain .also guess an initial value of .3 . using density distribution solve for potential using the james algorithm to solve the poisson equation with self - consistent boundary conditions .4 . using the bernoulli equationevaluate by 5 .evaluate the bernoulli constant , , at using equation ( [ eq : bern1 ] ) 6 .update by evaluating the bernoulli equation at some point which lies on the line through the centers of the stars 7 .calculate new value of density for every zone on the grid using the following algorithm : where \right).\ ] ] 8 .if the maximum relative density change in any zone of the grid is less than then consider the solution converged and stop. otherwise go to step 3 .one major difference between this algorithm and those utilized by others is step 6 , the update of . for a particular equation of state , e.g. , there may not be a solution for an equilibrium configuration with a given inner and outer point .one can easily see this for the case of an isolated polytrope where the radius is determined by ( shapiro & teukolsky 1983 ) ^{1/2}.\ ] ] in this case only a specific value of will allow the star to `` fit '' into the specified number of grid points between and . 
if the value of is not allowed to change the iterative procedure described does not converge . in practicethe change in is small . when constructing an equilibrium sequence with fixed values of , we adjust the grid size slightly to get the desired value of .we have found that this usually only requires changes of a few percent in the grid spacing in order to find an equilibrium solution for a specified value of .the total mass of the converged equilibrium system is determined by the initial guess of the density distribution in step 2 of the algorithm . by multiplying the initial guess of the density distribution by some factorwe can converge to equilibrium systems of more or less mass .for the case of both the bernoulli and poisson equations are linear in the variables and . in this case the total mass of the converged solution is affected only by the initial guess at the distribution while the value of is determined only by the grid spacing .this renders the procedure of producing a sequence of equilibrium solutions for a given polytropic constant , , and total mass , , relatively easy . in the casewhere the bernoulli equation becomes non - linear and changes in or the initial density guess affect both the resulting value of and . in this casebuilding an equilibrium sequence becomes much more difficult and time consuming .for this reason we have constructed a equilibrium sequence only at resolution .we have constructed a few specific equilibrium configurations at resolution for use in hydrodynamic studies of stability . using the method that we have described above we have constructed equilibrium sequences of data for ( ) and ( ) polytropes . for the case ,the sequences were constructed with a total mass of and a value of erg g .an isolated spherical polytrope with these parameters would have a radius of approximately km and a central density that is roughly 10 times the nuclear saturation density of g/ .such a configuration resembles a realistic neutron star .the sequence is shown in figure [ fig : eq_seq_2 ] while the case is shown in figure [ fig : eq_seq_3 ] .all separations shown are the center of mass separation which has been normalized to the spherical radius of a single undisturbed polytrope .both the total energy , , and the angular momentum , , are plotted for each configuration . in the sequencethe models with a separation of less than about are contact binaries where the two stars are joined by a `` neck '' of matter passing through the barycenter of the system .those systems with separations greater than are detached . for the casethe bifurcation point is at a separation of approximately . in order to obtain this numbermore precisely we would have to construct models at substantially higher resolution .because of the difficulty of constructing a large number of configurations with a specified value of and for the non - linear case , we have chosen not to do so .our purpose was to construct initial data for hydrodynamic simulations using the same grid that we would employ for the simulation . for the models shown in figures [ fig : eq_seq_2 ] and [ fig : eq_seq_3 ] the grid resolution was approximately km for the models and km for the models . from figure[ fig : eq_seq_2 ] it is easily observed , by comparing the and models , that the models do not have adequate spatial resolution at the wider separations . nevertheless both the and models show minima in both the total energy , , and angular momenta , , at approximately . 
the slight variation in the data near the bifurcation point between detached and contact binaries is due to the finite resolution of the grid .the contact binaries in this case may have a neck consisting of only one or two zones , a situation which is likely to cause some fluctuation in both the energy and the angular momentum due to the discrete nature of the neck .it is interesting to compare these results with the semi - analytic work of lai et al .( 1993c ) ( hereafter lrs ) .lrs constructed an equilibrium sequence of binary , compressible , darwin ellipsoids as an approximation to the equilibrium configurations of two synchronously orbiting polytropes . by identifying a turning point in the energy versus separation curveslrs found a secular instability for polytropes at a separation of .furthermore , lrs also found that these turning - points occured simultaneously in both the total energy , , and the angular momentum , .there have also been a number of efforts to construct such equilibrium sequences numerically .in addition to their semi - analytic work lrs also found equilibrium sequences obtained using a relaxation scheme which employed smooth particle hydrodynamics methods yielded a turning point in the energy and angular momenta for an equilibrium sequence at a separation of .in contrast new and tohline ( 1997 ) , using the scf technique , found a turning point at .our result , which can be readily seen from figure [ fig : eq_seq_2 ] , yields an approximate turning point of .this result is somewhat closer to the compressible darwin ellipsoid value and much further from the recently obtained value of new & tohline . in agreement with both lrs and nt, we find the turning point at a point where the equilibrium systems are still detached .nevertheless , the turning point is quite close to the point at which attached systems would form .the occurrence of a turning point on the detached binary branch of the curve seems to indicate that a binary system slowly spiraling inward by some energy and angular momentum loss mechanism will encounter an instability without ever becoming a contact binary .the nature of this instability and its implication for the dynamical evolution of binary systems will be discussed in a subsequent section . in the caseour results indicate a turning point in the equilibrium sequence . in this casethe value of the polytropic constant was erg g and the total mass was .note that in this case the turning point we seem to find is at approximately in comparison with the lrs semi - analytic value of .in contrast nt find a value of .unfortunately , hachisu ( 1986b ) does not present numerical values for the separation at this turning point so we are unable to compare to this work .in contrast , although rasio & shapiro ( 1994 ) find a find an instability for the case of based on hydrodynamic simulations , the equilibrium sequence they obtain on the basis of relaxation methods using their sph code yields a turning point at .this result can be contrasted with the results of nt and our own results which show a turning point in both energy and angular momentum at substantially larger separations . 
however , the semi - analytic results presented in both lrs and in lai et al .( 1994b ) show a turning point at .we will discuss the implications of this turning point for hydrodynamic evolution of a binary system in a subsequent section of this paper .our numerical hydrodynamics method requires the density to be non - zero everywhere on the computational grid .therefore , we include a low density ( 1 g/ ) `` atmosphere '' as a background in regions where stellar matter is not present .we have varied the density between ( 1 - 10/ ) in our hydrodynamic simulations and have found that this has no discernible effect on the dynamics of the simulations .other models ( ruffert et al . 1995 , 1996 ) of neutron star mergers that have been carried out with eulerian codes have had to employ much higher densities ( g/ ) for the surrounding material . however , adding matter in regions outside the stars presents two difficulties for hydrodynamic simulations .first , such matter will not in general be in hydrodynamic equilibrium if it has the same entropy as the matter in the stars .thus at the beginning of the simulation it will immediately infall towards the stars and form an accretion shock at the surface of the stars .this accretion shock , while not physically troublesome because of the low density of the material in the atmosphere , will have the undesirable numerical effect of driving the timestep determined by the courant stability condition to a very small timestep because of the high infall velocities . in order to counter this effectwe make the atmosphere hot , i.e. we set the energy per baryon in the atmospheric material to approximately mev .this has the effect of preventing the atmosphere from falling down onto the neutron star surfaces .furthermore we decrease this energy slightly with distance from the stars so as to achieve a configuration that is slightly more hydrostatically stable .a second problem originates if the atmosphere is put in place with a non - zero velocity with respect to the stars .if the matter is placed on the grid with zero velocities in the lab frame , the motion of the stars quickly sweeps up the matter into a bow shock on the front sides of the orbiting stars . in a circumstance similar to the accretion shock mentioned in the previous paragraph, the bow shock has the numerical effect of driving the courant timestep to zero . in order to avoid this problem ,calculations that are carried out in the lab frame have an atmosphere with an initial velocity such that the material is rotating about the center of the grid at the same speed with which the stars are revolving . near the edge of the grid the velocity of the atmospheric materialis slowly tapered to zero so as to avoid a shock forming at the edge of the grid and to keep the velocity below the speed of light . 
in the case of rotating frame simulations the velocity of the atmospheric materialis set equal to zero in the rotating frame .these two steps obviate the problem of having the matter accrete onto the stars .we wish to note that this method requires no additional machinations to treat the material outside the stars ; the evolution of the material is described by the numerical solution of the euler equations .one of the more difficult aspects of self - gravitating hydrodynamics is the need to self - consistently solve both the partial differential equations describing the dynamics of the fluid and the equation(s ) that describe the gravitational field arising from that matter .one of the origins of this difficulty in the newtonian case is the different mathematical character of the two sets of equations : the euler equations are hyperbolic while the poisson equation is elliptic in nature .while the euler equations can be numerically solved by explicit techniques , the poisson equation requires an implicit solution .since the two sets of equations are coupled by the gravitational acceleration term in the gas momentum equation , one must take care that the numerical methods employed for these coupled equations adequately maintain all the desirable properties of the total system such as angular momentum conservation and total energy conservation . in this sectionwe compare several methods for these calculations that employ various methods for treating the gravitational acceleration term .we wish to emphasize that no numerical scheme that solves the linear gas momentum equations in three dimensions will guarantee the numerical conservation of angular momentum .the converse is also true : if one solves the angular momentum equations in three dimensions the solution will not in general numerically satisfy the linear momentum equations .this discrepancy arises from the fact that the finite - differencing of the underlying partial differential equations reduces them to algebraic equations that must be solved for the new values of the density , internal energy , and velocity .thus the five euler equations are sufficient to algebraicly determine the five variables .the finite - differencing of the angular momentum conservation equations will be different from the linear momentum equations and thus give rise to five additional equations that must be algebraicly satisfied by the same five variables .the problem is algebraicly over - constrained .despite the fact that the linear gas momentum and linear angular momentum equations can be easily shown to be equivalent , i.e. 
that conservation of linear momentum guarantees the conservation of angular momentum and vice versa , there is no such equivalence between the finite - difference analogs to these two vector equations .a similar statement can be made about the gas energy equation and the total energy equation .the issue of importance to simulations of orbiting neutron stars is how badly does numerical conservation break down over the course of a simulation ?that is , how badly conserved are the angular momentum and total energy over the course of a simulation ?we will consider these issues for several different schemes for coupling the poisson and euler equations .we will also show that a superior choice among these schemes emerges from these comparisons .this is vital for quantitatively accurate models of binary neutron star mergers where it is necessary to conserve both angular momentum and energy in order to ensure that orbital decay is physical and not the spurious result of numerical non - conservation .as we have previously mentioned in subsection [ subsec : numerical ] , there are two possible orders of update of the source and transport portions of the euler equations .these two possibilities are illustrated algorithmicly in figure [ fig : flow ] .the zeus algorithm of stone & norman employs the order of update shown on the left of figure [ fig : flow ] while our v3d code employs the method shown on the right .as we have shown in figure [ fig : sod ] there is no substantive difference between these two method in the case where self - gravity is not relevant .however , in the self - gravitating case these two approaches admit different possibilities for calculating the gravitational acceleration in the gas momentum equation . in the case of the zeus algorithm ,the solution of the source step first requires the gravitational potential in order to calculate the gravitational acceleration in the gas momentum equation .hence the need for first solving the poisson problem as described on the left side of figure [ fig : flow ] . since the density at the new time ( ) is not _ a priori _ known at the beginning timestep ( at time ) , the right hand side of the poisson equationcan only be constructed using the density that is known at time .thus the newtonian gravitational potential is known only at time .consequently the gravitational acceleration term which is calculated from the newtonian potential is not time - centered between times and .our code , v3d , performs the advection step before the source step that updates the lagrangian terms ( the terms on the right hand side of the hydrodynamics equations ) .this ordering , advection before the source update , allows the choice of computing the right - hand side of the poisson equation , , with the density at the old time step ( time lagged ) , the new time step ( time advanced ) , or the average of the two densities ( time centered ) .the finite - difference expressions for these choices are : this choice can play a significant role in the dynamics of a simulation .for example , in physical situations where the gravitational acceleration is always increasing in time , the use of the time - lagged centering will always underestimate the gravitational acceleration . over long timescales this consistent underestimate can lead to significant deviations from the true physical behavior of the system . 
in the case of orbiting binary starsthis could lead to non - physical evolution of the orbits .finally , the choice of time - centering for the gravitational acceleration can have a significant impact on the conservation of both angular momentum and total energy . both total energy and angular momentum conservation are vital in achieving good neutron star merger models .a lack of conservation of either of these two quantities could lead to unphysical inspirals and both qualitatively and quantitatively incorrect outcomes of the simulations .therefore , it is necessary for us to examine how well both of these quantities are maintained as various choices are made for the gravitational acceleration coupling method . in order to ascertain how the time centering affects the dynamics of binary orbits we have conducted a simple test .we have placed two spherical , polytropes in a circular orbit with a separation of where km is the radius of the spherical polytrope . at such wide separationsthe tidal distortion of the polytrope is minimal and the spherical approximation is valid .this latter point will be confirmed in a subsequent section of this paper where we compare spherical and equilibrium initial data . in the physical situationdescribed in the previous paragraph , the two stars should remain in perfectly circular orbits with constant angular momentum . for this reason we have carried out six simulations in which we compare the effects of the three time centerings for both the rotating frame and inertial frame cases .the results of these six simulations are shown in figure [ fig : sixpanels ] , which depicts the trajectories of the centers of masses of the stars , and the barycenter of the system , in the orbital plane .note that the trajectories are terminated at the point where the stars merge or where the simulation was stopped ( if a merger did not occur ) .the comparison among the matrix of plots reveals that the choice of centering has a major impact on the evolution of the orbits .the results of the zeus algorithm , which employs a time lagged centering for the gravitational acceleration and is carried out in the inertial frame ( depicted in the bottom right panel ) , show a completely spurious inspiral of the two stars in the first orbit . in contrast , the middle right and the top right panels show inertial frame models with time centered and time advanced gravitational acceleration couplings . while the decay of the orbit is diminished with the time centered and time advanced couplings , the overall evolution of the orbits is still unstable . in a forthcoming paper , ( calder et al .2000 ) we shall show that this is a common feature of hydrodynamics simulations of this problem that employ inertial frames .the decay of the orbits in the inertial frame case is due to the non - conservation of angular momentum .this is shown directly in figure [ fig : jtot_comp ] where the angular momentum evolution for inertial frame models in the time centered and time advanced cases are plotted over the first millisecond .we have carried out additional models for each of these cases with a series of decreasing courant fractions .we define the courant , or cfl , fraction as the ratio of our actual timestep to the maximal possible hydrodynamic timestep as determined by the timestep control for the zeus / v3d algorithm ( see sn for details ) . 
in most simulationswe employ a cfl fraction of .the time - lagged models that we have carried out have revealed an even larger decrease in the angular momentum as a function of time than the time - centered models , which explains the rapid inspiral seen in the bottom right panel of figure [ fig : sixpanels ] .as figure [ fig : jtot_converge ] shows , the lack of conservation of angular momentum is clearly related to the size of the timestep .a perfect algorithm would show no evolution of the angular momentum . additionally , while the time lagged case is much worse than the time advanced case , both show a significant change in angular momentum over the first millisecond of evolution . in the time - lagged casethis loss results in the decreasing orbits seen in figure [ fig : sixpanels ] . in the time - advanced case the orbit outspirals and the system eventually acquires a small drift due to the slight interactions with the boundaries .this drift becomes noticeable after many orbits .the artificial loss of angular momentum in the calculation is due to the inability of the finite - difference scheme to maintain conservation of both angular momentum and linear momentum .while this loss is mitigated through the use of time - advanced gravitational centering it is still sufficient to cause an unphysical inspiral of the system .the use of a rotating frame helps to minimize the effects of angular momentum loss . with a rotating frame it is possible to choose the angular velocity of the frame so that the motion of the stars with respect to the frame is minimized .the advantages of employing a rotating frame are clearly shown in figure [ fig : sixpanels ] . in the rotating frame the advection of the stars across the grid is minimized and the angular momentum is conserved to a much higher degree .nevertheless , the time centering of the gravitational acceleration plays a role in determining the dynamics of the orbits .the best combination of techniques is illustrated in the top left panel which shows the results from a simulation using both a rotating frame and the time - advanced coupling .this particular scheme maintains stable orbits for the two stars for more than seven orbits at which point the simulation was terminated .the simulation has shown no significant change in the orbits of the two stars over the course of the simulation .an examination of the angular momentum evolution for this simulation , in figure [ fig : jtot_tarf ] , shows that the total angular momentum is well conserved .the lines in this figure show the angular momentum contained in matter above various density thresholds .note that nearly half of the angular momentum is contained in the high - density cores of the polytropes .also note that the g/ line shows that that there is initially a slight re - adjustment in the angular momentum distribution as the star relaxes on the grid .nevertheless the total angular momentum is fairly well conserved over the course of the simulation .in contrast the time - lagged inertial - frame case , shown in figure [ fig : jtot_tlff ] , reveals poor angular momentum conservation . this simulation shows a steady decline in the angular momentum at all densities . in particularthe high density core has lost most of the angular momentum .the loss of angular momentum terminates approximately when the two stars have coalesced into a single central object . at this pointthe object is fairly axisymmetric and could almost be though of as having achieved a steady state . 
under these circumstancesthe time centering of the gravitational acceleration is not as critical as it was prior to coalescence and consequently the angular momentum is better conserved at late times .the behavior of the total energy ( figure [ fig : eng_tlff ] ) in the the time - lagged inertial - frame case shows a slight decline which is not nearly so dramatic as the behavior of the angular momentum . again, the decline ceases after coalescence .note that figure [ fig : eng_tlff ] clearly shows the transfer of gravitational potential energy to kinetic energy during the inspiral and coalescence .the total internal energy changes very little throughout the coalescence .the time - advanced rotating frame case ( figure [ fig : eng_tarf ] ) shows a very steady behavior for all of the energies , with no substantial change throughout the length of the simulation .finally , we wish to point out that virtually none of the loss of angular momentum or energy is due to dissipation by the artificial viscosity terms in the gas energy and gas momentum equations .the total dissipation due to these terms is tracked throughout the simulation , and it is many orders of magnitude below the other energy and angular momentum scales involved .this includes the case where the stars have coalesced .we find no significant amount of shock generated dissipation as the stars merge in any of our models .we have assumed that the angular velocity of the rotating frame with respect to the inertial lab frame is a constant .thus in situations where the two stars inspiral due to physical processes , the stars will acquire a non - zero velocity with respect to the rotating frame .in this situation one might suspect that the angular momentum conservation might begin to break down as the stars begin moving with respect to the grid .however , in the next section we shall show that the angular momentum conservation is still well maintained even in the case where the stars inspiral and merge .the significant amount of energy and angular momentum non - conservation in the fixed frame calculations clearly establish that there are significant problems associated with their use in modeling binary neutron stars .we are currently surveying other hydrodynamic methods to see if the same difficulties are present in these other schemes . in order to avoid the problems associated with the inertial cartesian frames we have chosen to employ the rotating frame , time - advanced gravitational acceleration scheme as our preferred method for simulating orbiting and inspiral binary neutron stars . using this method we turn to the study of the stability of equilibrium models .it has been known for some time that even in the purely newtonian case that tidal instabilities can drive coalescence in binary polytropic systems .recent semi - analytic stability analyses have been performed by lai , rasio , and shapiro ( 1993a,1993c,1994a,1994c ) and lai and shapiro ( 1995 ) .these models , which treat the binary polytropes as self - similar ellipsoidal figures of equilibrium , have found that close polytropic binary systems may be unstable to both dynamic and secular instabilities . 
in this contextwe refer to a dynamical instability as one that takes place on the orbital timescale of the binary system while secular instabilities involve dissipative processes that may occur on much longer timescales .the presence of these instabilities was confirmed numerically by rasio and shapiro ( 1992,1994,1995 ) using sph hydrodynamics methods .more recently , new and tohline ( 1997 ) have performed similar calculations using eulerian hydrodynamics methods and have found results for the ( ) polytropic sequences that differ from those of rasio & shapiro . in this sectionwe discuss our investigations of these equilibrium sequences using the time - advanced rotating frame hydrodynamic scheme discussed in the previous section .the initial data for these equilibrium sequences was constructed as described in section [ sec : eqd ] .the models that we will discuss were all run at resolution with an approximate size of km in each dimension .the grid used to construct the equilibrium data was the same grid that was used for the hydrodynamic simulation , thus obviating any introduction of error by remapping the data onto a new grid . since our primary interest is in neutron star mergers we have only carried out simulations for and equilibrium sequences .the results of our simulations for the equilibrium sequence summarized in figure [ fig : binsep_2_eql ] , which shows the time evolution of the separation between the centers of mass between the two stars .we have utilized the center - of - mass of the stars to define their separation in the same fashion as lrs .in contrast with nt we have found pressure maxima to be ill suited for use as a separation diagnostic since extremely small changes in the values of the pressure in a given zone as the stars move can cause a discrete jump in the location of the maxima . because the center - of - mass is density - weighted , the location of these points changes smoothly . .[ fig : binsep_2_eql ] note that the binary systems with initial separations greater than approximately km seem to be stable over may orbits while those with initial separations less than this radius do not .normalized to the value of the unperturbed polytropic radius this cutoff corresponds to a separation of .this is in close agreement with the minimal energy and angular momentum separation which was found for this sequence ( shown in figure [ fig : eq_seq_2 ] ) .this also corresponds to the point at which the equilibrium sequence transitions from detached to connected binary systems .this result is in close agreement with the predictions of lrs .however , while we agree with the conclusion of rasio & shapiro ( 1992 ) that models with are unstable , we do not agree with their finding that the models inspiral on a timescale of 1 - 1.5 times the initial orbital period .we find that the inspiral occurs on timescales of times the orbital period .this timescale for evolution of these close systems seems to closely follow those of nt .while the simulations of nt were not carried out for a time sufficient to show instabilities , the closest separation systems of nt showed an outward evolution comparable to ours over the first four orbital periods . while we seem to agree with the numerical results of nt , we disagree with the conclusion drawn by nt regarding stability of the polytropic sequence .we see all systems interior to the minimum energy separation inspiral on the timescales of several , i.e. 3 - 5 , orbits .the hydrodynamic simulations of nt have stopped at four orbital periods . 
however ,most of our coalescing systems inspiral at precisely that time .there is no reason to believe that the dynamical timescale for the inspiral must only be 1 - 2 orbital periods .while the inspirals could be a result of a secular instability triggered by numerical inaccuracies within the code , it seems more likely that the dynamical process may take slightly longer than what is anticipated by nt .an interesting feature emerges from figure [ fig : binsep_2_eql ] where we note that the systems with the smallest separations spiral out slightly towards the minimum energy point before undergoing tidal disruption .similar behavior was seen by nt , who unfortunately terminated their calculations before the point where we see the inspiral occur .this can be seen from figure 12 of nt , which shows the growth in the moment of inertia of their closest system .this evolution can be interpreted as an instability that is driving the system towards a lower energy configuration at separations of .the quality of the total energy and angular momentum conservation for the coalescing models is paramount . as the stars coalesce a significant amount of matter is rapidly advected about the grid even in the rotating frame calculations .one might suspect that the quality of angular momentum and energy conservation might break down under such circumstances .however we have found that this does not seem to happen .this is illustrated by the results for the model which is typical of the coalescing cases .as figures [ fig : jz_conserve ] and [ fig : eng_conserve ] indicate , both the angular momentum and the energy are well conserved .the coalescence begins at a time of approximately 8 msec at which time there is a substantial transfer of angular momentum from the high density material to lower density material . as a result of this angular momentum transfer and the disruption of the starstidal `` arms '' are formed of material that is stripped from the stars .these tidal arms contain a significant fraction of the total angular momentum .some of the material in these arms is swept off of the grid , carrying with it angular momentum . 
in figure[ fig : jz_conserve ] we separately track the total angular momentum on the grid at every instant in time along with the cumulative total of the angular momentum swept off of the grid .the total angular momentum is the sum of these two curves .the components of the angular momentum displayed in figure [ fig : jz_conserve ] are entirely composed of angular momentum about the z - axis ; the x and y components are effectively zero .the angular momentum on the grid undergoes a sharp decline during the merger as matter flows off the grid .this is also reflected in the rise of the cumulative total angular momentum that has been advected off the grid by the matter .yet the total angular momentum remains quite well conserved .we wish to emphasize that we have not adjusted the rotation speed of the frame as the stars have inspiraled ; the grid has maintained a constant rotation speed with respect to the laboratory inertial frame .the introduction of a time - varying grid rotation speed would complicate the hydrodynamic equations and would at best yield only relatively small improvements in angular momentum conservation .the evolution of the three components of the total energy is shown in [ fig : eng_conserve ] .there is a slight rise of a few percent in the total energy over the course of the entire simulation but no significant jump during the coalescence .a modest transfer of kinetic and potential energy occurs during the the merger but this does not have a pronounced effect on the conservation of total energy .the hydrodynamic evolution of models from the equilibrium sequence shows an instability at a separation of approximately in good agreement with the minimum energy and angular momentum separation of .the evolution of three models with resolution is shown in figure [ fig : binsep_3_eql ] .[ fig : binsep_3_eql ] where the binary center - of - mass separation is shown . because of the difficulty of constructing high - resolution equilibrium models for the sequence , we have carried out only five simulations bracketing the predicted point of instability .our results again agree with the location of the instability identified in rs94 ( see rs94 figure 3 ) .rs94 found the instability occured at , a value within 10% of the semi - analytic prediction of lrs of a point of instability of .in contrast , we do not agree with the results of nt who find that binaries at larger separations , e.g. models , are unstable to merger ( see figure 13 of nt ) .our models with this initial separation exhibit no sign of instability .a puzzling fact about the nt results for the sequence is that even the largest separation model with an initial value of show signs of a slow orbital decay .neither we , nor rs , see such behavior .many numerical investigations of the dynamics of binary neutron star coalescence have employed spherical stars as initial data . as we discussed in section[ subsec : rot ] the tidal distortions for widely separated stars are small and one can often assume that spherical stars are a good approximation to the true equilibrium fluid bodies .this assumption clearly breaks down as the separation between the two stars is reduced .the critical question is as what point does this break down occur ? 
in order to clarify the realm of validity of the spherical initial data approximation we have carried out a series of simulations using and polytropes as initial data .the initial separations were varied in the same fashion as the the series of runs for equilibrium data models .the evolution of these separations for the case is shown in figure [ fig : binsep_2_sph ] . .[ fig : binsep_2_sph ] at larger separations the systems are stable for long timescales ; the separations to do not significantly evolve .the small oscillations present reflect the fact that the spherical stars are not in perfect equilibrium initially and consequently the evolving systems undergo small epicyclic oscillations .similar behavior has been seen by rs94 . for binary separationsnearer the equilibrium sequence stability limit we can see substantial differences between the binaries shown in figure [ fig : binsep_2_sph ] and their equilibrium counterparts shown in figure [ fig : binsep_2_eql ] . for systems withseparations less than 30 km , the orbital separation is diminishing . in the equilibrium case these systems are stable as is seen in figure [ fig : binsep_2_eql ] .similar behavior is seen in the case as is shown in figure [ fig : binsep_3_comp ] , which compares the equilibrium and spherical - star models . .[ fig : binsep_3_comp ] the rate at which systems inspiral is clearly high for systems of smaller initial separation .in fact , the closest systems are disrupted almost immediately .however , the systems with initially wider separations show no sign of instability .the results clearly indicate that for systems with initial separations well beyond the tidal instability limit that the spherical - star approximation is quite acceptable .this point is very important for the case of the coalescence of two rotating neutron stars where one would have to solve the compressible darwin riemann problem in order to obtain equilibrium initial data . by starting sufficiently far beyond the tidal instability limitone may be able to effectively employ static models of isolated rotating neutron stars as initial data for binary configurations .furthermore , the isolated star approximation , using post - newtonian models for the isolated stars , could also greatly simplify the construction of initial data for post - newtonian simulations as well .self - gravitating hydrodynamic models for binary neutron star phenomena pose some unique challenges for numerical modelers .since the use of eulerian hydrodynamics techniques is prevalent in newtonian , post - newtonian , and relativistic models of binary neutron star coalescence , it is vital that we have a good understanding of the role that the numerical techniques play in determining the outcome of the models . to this end, we have carried out a number of studies designed to compare rotating and inertial frame newtonian hydrodynamic models as well as to compare several choices that could be made for the coupling of gravity and matter .the lessons that are learned from this efforts will be invaluable for post - newtonian and relativistic models as well .we have been able to show that a combination of a rotating frame of reference and a time - advanced gravitational acceleration centering in the gas momentum equation yields adequate angular momentum conservation for the orbiting and merging binary problems . 
in contrast, we have found that the use of a inertial laboratory frame together with a time - lagged gravitational coupling yields incorrect results .the inertial frame methods produces a substantial angular momentum loss that leads to a spurious inspiral of what should be a stable newtonian binary system .this result indicates that the use of inertial frames which involve stars advecting across the grid should be avoided where possible .we have found a reliable method for constructing equilibrium initial data on the hydrodynamic grids for use in hydrodynamic simulations .this method employs a method of solving the poisson problem , including the determination of consistent boundary conditions , for isolated self - gravitating systems without having to resort to multi - pole expansions of the mass distribution .this iterative method is easily implemented and produces initial data that is consistent with the hydrodynamic grid . using this methodwe have constructed equilibrium sequences that closely agree with the semi - analytic calculations of lai , rasio , and shapiro for and polytropic sequences .while we see qualitative similarities with the results of new & tohline the locations of the minima differ somewhat from theirs and are closer to the lrs predictions . using the initial data from our equilibrium sequences we have investigated the stability of these models .our results are in very close agreement with the numerical sph models of rasio and shapiro .in contrast , we find that we disagree with the conclusions of new & tohline on the stability of the and equilibrium sequences .finally , we investigate the effects of using the isolated star approximation for initial data .we find that for separations modestly greater than the tidal instability limit that the use of isolated polytropes for initial data has little influence on the subsequent evolution of the binary system .this point justifies the use of the isolated star approximation for the construction of equilibrium data for rotating , post - newtonian , and other complex binary neutron star systems .we would like to express our thanks to our colleagues p. annios , g. daues , j. hayes , i. iben , e. seidel , p. saylor , m. norman , w .- m .suen , i. foster , j. lattimer , p. leppik , m. prakash , p. marronetti , g. mathews , j. wilson , and d. mihalas for many helpful conversations regarding this work .we would also like to thank john shalf , david bock , john bialek , and andy hall for extensive visualization support .finally we wish to thank the nasa earth and space science high performance computing and communications program for funding for this work under nasa can s5 - 3099 .acc would also like to thank the department of energy under grant no .b341495 to the center for astrophysical thermonuclear flashes at the university of chicago .computational resources were provided by the national center for supercomputing applications under metacenter allocation # mca97s011n .in this appendix we briefly describe the details of our hydrodynamic algorithm .the finite - differencing methodology is identical to that for the zeus algorithm as described by stone & norman ( 1992 ) .an exception is the addition of the coriolis and centrifugal force terms , which do not appear in sn .additionally , we have employed only cartesian coordinates , which significantly simplifies the equations as compared to the generalized coordinates of sn . 
for intimate details of the algorithmwe refer the reader to sn .we wish to point out one important difference of our algorithm ( v3d ) from the zeus algorithm : although the finite - difference stencils of the equations are identical , the order of updates differs significantly as discussed in section [ subsec : numerical ] .nevertheless , we still employ a multi - step ( operator split ) methodology as discussed in section [ sec : hydro ] . in the sn nomenclaturethe euler equations are broken up into the _ transport _ and _ source _ terms .the transport step results in the update of the hydrodynamic variables as due to the advective terms of the euler equations , while the source step results in the updates due to the remaining terms .we describe each of these in turn .in the transport step , equations ( [ eq : cont_t])-([eq : rgasm_t ] ) are updated .these equations represent only the advective part of the hydrodynamic evolution . because of the 3-dimensional nature of these equations we employ the widely used dimensional operator splitting technique of strang ( 1968 ) which decomposes the 3-d update into a series of 1-d updates in each dimension . for simplicity we will describe our updates in only the x - direction .the application to the y- and z - directions is obvious . in each dimension equations ( [ eq : cont_t])-([eq : rgasm_t ] ) are simple conservation laws of the form where is generic variable representing and advected quantity and is the flux of that quantity in the x - direction .for the five equations ( [ eq : cont_t])-([eq : rgasm_t ] ) takes the respective forms of , , or , while takes the forms of , , and .the fluxes are calculated for the x - faces of a cell centered around the point at which is defined .note that these cells will differ for the five variables with the exception of and which are both defined at the same point . fora given cell the update of will take the form where and are the fluxes on the right and left x - faces of the cell , is the cell width , and is the known value of at timestep . the notation is used to denote that this is only a partial update of only due to advection .the advection scheme utilizes the consistent advection scheme of norman ( 1980 ) which ties the energy and momentum fluxes to the mass flux , i.e. we define where is the internal energy per gram and where where the indicates that values of variables are calculated using the monotonic advection scheme of van leer ( 1977 ) .the implementation of this scheme is detailed in sn and we refer the reader to that paper for more information . the implementation of the source step in nearly identical to that of sn . in this stepwe solve equations ( [ eq : gase_s ] ) and ( [ eq : rgasm_s ] ) .note that there are no source terms for the continuity equation and thus the source step does not affect any change in the density .thus the updates of the internal energy density , , and the momentum density components , , are of the form this update makes use of the intermediate result obtained from the previously undertaken transport step .since the details of the finite - differencing for most of these source terms are given in sn we refer the reader to that work for further information .however , in the gas momentum equation the the coriolis and centrifugal force terms are differenced as \\ \label{eq : candcx}\end{aligned}\ ] ] where is the x - coordinate of the grid center . 
in equation ( [ eq : candcx ] )we have employed the difference notation as detailed in sn .the update for the y - velocity component is similarly given by \label{eq : candcy}\end{aligned}\ ] ] where is the y - coordinate of the center of the grid .note that since we are considering non - inertial frames that rotate around the z - axis , there are no coriolis or centrifugal contributions to the z - component of the momenta .the coriolis and centrifugal updates are completed in the middle of the source step .after the updates to the momenta due to pressure and gravitational accelerations have been completed , the velocities are calculated from the momenta .equations ( [ eq : candcx ] ) and ( [ eq : candcy ] ) are then applied to get the non - inertial force updates to the velocities .finally , the viscous stress updates to the velocities are completed .as the last step the source terms for the gas energy equation are solved in order to obtain the new internal energy .
the numerical modeling of binary neutron star mergers has become a subject of much interest in recent years . while a full and accurate model of this phenomenon would require the evolution of the equations of relativistic hydrodynamics along with the einstein field equations , a qualitative study of the early stages on inspiral can be accomplished by either newtonian or post - newtonian models , which are more tractable . however , even purely newtonian models present numerical challenges that must be overcome in order to have accurate models of the inspiral . in particular , the simulations must maintain conservation of both energy and momenta , and otherwise exhibit good numerical behavior . a spate of recent papers have detailed the results for newtonian and post - newtonian models of neutron star coalescence from a variety of groups who employ very different numerical schemes . these include calculations that have been carried out in both inertial and rotating frames , as well as calculations that employ both equilibrium configurations and spherical stars as initial data . however , scant attention has been given to the issue of the the accuracy of the models and the dependence of the results on the computational frame and the initial data . in this paper we offer a comparison of results from both rotating and non - rotating ( inertial ) frame calculations . we find that the rotating frame calculations offer significantly improved accuracy as compared with the inertial frame models . furthermore , we show that inertial frame models exhibit significant and erroneous angular momentum loss during the simulations that leads to an unphysical inspiral of the two neutron stars . we also examine the dependence of the models on initial conditions by considering initial configurations that consist of spherical neutron stars as well as stars that are in equilibrium and which are tidally distorted . we compare our models those of rasio & shapiro ( 1992,1994a ) and new & tohline ( 1997 ) . finally , we investigate the use of the isolated star approximation for the construction of initial data .
gravitational lensing has become one of important subjects in modern astronomy and cosmology ( e.g. , schneider 2006 , weinberg 2008 ) .it has many applications as gravitational telescopes in various fields ranging from extra - solar planets to dark matter and dark energy at cosmological scales .this paper focuses on gravitational lensing due to a n - point mass system .indeed it is a challenging problem to express the image positions as functions of lens and source parameters .there are several motivations .one is that gravitational lensing offers a tool of discoveries and measurements of planetary systems ( schneider and weiss 1986 , mao and paczynski 1991 , gould and loeb 1992 , bond et al .2004 , beaulieu et al .2006 ) , compact stars , or a cluster of dark objects , which are difficult to probe with other methods .gaudi et al . (2008 ) have recently found an analogy of the sun - jupiter - saturn system by lensing .another motivation is theoretically oriented .one may be tempted to pursue a transit between a particle method and a fluid ( mean field ) one . for microlensing studies ,particle methods are employed , because the systems consist of stars , planets or machos . in cosmological lensing , on the other hand , light propagation is considered for the gravitational field produced by inhomogeneities of cosmic fluids , say galaxies or large scale structures of the universe ( e.g. , refregier 2003 for a review ) .it seems natural , though no explicit proof has been given , that observed quantities computed by continuum fluid methods will agree with those by discrete particle ones in the limit , at least on average , where is the number of particles .related with the problems mentioned above , we should note an astronomically important effect caused by the finiteness of .for most of cosmological gravitational lenses ( both of strong and weak ones ) , a continuum approximation can be safe and has worked well .there exists an exceptional case , however , for which discreteness becomes important .one example is a quasar microlensing due to a point - like lens object , which is possibly a star in a host galaxy ( for an extensive review , wambsganss 2006 ) .a galaxy consists of very large number particles , and light rays from an object at cosmological distance may have a chance to pass very near one of the point masses . as a consequence of finite- effect in large point lenses , anomalous changes in the light curveare observed .for such a quasar microlensing , hybrid approaches are usually employed , where particles are located in a smooth gravitational field representing a host galaxy .it is thus likely that point - mass approach will be useful also when we study such a finite- effect at a certain transit stage between a particles system and a smooth one .along this course , an ( 2007 ) investigated a point lens model , which represents a very special configuration that every point masses are located on regular grid points . for a point - mass lens at a general configuration ,very few things are known in spite of many efforts . among known onesis the maximum number of images lensed by point masses . 
after direct calculations by witt ( 1990 ) and mao, petters and witt ( 1997 ) , a careful study by rhie ( 2001 for n=4 , 2003 for general n ) revealed that it is possible to obtain the maximum number of images as .this theorem for polynomials has been extended to a more general case including rational functions by khavinson and neumann ( 2006 ) .( see khavinson and neumann 2008 for an elegant review on a connection between the gravitational lens theory and the algebra , especially the fundamental theorem of algebra , and its extension to rational functions ) . * theorem * ( khavinson and neumann 2006 ) : + let , where and are relatively prime polynomials in , and let be the degree of .if , then the number of zeros for . here , and denote a complex number and its complex conjugate , respectively .furthermore , bayer , dyer and giang ( 2006 ) showed that in a configuration of point masses , replacing one of the point deflectors by a spherically symmetric distributed mass only introduces one extra image .hence they found that the maximum number of images due to distributed lensing objects located on a plane is .global properties such as lower bounds on the number of images are also discussed in petters , levine and wambsganss ( 2001 ) and references therein . in spite of many efforts on lensing objects , functions for image positionsare still unknown even for point - mass lenses in a general configuration under the thin lens approximation .hence it is a challenging problem to express the image positions as functions of lens and source locations .once such an expression is known , one can immediately obtain magnifications via computing the jacobian of the lens mapping ( schneider et al .1992 ) . only for a very few cases such as a single point mass and a singular isothermal ellipsoid ,the lens equation can be solved by hand and image positions are known , because the lens equation becomes a quadratic or fourth - order one ( for a singular isothermal ellipsoid , asada et al .for the binary lens system , the lens equation has the degree of five in a complex variable ( witt 1990 ) .it has the same degree also in a real variable ( asada 2002a , asada et al .this improvement is not trivial because a complex variable brings two degrees of freedom. this single - real - variable polynomial has advantages .for instance , the number of real roots ( with vanishing imaginary parts ) corresponds to that of lensed images .the analytic expression of the caustic , where the number of images changes , is obtained by the fifth - order polynomial ( asada et al .galois showed , however , that the fifth - order and higher polynomials can not be solved algebraically ( van der waerden 1966 ) .hence , no formula for the quintic equation is known . for this reason, some numerical implementation is required to find out image positions ( and magnifications ) for the binary gravitational lens for a general position of the source . only for special cases of the source at a symmetric location such as on - axis sources ,the lens equation can be solved by hand and image positions are thus known ( schneider and weiss 1986 ) . for a weak field region ,some perturbative solutions for the binary lens have been found ( bozza 1999 , asada 2002b ) , for instance in order to discuss astrometric lensing , which is caused by the image centroid shifts ( for a single mass , miyamoto and yoshii 1995 , walker 1995 ; for a binary lens , safizadeh et al .1999 , jeong et al . 
1999 ,asada 2002b ) .if the number of point masses is larger than two , the basic equation is much more highly non - linear so that the lens equation can be solved only by numerical methods . as a result, observational properties such as magnifications and image separations have been investigated so far numerically for point - mass lenses .this makes it difficult to investigate the dependence of observational quantities on lens parameters .this paper is the first attempt to seek an analytic expression of image positions without assuming any special symmetry . for this purpose, we shall present a method of taylor - series expansion to solve the lens equation for point - mass lens systems .our method allows a systematic iterative analysis as shown later . under three assumptions of weak gravitational fields , thin lenses and small deflection angles ,gravitational lensing is usually described as a mapping from the lens plane onto the source plane ( schneider et al .bourassa and kantowski ( 1973 , 1975 ) introduced a complex notation to describe gravitational lensing .their notation was exclusively used to describe lenses with elliptical or spheroidal symmetry ( borgeest 1983 , bray 1984 , schramm 1990 ) . for point lenses , witt ( 1990 ) succeeded in recasting the lens equation into a single - complex - variable polynomial .this is in an elegant form and thus has been often used in investigations of point - mass lenses .an advantage in the single - complex - variable formulation is that we can use some mathematical tools applicable to complex - analytic functions , especially polynomials ( witt 1993 , witt and petters 1993 , witt and mao 1995 ) .one tool is the fundamental theorem of algebra : every non - constant single - variable polynomial with complex coefficients has at least one complex root .this is also stated as : every non - zero single - variable polynomial , with complex coefficients , has exactly as many complex roots as its degree , if each root is counted as many times as its multiplicity . on the other hand , in the original form of the lens equation , one can hardly count up the number of images because of non - linearly coupled properties . this theorem , therefore , raises a problem in gravitational lensing .the single - variable polynomial due to point lenses has the degree of , though the maximum number of images is .this means that unphysical roots are included in the polynomial ( for detailed discussions on the disappearance and appearance of images near fold and cusp caustics for general lens systems , see also petters , levine and wambsganss ( 2001 ) and references therein ) .first , we thus investigate explicitly behaviors of roots for the polynomial lens equation from the viewpoint of perturbations .we shall identify unphysical roots .secondly , we shall re - examine the lens equation , so that the appearance of unphysical roots can be avoided .this paper is organised as follows . in section 2, the complex description of gravitational lensing is briefly summarised .the lens equation is embedded into a single - complex - variable polynomial in section 3 .perturbative roots for the complex polynomial are presented for binary and triple systems in sections 4 and 5 , respectively .they are extended to a case of point lenses in section 6 . 
in section 7, we re - examine the lens equation in a dual - complex - variables formalism and its perturbation scheme for a binary lens for its simplicity .the perturbation scheme is extended to a point lens system in section 8 .section 9 is devoted to the conclusion .we consider a lens system with n point masses .the mass and two - dimensional location of each body is denoted as and the vector , respectively . for the later convenience ,let us define the angular size of the einstein ring as where is the gravitational constant , is the light speed , is the total mass and , and denote distances between the observer and the lens , between the observer and the source , and between the lens and the source , respectively . in the unit normalised by the angular size of the einstein ring ,the lens equation becomes where and denote the vectors for the position of the source and image , respectively and we defined the mass ratio and the angular separation vector as and . in a formalism based on complex variables , two - dimensional vectors for the source , lens and image positionsare denoted as , , and , respectively ( see also fig . ) . by employing this formalism ,the lens equation is rewritten as where the asterisk means the complex conjugate .the lens equation is non - analytic because it contains both and .( the circle ) and ( the filled disk ) , respectively .locations of n point masses are denoted by ( filled triangles ) for . here , we assume the thin lens approximation . ,the complex conjugate of eq .( ) is expressed as this expression can be substituted into in eq .( ) to eliminate the complex variable . as a result, we obtain a -th order analytic polynomial equation as ( witt 1990 ) equation ( a3 ) in witt ( 1990 ) takes a rather complicated form because of inclusion of nonzero shear due to surrounding matter .bayer et al . ( 2006 ) uses a complex formalism in order to discuss the maximum number of images in a configuration of point masses , by replacing one of point deflectors by a spherically symmetric distributed mass .their lens equation ( 3 ) agrees with eq .( ) . in order to show this agreement, one may use .it is worthwhile to mention that eq . contains not only all the solutions for the lens equation but also unphysical false roots which do not satisfy eq . , in price of the manipulation for obtaining an analytic polynomial equation , as already pointed out by rhie ( 2001 , 2003 ) and bayer et al .such an inclusion of unphysical solutions can be easily understood by remembering that we get unphysical roots as well as true ones if one takes a square of an equation including the square root .in fact , an analogous thing happens in another example of gravitational lenses such as an isothermal ellipsoidal lens as a simple model of galaxies ( asada et al .2003 ) .in general , the mass ratio satisfies , so that it can be taken as an expansion parameter . without loss of generality , we can assume that the first lens object is the most massive , namely for . thus , formal solutions are expressed in taylor series as where the coefficients are independent of . up to this point, the origin of the lens plane is arbitrary . 
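as a concrete check of the image counting described above, the following sketch (python with numpy and sympy) builds the analytic polynomial for a binary lens by eliminating the conjugate variable, solves it numerically, and discards the roots that fail the original non-analytic lens equation. since the explicit symbols above did not survive extraction, it assumes the standard normalized form w = z - nu_1/(z* - e_1*) - nu_2/(z* - e_2*) with nu_1 + nu_2 = 1 and all lengths in units of the einstein angle; the numerical values are purely illustrative.

```python
import numpy as np
import sympy as sp

z = sp.symbols('z')
# assumed normalized binary lens: mass ratios nu1 + nu2 = 1, lens positions e1, e2,
# source position w, all in units of the Einstein angle (illustrative values only)
nu1, nu2 = sp.Rational(9, 10), sp.Rational(1, 10)
e1, e2 = sp.Integer(0), sp.Integer(1)
w = sp.Rational(3, 10) + sp.Rational(1, 10) * sp.I

# the conjugate of the lens equation gives z* as an analytic function of z;
# substituting it back and clearing denominators yields the fifth-order polynomial
zbar = sp.conjugate(w) + nu1 / (z - e1) + nu2 / (z - e2)
expr = z - nu1 / (zbar - sp.conjugate(e1)) - nu2 / (zbar - sp.conjugate(e2)) - w
num, _ = sp.fraction(sp.cancel(expr))
poly = sp.Poly(num, z)
roots = np.roots([complex(sp.N(c)) for c in poly.all_coeffs()])

# keep only the roots that also satisfy the original (non-analytic) lens equation
def residual(zz):
    n1, n2 = float(nu1), float(nu2)
    ws, p1, p2 = complex(sp.N(w)), float(e1), float(e2)
    return zz - n1 / np.conj(zz - p1) - n2 / np.conj(zz - p2) - ws

physical = [r for r in roots if abs(residual(r)) < 1e-6]
print(poly.degree(), "polynomial roots;", len(physical), "physical images (3 or 5)")
```

depending on whether the source sits inside or outside the caustic, three or five of the five polynomial roots pass the test; the remaining ones are the unphysical roots discussed in the text.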
in the following ,the origin of the lens plane is chosen as the location of the mass , such that one can put .this enables us to simplify some expressions and to easily understand their physical meanings , mostly because gravity is dominated by in most regions except for the vicinity of .namely , it is natural to treat our problem as perturbations around a single lens by ( located at the origin of the coordinates ) . in numerical simulations or practical dataanalysis , however , one may use the coordinates in which the origin is not the location of .if one wishes to consider such a case of , one could make a translation by as , and in our perturbative solutions that are given below .in this section , we investigate binary lenses explicitly up to the third order .this simple example may help us to understand the structure of the perturbative solutions .for an arbitrary case , expressions of iterative solutions are quite formal ( see below ) . in powers of ,the polynomial equation is rewritten as where we defined ( w^ * z^2-w w^ * z - w ) , \nonumber\\ f_1(z)&=&(z - w ) \nonumber\\ & & \times % \left ( \bigl ( \epsilon ( z - w ) [ ( 2w^*-\epsilon^*)z+2 ] % \nonumber\\ % & & -\epsilon^ * z^2 ( z-\epsilon ) - \epsilon z \bigr ) , % \right ) , \nonumber\\ f_2(z)&=&\epsilon^2 ( z - w ) .\label{f}\end{aligned}\ ] ] at , the lens equation becomes the fifth - order polynomial equation as .zeroth order solutions are obtained by solving this .all the solutions are ( doublet ) , and , where we defined one of the roots , , is unphysical , because it does not satisfy eq . at . by using all the 0th order roots including unphysical ones , is factorised as next , we seek - order roots .we put . at the linear order in , eq .( ) becomes where the prime denotes the derivative with respect to .thereby we obtain a - order root as the similar manner can not be applied to a case of , because it is a doublet root with , while . at ,( ) can be factorised as + \epsilon \right ) \nonumber\\ & & \times \left(z_{(1 ) } [ ( \epsilon - w)(w^ * \epsilon + 1)-\epsilon ] + \epsilon ( \epsilon - w ) \right ) = 0 .\label{2-lenseq-2nd}\end{aligned}\ ] ] hence , we obtain two roots as here , the latter root expressed by eq .( ) is unphysical and thus abandoned , because it does nt satisfy the original lens equation ( ) . on the other hand , the former root by eq .( ) satisfies the equation and thus expresses a physically correct image .99 an j. h. , 2007 , mnras , 376 , 1814 asada h. , 2002a , a&a , 390 , l11 asada h. , 2002b , apj , 573 , 825 asada h. , kasai t. , kasai m. , 2002c , prog .phys . , 108 , 1031 asada h. , hamana t. , kasai m. , 2003 , a&a , 397 , 825 asada h. , kasai t. , kasai m. , 2004 , prog .phys . , 112 , 241 bayer j. , dyer c. c. , giang d. , 2006 , gen .38 , 1379 beaulieu j. p. , et al . , 2006 , nature , 439 , 437 bond i. a. , et al . , 2004 , apj , 606 , l155 borgeest u. , 1983 , a&a , 128 , 162 bourassa r. r. , kantowski r. , norton t. d. , 1973 , apj , 185 , 747 bourassa r. r. , kantowski r. , 1975 , apj , 195 , 13 bozza v. , 1999 , a&a , 348 , 311 bray i. , 1984 , mnras , 208 , 511 gaudi b. s. , et al . , 2008 , science , 319 , 927 gould a. , loeb a. , 1992 , apj , 396 , 104 jeong y. , han c. , park s. , 1999 , apj , 511 , 569 khavinson d. , neumann g. , 2006 , proc ., 1077 khavinson d. , neumann g. , 2008 , not . amer .55 , 666 mao s. , paczynski b. , 1991 , apj , 374 , 37l mao s. , petters a. , witt h. j. , 1997 , in _ proceedings of the eighth marcel grossmann meeting on general relativity _r. 
ruffini , ( singapore , world scientific ) astro - ph/9708111 miyamoto m. , yoshii y. , 1995 , aj , 110 , 1427 petters a. o. , levine h. , wambsganss j. , 2001 , _ singularity theory and gravitational lensing _( boston , birkhuser ) refregier a. , 2003 , araa , 41 , 645 rhie s. h. , 2001 , arxiv : astro - ph/0103463 rhie s. h. , 2003 , arxiv : astro - ph/0305166 safizadeh n. , dalal n. , griest k. , 1999 , apj , 522 , 512 schneider p. , weiss a. , 1986 , a&a .164 , 237 schneider p. , ehlers j. , falco e. e. , 1992 , _ gravitational lenses _( heidelberg , springer - verlag ) schneider p. , 2006 , _ extragalactic astronomy and cosmology : an introduction _ ( heidelberg , springer - verlag ) schramm t. , 1990 , a&a , 231 , 19 van der waerden b. l. , 1966 , _ algebra i _( heidelberg , springer - verlag ) walker m. a. , 1995 , apj , 453 , 37 wambsganss j. , 2006 , in _ proceedings of the 33rd saas - fee advanced course _ ( heidelberg , springer - verlag ) : arxiv : astro - ph/0604278 weinberg s. , 2008 , _ cosmology _ ( oxford , oxford univ . press ) witt h. j. , 1990 , a&a .236 , 311 witt h. j. , 1993 , apj , 403 , 530 witt h. j. , petters a. , 1993 , j. math .34 , 4093 witt h. j. , mao s. , 1995 , apj . 447 , l105
this paper makes the first systematic attempt to determine, using perturbation theory, the positions of images produced by gravitational lensing due to an arbitrary number of coplanar masses without any symmetry on the plane, as functions of the lens and source parameters. we present a method of taylor-series expansion to solve the lens equation under a small mass-ratio approximation. first, we investigate the perturbative structure of the commonly used single-complex-variable polynomial. perturbative roots are found. some roots represent positions of lensed images, while the others are unphysical because they do not satisfy the lens equation. this is consistent with the fact that the degree of the polynomial, namely the number of its zeros, exceeds the maximum number of lensed images if n=3 (or more). the theorem never tells us which roots are physical (or unphysical); in this paper, the unphysical ones are identified. secondly, to avoid unphysical roots, we re-examine the lens equation. the advantage of our method is that it allows a systematic iterative analysis. we determine image positions for binary lens systems up to third order in the mass ratios and for arbitrary n point masses up to second order, which clarifies the dependence on the parameters. thirdly, the number of images that admit a small mass-ratio limit is less than the maximum number. this suggests that the positions of the extra images cannot be expressed as maclaurin series in the mass ratios. magnifications are finally discussed. [ firstpage ] gravitational lensing; cosmology: theory; stars: general; methods: analytical.
this paper concerns the numerical solution of the finite-horizon optimal investment problem with transaction costs under potential utility. let us consider an investor whose wealth can be invested in a risky stock and in a riskless bank account. we suppose that the investor is risk averse with constant relative risk aversion (crra). in , merton showed that, in the absence of transaction costs, the problem can be solved explicitly: the optimal strategy consists in keeping a fixed proportion between the money invested in the risky asset and the money in the bank account. when transaction costs are considered, the merton strategy is unfeasible because it requires continuous portfolio rebalancing with unbounded costs. proportional transaction costs were first introduced in . more recently, in , the problem was reformulated as a non-linear parabolic double obstacle problem posed in one spatial variable and defined in an unbounded domain. several explicit properties and formulae were obtained in , although explicit formulae for the solution are not available. the problem was solved numerically in , where the authors employ a characteristics method with a projected relaxation scheme; the scheme provides satisfactory results in good agreement with the results in . when solving financial problems, such as finding investment strategies or pricing derivative contracts, there is in general no known closed-form solution, and several numerical methods have been employed. without aiming to be exhaustive, monte-carlo based methods ( , ), piecewise linear interpolations ( ), lattice methods ( , ), finite elements ( ) and spectral methods ( ) are some of them. a general review of financial problems and models, numerical techniques and software tools can be found in . the objective of this paper is to construct a spectral method specifically adapted to the optimal investment problem with potential utility when proportional transaction costs are present. as is well known, spectral methods are a class of spatial discretizations for partial differential equations that offer fast convergence in the case of smooth solutions. they are not yet widely used in computational finance because it is usually believed that the lack of smoothness present in most interesting problems makes spectral methods uncompetitive. however, several papers have used spectral methods for problems in finance with good results. for instance, in , a fourier-hermite procedure for the valuation of american options has been presented. in , a spectral method based on laguerre polynomials has been employed for the numerical valuation of bonds with embedded options. a fourier spectral method to compute fast and accurate prices of american options written on assets following garch models has been presented in . in , the authors use an adaptive method with chebyshev polynomials coupled with a dynamic programming procedure for contracts with early exercise features. in , a very efficient procedure for asian options defined on arithmetic averages has been proposed. in all cases, the spectral-based methods have proved to be competitive with other alternatives in terms of precision versus the computing time needed to compute the numerical solution.
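for reference, the merton proportion mentioned above has a closed form in the frictionless case. the one-liner below (python) assumes the power utility u(x) = x^gamma/gamma with 0 < gamma < 1 and the usual notation (alpha the expected stock return, r the risk-free rate, sigma the volatility); this is the textbook result, not a formula taken from the present paper.

```python
def merton_fraction(alpha, r, sigma, gamma):
    """Fraction of total wealth held in the risky asset in the classical
    frictionless Merton problem with power utility U(x) = x**gamma / gamma."""
    return (alpha - r) / ((1.0 - gamma) * sigma ** 2)

# e.g. alpha = 10%, r = 4%, sigma = 40%, gamma = 0.3  ->  roughly 54% of wealth in the stock
print(merton_fraction(0.10, 0.04, 0.40, 0.3))
```

with proportional costs the optimal policy keeps the portfolio inside a band around this proportion instead of tracking it continuously, which is exactly the free-boundary structure studied in the rest of the paper.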
in the present paper , we restate the problem using polar coordinates .this allows to consider a double parabolic obstacle problem in one spatial - like variable defined in a bounded domain .furthermore , this formulation avoids the emergence of nonlinear terms simplifying the numerical treatment .we present a chebyshev spectral approach based on adaptive meshes to locate the optimal frontiers .although some of the numerical difficulties that appear with the parabolic double obstacle problem are avoided with our approach , we still have to deal with the so - called gibbs effect , which comes from the fact that the objective function is continuous but not differentiable at maturity .we show that this issue can be circumvented by using a time - adapted spatial mesh .we show that our approach is efficient by comparing it with a standard finite difference scheme .the outline of the paper is as follows . in section [ ch3toiprecon ] ,a description of the optimal investment problem as it can be found in or is presented . in section [ ch3dpr ] ,the problem is reformulated as a parabolic double obstacle problem as it was done in .afterwards , we propose an equivalent formulation of the problem employing polar coordinates .ch3 nm ] is devoted to a mesh - adapted chebyshev - collocation method which solves the problem of section [ ch3dpr ] . in section [ ch3numresults ]we perform the numerical analysis of the method .section [ ch3conclus ] presents some conclusions and future research .we consider an optimal investment problem with transaction costs , , .let be a filtered probability space .let us consider an investor who holds amounts and in a bank and a stock account respectively .the dynamics of the processes is where denotes the constant risk - free rate , is the constant expected rate of return of the stock , is the constant volatility of the stock and is a standard brownian motion such that where is the natural filtration induced by .we suppose that and are adapted , right - continuous , nonnegative and nondecreasing processes representing the cumulative monetary values of the stock purchased or sold respectively and and , represent the constant proportional transaction costs incurred on the purchase or sale of the stock . in this paperwe assume that .the finance meaning of equation ( [ ch3ecudinacc ] ) is natural . along time , the rate of change of the amount of money invested in the risky asset , represented by the stochastic process , evolves according to a standard geometric brownian motion modified by the difference between the amount of money invested in buying stock , , and the amount of money obtained selling stock , . 
at the same time, the value of the bank account , , is instantaneously increased by the difference , that represents the net flow of money resulting from stock negotiations , including the transaction costs .processes and can be financially understood as an historical record of the total purchases and sales of stock of the investor .the net wealth is the money the investor would have if he closes his positions .it can be written as if the investor is long in the stock or in case the investor is short in the stock .let be a utility function , that is , a continuous , strictly increasing , concave function .the optimal value function is given by : ,\ ] ] for all ] of subject to : where the existence and uniqueness of a viscosity solution of ( [ chp3hjbecu1])-([chp3hjbecu2 ] ) has been proved in .there , it is proved that at any time , the spatial domain is divided in three regions , namely , in financial terms , the buying region , the selling region and the no transactions region .the selling and buying regions do not intersect .for simplicity in the exposition , we suppose that . with this hypothesis ,short - selling is always a suboptimal strategy , , .this means that the optimal trading strategy is always to have a nonnegative amount of money invested in the stock .as remarked in , the choice of the potential utility function is interesting since it leads to the homothetic property in the optimal value function , this property is used in to reduce the dimensionality of the problem . setting , a new function is introduced in , so that : in , the authors prove that is the solution of an one dimensional parabolic double obstacle problem with two free boundaries equivalent to ( [ chp3hjbecu1 ] ) .furthermore , it is also proved in , that there exist two continuous monotonically increasing functions \rightarrow(-(1-\mu),+\infty],\ ] ] such that .the buying and selling regions are characterized by \\mid z\leq\text{sr}^c_f(t ) , t\in[0 , \ t ] \right\ } , \\\text{\textbf{br } } & = \left\{(z , t)\in\omega\times[0,t ] \\mid z\geq\text{br}^c_f(t ) , t\in[0 , \ t ] \right\}. 
\\ \end{aligned}\ ] ] although other properties and explicit formulas are obtained in , a complete analytical solution is still missing and numerical procedures have to be used , see , for example , .here , inspired by , we take advantage of ( [ homothetic ] ) by working in polar coordinates , .it is not difficult to show that ( [ chp3hjbecu1])-([chp3hjbecu3 ] ) are equivalent to subject to : where and based on ( [ homothetic ] ) , we conjecture a solution to ( [ ch3ecuinicpol ] ) of the form : taking into account that and substituting in ( [ ch3ecuinicpol ] ) , ( [ finalcondition1 ] ) , we see that satisfies , subject to : the functions , , are given by the solvency region in the new coordinates is given by : where this formulation has several advantages over the formulation of .as in , the problem is one dimensional ( ( [ ch3enunprobpolar])-([ch3enunprobpolar2 ] ) do not depend of ) , but in our case the domain is bounded ( ) .furthermore , the operators involved in ( [ ch3enunprobpolar ] ) are linear in , whereas in , the equations contains a nonlinear term .next , we characterize the buying and selling regions in terms of the polar coordinates .first , let us observe that where is the function defined in ( [ checamborig ] ) .let us define the functions and by ,\ ] ] where and are the boundaries of the buying and selling regions in cartesian coordinates defined in ( [ ch3funfronc ] ) .the following proposition is an immediate consequence of the results in .[ ch3tradpropertapolar ] functions , are monotonically decreasing functions .it holds that and that ,\quad \hat{t}_0=t-\frac{1}{\alpha - r}\log\frac{1+\lambda}{1-\mu}.\ ] ] if , then , with it holds that , where is the merton line .if , there exist two values , such that the limit values and are defined by where and are the constants defined in ( * ? ?* theorem 6.1 ) .the functions , satisfy ( see also ( * ? ? ?* proposition 3.4.2 ) ) , .\ ] ] it is now easy to see that , for ] . in value function satisfies * 2 . * the selling regionis defined by . in value function satisfies * 3 . * the no transaction regionis defined by . in , satisfies of the following partial differential equations we remark that if the buying ( ) and selling ( ) frontiers are known , we can compute the value function in and explicitly by a simple integration of equations ( [ chp3brexpformula ] ) and ( [ chp3srexpformula ] ) respectively . 
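the buy / sell / no-transaction structure just described can be illustrated with a small monte-carlo sketch (python). the band below is imposed on the fraction of net wealth held in the stock and its endpoints are hypothetical placeholders (the true frontiers are precisely what the paper computes); purchases of monetary amount d withdraw (1 + lambda) d from the bank and sales of amount d credit (1 - mu) d, in line with the transaction mechanics above.

```python
import numpy as np

rng = np.random.default_rng(0)
r, alpha, sigma = 0.04, 0.10, 0.25        # assumed market parameters
lam, mu = 0.01, 0.01                      # proportional costs on purchases / sales
pi_lo, pi_hi = 0.45, 0.75                 # hypothetical band on the stock fraction
T, n_steps = 1.0, 10_000
dt = T / n_steps

x, y = 0.5, 0.5                           # bank account and stock account
for _ in range(n_steps):
    # uncontrolled dynamics over one time step (Euler-Maruyama)
    x *= 1.0 + r * dt
    y *= 1.0 + alpha * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    pi = y / (x + y)
    if pi < pi_lo:        # buying region: buy just enough to reach the lower frontier
        d = (pi_lo * (x + y) - y) / (1.0 + pi_lo * lam)
        x -= (1.0 + lam) * d
        y += d
    elif pi > pi_hi:      # selling region: sell down to the upper frontier
        d = (y - pi_hi * (x + y)) / (1.0 - pi_hi * mu)
        x += (1.0 - mu) * d
        y -= d

print("net wealth if the stock position is liquidated:", x + (1.0 - mu) * y)
```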
for , we have and for , value function ( numerical solution ) for \times[0,30] ] , for a maturity of years .the figure shows the numerical values obtained for the function with the method described in section [ ch3 nm ] .we have coloured the function depending in whether is in the buying , selling or no transactions region .we can visually check the expected monotonicity of the buying and selling frontiers studying from above ( right ) the two curves which divide the different colours ( red - green and green - blue ) .the buying frontier remains constant ( ) for a certain period near maturity and the stationarity value of both frontiers as we move away from maturity is also observable .the numerical method described in this section is constructed upon the following strategy .let denote an admissible trading strategy where and are the amount of money in the bank and stock accounts at .let and and define where , are the amounts in the bank and stock accounts if strategy is followed .proposition [ ch3tradpropertapolar ] implies that where denotes the optimal trading strategy solving ( [ ch3defvarpphi ] ) .therefore , the optimal value function can be computed as the solution of ( [ ch3enunprobpolar])-([ch3enunprobpolar2 ] ) in ] taking into account that for ] is the stationary state of the selling frontier ( see proposition [ ch3tradpropertapolar ] ) . for ,let us consider the chebyshev nodes in ] and it is contained in the solvency region ] where and are the exact location of the buying and selling frontiers .for any , where is the restriction which guarantees that will be in the interior of , compute with ( [ ch3forint1 ] ) or ( [ ch3forint2 ] ) .it exists such that for any time mesh given by ( [ ch3timediscret ] ) , it holds for any . from , we know that . therefore , is .let from ( [ ch3paramk ] ) be fixed . since is in the interior of , it will exist such that for all : this guarantees that for any equally spaced time mesh , with , . to finish the proof , note that from proposition [ ch3tradpropertapolar ] , and that , so the result follows directly from the definition of .let us suppose that we know an approximation of the function value , and approximate values of and at time .for big enough ( proposition [ ch3propoinclusnteninter ] ) , we can compute ], we define the function as the function value which gives the expected terminal value when the trading strategy is to perform no transactions if , to buy the stock if and to sell the stock if , subject to .therefore , is the solution of the equation subject to let us consider the chebyshev nodes in where are the chebyshev points ( [ ch3chebynodes ] ) .the numerical approximation , to the function is the collocation polynomial of degree defined for by : subject to with ( neumann ) boundary conditions where the equations ( [ ch3collocationmethod])-([ch3collocationmethodcons ] ) define a dense system of linear equations to find the values of , .however , the fact that , with relative few nodes for the spatial mesh we can achieve a very good precision , makes this method competitive with respect to a finite differences method , see section [ ch3numresults ] .let us define which are explicit functions because is a known polynomial in . in ( [ ch3auxencheby ] ) ,we compare ( see ( * ? ? ?* subsection 3.5.3 ) ) whether it is better to not perform transactions or to buy the stock ( resp .sell the stock ) .if polynomial ( resp . 
) it is better to not perform transactions rather than buy ( resp .sell ) the stock .the numerical approximation to the buying and selling frontiers is given by : \right\ } , \\\end{aligned}\ ] ] once we know the location of the frontiers and the function value in that points , we can compute the approximate function value through the following explicit formulas where we have used the notation , ^{\gamma},\ ] ] if , ^{\gamma},\ ] ] if .then the complete algorithm reads as follows : * fix a number and a number big enough such that proposition [ ch3propoinclusnteninter ] holds .+ compute and as in ( [ ch3timediscret ] ) . define . + set and compute with formula ( [ ch3forint1 ] ) + compute , , as the chebyshev interpolation polynomial in of function , given by ( [ ch3enunprobpolar2 ] ) , where denote the chebyshev nodes in .* compute the polynomial solving the collocation equations ( [ ch3collocationmethod ] ) with final condition ( [ ch3valorcondvencheby ] ) and boundary conditions ( [ ch3collocationmethodcons ] ) . *locate the buying and selling frontiers and using ( [ frontiers ] ) . *compute the interval with ( [ ch3forint1 ] ) if or with ( [ ch3forint2 ] ) otherwise .+ compute the numerical approximation at time with formulae ( [ vn1 ] ) , ( [ vn2 ] ) and ( [ vn3 ] ) . *set and stop if or , otherwise , proceed to * step 1*.we consider the parameter values as in the first experiment in . for ] .the colour code is blue if is in the buying , green in the no transactions and red in the selling region.,title="fig:",width=453,height=188 ] we have colored the function depending in whether is in the buying , selling or no transactions region . as in figure[ ch3stationarycn2 ] , we can visually check the properties from proposition [ ch3tradpropertapolar ] .first , we establish the criteria employed in the experiments to build the spatial mesh .we have fixed the control parameter , so that , at least % of the interval corresponds to the no transactions region .the particular choice of does not affect the rate of convergence of the error . in order to compare the performance of the spectral method with other numerical methods ,we have also implemented a central differences ( cd ) based method in order to solve the pde in step 2 ( see subsection [ ch3maccmcc ] ) .the formal study of the error will be conducted for the cases where explicit formulas are available , comparing the results of the central differences and chebyshev methods .the rest of the properties given in , although not included , were also checked .we consider defined in ( [ checamborig ] ) . for , we can explicitly compute with ( * ? ? ?* ( 3.9 ) ) . in figure [ ch3analiticalsolv0 t ]we plot the value of for ].,width=245,height=188 ] the value corresponds in polar coordinates to .a numerical solution can be computed explicitly using and formula ( [ ch3formularelacionpolarorig ] ) , which relates the function in polar coordinates and in the original variables . following figure compares the difference between the analytical solution ] where was computed with the chebyshev method with ( left ) and ( right).,width=529,height=207 ] in both pictures of figure [ ch3valpimedcomcheb ] we can observe an error discontinuity at time . from ( * ? ? 
?* ( 3.9 ) ) , we know that the function is not derivable ( respect time ) at instant .the same phenomena can be observed in the numerical experiments in .we can also see that some oscillations appear at time where we change the kind of adaptive mesh ( see subsection [ ch3maccmam ] ) .we proceed to check the rate of error convergence .we define the root of the mean square error as figure [ ch3converrtesptempvalpimed ] shows the convergence of spatial error ( left ) for and different number of spatial nodes .the right side shows the convergence of temporal error for fixed and different values of . spatial ( left ) and temporal ( right ) error convergence of in logarithmic scale of the central differences ( blue ) and chebyshev ( red ) methods.,width=472,height=200 ] in the left side of figure [ ch3converrtesptempvalpimed ]we have plotted , in logarithmic scale , the number of spatial nodes versus the value .the slope of the regression line of the cd method ( plotted in blue ) is and of the chebyshev method ( plotted in red ) is .the spectral convergence that we could expect in the chebyshev method does not occur due to the regularity of the problem . in the right hand side of figure [ ch3converrtesptempvalpimed ] , we have plotted , in logarithmic scale , the number of time steps versus the value .the slope of the regression line of the cd method ( solid - blue ) is as it could be expected from an order 2 method .the slope of the chebyshev method ( solid - red ) is .we note that for large values of we reach very soon the error limit marked by the size of .we carry out a second experiment doubling the value of ( right - dashed - blue / red ) to check that the lowest value reached by the temporal error was given by the size of the spatial mesh . depending on the error tolerance, we might need a big value for in the cd method but much smaller in the chebyshev method .this makes that , depending on the required precision , chebyshev performs better in computational cost than cd .this will be studied below . from proposition [ ch3tradpropertapolar ] , we know that in polar coordinates . givena number of time steps , we look for which is nearest to and define the absolute error ( just for this experiment ) as : the next figure shows the convergence of spatial error ( left ) for and different number of spatial nodes . the right side shows the convergence of temporal error for ( chebyshev ) and ( cd ) and different values of .spatial ( left ) and temporal ( right ) error ( semilogarithmic scale ) of instant when with the cd ( blue ) and chebyshev ( red ) methods.,width=472,height=200 ] the spatial error ( left ) reduces as we increase the value of . at equal number of nodes ,the chebyshev method gives much smaller errors than the cd method . concerning the temporal error , the results are step shaped because of the definition of absolute error and the time partition when is halved .each time partition is included in the following one and sometimes changes and sometimes not .the temporal error reduces as we increase the value of . as in the spatial error ,the chebyshev method outperforms the cd method . from proposition [ ch3tradpropertapolar ] , we know that where is explicitly computable . 
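the chebyshev method whose convergence is being measured here rests on differentiation matrices at the chebyshev (gauss-lobatto) points. a minimal sketch of the standard construction (essentially trefethen's well-known cheb routine; this is the generic ingredient of such a collocation scheme, not the paper's own code):

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points on [-1, 1] and the differentiation
    matrix D such that D @ f(x) approximates f'(x) at those points."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for the diagonal entries
    return D, x

D, x = cheb(16)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))   # spectrally small for smooth functions
```

the second-derivative operator needed by a parabolic problem is then just D @ D restricted to the interior nodes, which is why relatively few nodes already give high accuracy when the solution is smooth.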
given a number of time steps ,we look for such that for the chebyshev method , the may be bigger than 0 a few time steps prior to .we note that in the chebyshev method , the lower limit of ] ( left ) and zoom around ( right).,width=491,height=207 ] let be the biggest value such that if , the location of the buying frontier oscillates around 0 for and for when it behaves as we could expect from proposition [ ch3tradpropertapolar ] .numerical experiments show that it is better to let oscillate around 0 rather than imposing .the oscillations observed in figure [ ch3v1buyingfrontiercheby ] are generated by the imposition of the neumann conditions .the boundary error is controlled by and , but the spatial error is dominant in this experiment .the instant when the numerical solution begins to oscillate is always very close to and the size of the oscillations reduces as increases .these oscillations are the error that we are going to study .they include all the negative values ( since the buying frontier must be always positive ) and any positive value for discrete times larger than .thus , we define , for this method and experiment , the absolute error ( ae ) as we fix and compute the absolute error for several values for ) . in figure [ ch3v1convespfronnocerocheby ]we plot , in logarithmic scale , the value of versus the absolute error . as we can see the erroris rapidly reduced by increasing .spatial error convergence of the first instant when it is optimal to have a positive amount of stock ( chebyshev method).,width=226,height=188 ] and tend to a stationary state as that can also be computed explicitly ( see proposition [ ch3tradpropertapolar ] ) . computed with the same model parameters as before but for years ( see figure [ ch3stationarycn2 ] ) ,frontiers have stabilized a few years before reaching at : computed with the chebyshev method ( , ) .we define the absolute error ( for this experiment ) as we study the spatial ( and several values for ) and temporal ( for the cd , for the chebyshev method , and several values for ) error convergence .in figure [ ch3v1estacconv ] we plot , in logarithmic scale , the value of ( left ) versus the absolute value of the error and the value of ( right ) versus the absolute value of the error for both methods .spatial ( left ) and temporal ( right ) error convergence , in logarithmic scale , of the stationary state of the buying frontier for the cd(blue ) and chebyshev ( red ) methods.,width=510,height=226 ] in this experiment , temporal error is dominant compared with respect to the spatial error in the chebyshev method . in the case of the cd method , the error depends more in both the spatial and temporal discretizations . on the left side picture, we can see that the chebyshev method reaches the error marked by the time discretization with the smallest number of nodes therefore , if a high precision is required , chebyshev will perform better than the central differences method .the error behaviour of the selling frontier is similar to the one of the buying frontier . in this sectionwe compare the relative performance of the pseudospectral and finite difference methods .first of all , we fix several time and spatial discretization parameters : a. ] ( chebyshev ) c. 
$ ] ( central differences ) and solve the problem with all the combinations of the different discretizations for both methods .the lower and upper bounds of in the central differences method can be taken smaller or bigger .the criteria that we have employed is such that the numerical error varies between and .the same reads for the upper bound of in the chebyshev method .we point that during the implementation of the method , we observed that if was not big enough , the location of the frontiers may oscillate ( due to the gibbs effect or to the fact that the polynomials are not accurate enough ) , complicating the location of and in ( [ frontiers ] ) .the chebyshev spectral method is effective once enough resolution has been reached .this behaviour is typical of high order methods , see .the employment of the adaptive interval and an enough amount of interpolation nodes avoids the oscillations and allows to obtain just one numerical approximation of and in ( [ frontiers ] ) .the oscillations may appear if the following ( empirical ) bounds are violated where and numerical experiments suggest that might be a growing function of .the lowest value of in the chebyshev method was chosen so that no oscillations appear . if a smaller number of interpolation nodes is chosen , the solution oscillates and the error worsens .we plot the value of ( [ ch3errrorcuadmed ] ) versus the computational time employed in computing for each different spatial and temporal meshes in logarithmic scale .performance comparison of the error at . in logarithmic scale , we plot ( left ) , the value of rmse versus the total computational costs of cd ( blue ) and chebyshev ( red ) methods and their respective lower enveloping curves ( right).,width=510,height=226 ] the left - side picture of figure [ ch3v1performanceerror ] represents the cloud of results for the different discretizations of each method . the right - side , which is more visual , represents the lower convex enveloping curve . with the right - side picture, we can obtain an approximate behaviour of the evolution of the error versus the required computational time to reach that precision .we fix the error tolerance that we require for our problem and find which method and spatial and time discretization reaches it first . as we can see , the cd method ( blue in figure [ ch3v1performanceerror ] ) performs better if we do not require a high precision .if a higher precision is required , chebyshev ( red in figure [ ch3v1performanceerror ] ) performs better than cd . a similar behaviour can be observed if we compare the errors of the rest of cases where we have explicit formulas .the homothetic property of the potential utility function has been used to restate the investment problem in polar coordinates .this has allowed us to give an equivalent formulation of the problem in a bounded spatial domain .although some of the numerical difficulties that appear with the parabolic double obstacle problem are avoided , other problems may appear if we employ spectral methods .the gibbs effect , which comes from the fact that the objective function is continuous but not differentiable at maturity , can complicate the location of the frontiers , but this issue can be circumvented by the employment of a time - adapted spatial mesh ( subsection [ ch3maccmam ] ) . further work may include the extension of the model including a consumption term or the design of spectral methods to optimal investment problems with other utility functions , like the exponential utility . 
furthermore, through the indifference pricing technique (see and ), these kinds of models can be applied to option valuation.

chiarella, c., el-hassan, n. and kucera, a., _ evaluation of american option prices in a path integral framework using fourier-hermite series expansion _, journal of economic dynamics and control 23 (1999), 1387-1424.
this paper concerns the numerical solution of the finite-horizon optimal investment problem with transaction costs under potential utility. the problem is initially posed in terms of an evolutive hjb equation with gradient constraints. in , the problem is reformulated as a non-linear parabolic double obstacle problem posed in one spatial variable and defined in an unbounded domain, where several explicit properties and formulas are obtained. restating the problem in polar coordinates allows it to be posed in one spatial variable on a bounded domain, avoiding some of the technical difficulties of the numerical solution of the previous formulation. if high precision is required, the proposed spectral numerical method becomes more efficient than simpler methods such as finite differences. * keywords: * optimal investment, potential utility, transaction costs, spectral method.
this paper introduces quanfruit v1.1 , a java application available for free .( source code included in the distribution . ) recently , farhi - goldstone - gutmann ( fgg ) wrote a paper that proposes a quantum algorithm for evaluating nand formulas .quanfruit outputs a quantum circuit for the ffg algorithm .we say a unitary operator acting on a set of qubits has been compiled if it has been expressed as a seo ( sequence of elementary operations , like cnots and single - qubit operations ) .seo s are often represented as quantum circuits .there exist software ( quantum compilers ) like qubiter for compiling arbitrary unitary operators ( operators that have no a priori known structure ) .quanfruit is a special purpose quantum compiler .it is special purpose in the sense that it can only compile unitary operators that have a very definite , special structure .the quanfruit application is part of a suite of java applications called quansuite .quansuite applications are all based on a common class library called qwalk .each quansuite application compiles a different kind of quantum evolution operator .the applications output a quantum circuit that equals the input evolution operator .we have introduced 6 other quansuite applications in 2 earlier papers . ref. quantree and quanlin .ref. introduced quanfou , quanglue , quanoracle , and quanshi .quanfruit calls methods from these 6 previous applications , so it may be viewed as a composite of them . before reading this paper ,the reader should read refs. and .many explanations in refs. and still apply to this paper .rather than repeating such explanations in this paper , the reader will be frequently referred to refs. and .the goal of all quansuite applications , including quanfruit , is to compile an input evolution operator . can be specified either directly ( e.g. in quanfou , quanshi ) , or by giving a hamiltonian such that ( e.g. in quanglue and quanoracle ) .the standard definition of the evolution operator in quantum mechanics is , where is time and is a hamiltonian . throughout this paper, we will set .if is proportional to a coupling constant , reference to time can be restored easily by replacing the symbol by , and the symbol by .the input evolution operator for quanfruit is , where h_fruit = . where for some positive integer . is proportional to the incidence matrix for a line graph , where the edges of the graph connect states that are consecutive in a gray order .for example , for , the graph of fig.[fig - line - graph ] yields : h_line = g c|c|c|c|c|c|c|c|c| & & & & & & & & + & 0&1 & & & & & & + & 1&0 & & 1 & & & & + & & & 0&1 & & & 1 & + & & 1&1&0 & & & & + & & & & & 0&1 & & + & & & & & 1&0 & & 1 + & & & 1 & & & & 0&1 + & & & & & & 1&1&0 + , [ eq - h - line ] where is a real number that we will call the * coupling constant*. where for some positive integer . is proportional to the incidence matrix for a balanced - binary tree graph .for example , for , the graph of fig.[fig - tree - graph ] yields : h_tree = g c|c|c|c|c|c|c|c|c| & & & & & & & & + & & & & & & & & + & & & 1&1 & & & & + & & 1 & & & 1&1 & & + & & 1 & & & & & 1&1 + & & & 1 & & & & & + & & & 1 & & & & & + & & & & 1 & & & & + & & & & 1 & & & & + , [ eq - h - tr ] where is the same coupling constant as before . .in fact , h_glue = g .here labels the god state of the tree , the one with children but no parents .( labels the dud node ) we will call the * line door*. if , then the tree is connected to a tail of states . 
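a small sketch (python) of the line-graph hamiltonian as i read the description above: computational-basis states that are adjacent in binary-reflected gray order are connected, and the adjacency matrix is scaled by the coupling constant g. for 3 qubits this reproduces the sparsity pattern of the 8 x 8 example; the construction is an illustration of the text, not code taken from quanfruit.

```python
import numpy as np

def gray(k):
    """Binary-reflected Gray code of the integer k."""
    return k ^ (k >> 1)

def h_line(n_qubits, g=1.0):
    """Hamiltonian proportional to the adjacency matrix of the line graph
    whose edges join basis states that are consecutive in Gray order."""
    dim = 2 ** n_qubits
    H = np.zeros((dim, dim))
    for k in range(dim - 1):
        a, b = gray(k), gray(k + 1)
        H[a, b] = H[b, a] = g
    return H

print(h_line(3).astype(int))
```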
for , from fig.[fig - line - graph ] , if , then the tree is connected to the midpoint of the line of states ( runway " ) .the number of leaves in the tree is half the number of nodes in the tree : . also , for some positive integer . . in fact , h_oracle = g , where are the inputs to the nand formula .the dimension of the matrix is not generally a power of two . to represent it as a quantum circuit, we need to extend it to , where = 2^ , = \{n : n_s , line + n_s , tree2^n } .[ eq - nb ] define h_glue = h_glue + h_glue^ , h_ora = h_ora + h_ora^. [ eq - htoh ] ( this last equation is fine as an operator statement , but as a matrix statement , and must be padded " with zeros to make the equation true . by `` padding a matrix with zeros '' , we mean embedding it in a larger matrix , the new entries being zeros . )one can split into two parts , which we call the * bulk hamiltonian * and the * boundary corrections hamiltonian * : h_fruit = h_bulk + h_corr , where h_bulk = h_line + h_tree , h_corr = h_glue + h_ora .( again , this last equation requires zero padding if considered a matrix equation . ) note that =0 ] . for ,if , we say * approximates ( or is an approximant ) of order * for . given an approximant of , and some , one can approximate by .we will refer to this as * trotter s trick * , and to as the * number of trots*. for , quanfruit approximates with a suzuki approximant of order that is derived in ref. .quanfruit also applies the trotter trick with trots to the approximant of . for , quanfruit always approximates with an approximant of order 3 , that is derived in ref. .quanfruit also applies the trotter trick with trots to the approximant of .ref. gives exact ( to numerical precision ) compilations of the glue and oracle parts of .quanfruit uses these compilations , so the order of the suzuki ( or other ) approximant and the number of trots do not arise in quanfruit , for either the glue or the oracle . for , quanfruit also approximates with a suzuki approximant of order . recall that for is the second order suzuki approximant , and higher order ones are defined recursively from this one .thus , all suzuki approximants are specified by giving two functions of , and . to get a meta " suzuki approximant , we set and .quanfruit also applies the trotter trick with trots to the approximant of .fig.[fig - qfruit - main ] shows the * control panel * for quanfruit . this is the main and only window of the application .this window is open if and only if the application is running. the control panel allows you to enter the following inputs : file prefix : : : prefix to the 3 output files that are written when you press the * write files * button .for example , if you insert test in this text field , the following 3 files will be written : + * test_qfru_log.txt * test_qfru_eng.txt * test_qfru_pic.txt line : number of qubits : : : the parameter defined above .tree : number of qubits : : : the parameter defined above .coupling constant : : : the parameter defined above . line door : : : the parameter defined above . bands : : : you must enter here an even number of integers separated by any non - integer , non - white space symbols .say you enter . if for are as defined above , then iff .each set is a band " . if , the band has a single element .quanfruit checks that , , and for all .it also checks that .( if , bands and can be merged . if , bands and overlap . )line : number of trots : : : the parameter defined above . line : order of approximant : : : the parameter defined above . 
tree : number of trots : : : the parameter defined above . tree : order of approximant : : : this parameter is always 3 .meta : number of trots : : : the parameter defined above .meta : order of approximant : : : the parameter defined above . the control panel displays the following outputs : number of qubits : : : the parameter defined by eq.([eq - nb ] ) .number of elementary operations : : : the number of elementary operations in the output quantum circuit .if there are no loops , this is the number of lines in the english file , which equals the number of lines in the picture file .when there are loops , the loop k reps: " and next k " lines are not counted , whereas the lines between loop k reps: " and next k " are counted times .error : : : the distance in the frobenius norm between the input evolution operator and the output quantum circuit ( i.e. , the seo given in the english file ) .for a nice review of matrix norms , see ref. . for any matrix ,its frobenius norm is defined as .another common matrix norm is the 2-norm .the 2-norm of equals the largest singular value of .the frobenius and 2-norm of are related by : .message : : : a message appears in this text field if you press * write files * with a bad input . the message tries to explain the mistake in the input .pressing the * write files * button of the control panel of quanfruit generates 3 files ( log , english , picture ) .these files are analogous to their namesakes for quantree , quanlin and other quansuite applications .ref. explains how to interpret them .the quansuite applications , based on the qwalk class library , exhibit some code innovations that you will find very helpful . hopefully , these innovations will become commonplace in future quantum computer software . ** qwalk class library does most of the work in all quansuite applications : * look in the source folder for any of the quansuite applications .you ll find that it contains only 3 or 4 classes .most of the classes are in the source folder for qwalk . that s because most of the work is done by the qwalk class library , which is independent of the quansuite application .* * reusability of seo writers : * look at the class fruitseo_writer in the source folder for quanfruit .you ll find that fruitseo_writer utilizes the methods glueseo_writer ( ) , oracleseo_writer ( ) , treeseo_writer ( ) , lineseo_writer ( ) , and shiftseo_writer ( ) .thus , fruitseo_writer delegates its seo writing to methods from the quansuite applications : quanglue , quanoracle , quantree , quanlin and quanshi .in fact , quanfruit can be viewed as a composite of these simpler quansuite applications .this reusability of seo writers is made possible by the novel technique described in appendix [ app - pad ] . ** nested loops : * the english and picture files of quansuite applications can have loops within loops .this makes the english and picture files shorter , without loss of information .however , if you want to multiply out all the operations in an english file ( this is what the class seo_reader in qwalk does ) , then having nested loops makes this task more difficult .seo_reader of qwalk is sophisticated enough to understand nested loops . 
** painless object oriented implementation of suzuki approximants and trotter s trick : * higher order suzuki approximants can be implemented painlessly by using the classes : qwalk / src / suzfunctions and qwalk / src / suzwriter .see the class quanlin / src / lineseo_writer for an example of how it s done .essentially , all you have to do is to override the two abstract methods in qwalk / src / suzfunctions .+ trotter s trick can also be easily implemented in a quansuite application , by using loop and next lines in the english file .see the write ( ) method of quanlin / src / lineseo_writer for an example .suppose we know how to compile .is it possible to use this compilation to compile , where and are square matrices of zeros ?the answer is yes , as we show next . __ s_s & = & + & = & ( _ b-1 ) ( _ b-2 ) ( + 1 ) ( ) h _ . as usual , .we will say that has been padded with s to obtain .now let be the unitary operation that shifts state to , with .the application quanshi gives a compilation of .using , one can define a matrix from as follows :
this paper introduces quanfruit v1.1, a java application available for free (source code is included in the distribution). recently, farhi, goldstone and gutmann (fgg) wrote a paper, arxiv:quant-ph/0702144, that proposes a quantum algorithm for evaluating nand formulas. quanfruit outputs a quantum circuit for the fgg algorithm.
even if nonlinear distortions are often present , many systems can be approximated by a linear model . when the nonlinear distortions are too large , a nonlinear model is required .one option is to use a block - oriented model , which consists of interconnections of linear dynamic and nonlinear static systems .one of the simplest block - oriented models is the wiener model ( see fig .[ fig : wiener model ] ) .this is the cascade of a linear dynamic and a nonlinear static system .wiener models have been used before to model e.g. biological systems , a ph process , and a distillation column .some methods have been proposed to identify wiener models , see e.g. for a nonparametric approach where the nonlinearity is assumed to be invertible , for the maximum likelihood estimator ( mle ) , for an approach built on the concept of model complexity control , and for a frequency domain identification method where memory nonlinearities are considered .more complex parallel wiener systems are identified in using a subspace based method , and in using a parametric approach that needs experiments at several input excitation levels .some more wiener identification methods can be found in the book edited by giri and bai .this paper considers a wiener - schetzen model ( a type of parallel wiener model , see fig .[ fig : wiener - schetzen model ] ) , but , without loss of generality , the focus is on the approximation of a single - branch wiener system , to keep the notation simple .the recent book also provides some industrial relevant examples of wiener , hammerstein , and wiener - hammerstein models , that can be handled by the wiener - schetzen model structure due to its parallel nature .in fact , a wiener - schetzen model can describe a large class of nonlinear systems arbitrarily well in mean - square sense ( see for the theory and other practical examples ) . if the system has fading memory , then for bounded slew - limited inputs , a uniform convergence is obtained . in a wiener - schetzen model ,the dynamics are described in terms of orthonormal basis functions ( obfs ) .the nonlinearity is described by a multivariate polynomial .though any complete set of obfs can be chosen , the choice is important for the convergence rate .if the pole locations of the obfs match the poles of the underlying linear dynamic system closely , this linear dynamic system can be described accurately with only a limited number of obfs . in the original ideas of wiener , laguerre obfs were used .laguerre obfs are characterized by a real - valued pole , making them suitable for describing well - damped systems with dominant first - order dynamics . 
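the laguerre basis mentioned above has a simple closed form, generated by a single real pole a with |a| < 1. the sketch below (python) evaluates the first few functions on the unit circle and checks their orthonormality numerically; the normalization is the standard one, which i assume is the convention intended here.

```python
import numpy as np

def laguerre_obf(z, k, a):
    """k-th discrete-time Laguerre basis function with real pole a, |a| < 1."""
    return np.sqrt(1.0 - a ** 2) / (z - a) * ((1.0 - a * z) / (z - a)) ** k

a = 0.7
omega = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
z = np.exp(1j * omega)
F = np.array([laguerre_obf(z, k, a) for k in range(4)])

gram = (F @ F.conj().T).real / len(omega)   # approximates (1/2*pi) * integral over the circle
print(np.round(gram, 3))                    # close to the identity matrix
```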
for moderately damped systems with dominant second - order dynamics ,using kautz obfs is more appropriate .we choose generalized obfs ( gobfs ) , since they can deal with multiple real and complex valued poles .this paper considers the approximation of a wiener system with finite - order iir ( infinite impulse response ) dynamics and a polynomial nonlinearity by a wiener - schetzen model that contains a limited number of gobfs .the system poles are first estimated using the best linear approximation ( bla ) of the system .next , the gobfs are constructed using these pole estimates .the coefficients of the multivariate polynomial are determined with a linear regression .the approach can be applied to parallel wiener systems as well .as the estimation is linear - in - the - parameters , the wiener - schetzen model is well suited to provide an initial guess for nonlinear optimization algorithms , and for modeling time - varying and parameter - varying systems .the analysis in this paper is a starting point to tackle these problems .the contributions of this paper are : * the proposal of an identification method for wiener systems with finite - order iir dynamics ( the initial ideas were presented in ) , * a convergence analysis for the proposed method .the paper is organized as follows .the basic setup is described in section [ sec : setup ] .section [ sec : identification procedure ] presents the identification procedure and the convergence analysis .section [ sec : discussion ] discusses the sensitivity to output noise .the identification procedure is illustrated on two simulation examples in section [ sec : illustration ] .finally , the conclusions are drawn in section [ sec : conclusion ] .this section first introduces some notation , and next defines the considered system class and the class of excitation signals .afterwards , the wiener - schetzen model is discussed in more detail .a brief discussion of the bla concludes this section .this section defines the notations , , and .the sequence , converges to in probability if , for every there exists an such that for every .we write the notation is an indicates that for big enough , , where is a strictly positive real number .the notation is an indicates that the sequence is bounded in probability at the rate .more precisely , , where is a sequence that is bounded in probability . the data - generating system is assumed to be a single input single output ( siso ) discrete - time wiener system ( see fig .[ fig : wiener model ] ) .this is the cascade of a linear time - invariant ( lti ) system and a static nonlinear system . in this paper, is restricted to be a stable , rational transfer function , and to be a polynomial .the class is the set of proper , finite - dimensional , rational transfer functions , that are analytic in and squared integrable on the unit circle .[ assumption : g is a rational tf ] the lti system . [ assumption : known order of g ] the order of is known .the poles of are denoted by ( ) .[ assumption : f non - even ] the function is non - even around the operating point . [assumption : f polynomial of known degree ] the function is a polynomial of known degree : more general functions can be approximated arbitrarily well in mean - square sense by over any finite interval .the input and the output are measured at time instants ( ) . 
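the generalized obfs chosen here are, in the standard takenaka-malmquist form (the explicit expression labelled [ eq : gobfs ] later in this section did not survive extraction, so this convention is an assumption), generated one function per pole, with every earlier pole contributing an all-pass factor. the sketch below (python) builds them numerically for a repeated pole set and checks that a strictly proper test system whose poles coincide with the basis poles is captured entirely by the first repetition.

```python
import numpy as np

def gobf(z, poles):
    """Evaluate Takenaka-Malmquist orthonormal basis functions at points z on the unit circle."""
    out, blaschke = [], np.ones_like(z)
    for xi in poles:
        out.append(np.sqrt(1.0 - abs(xi) ** 2) / (z - xi) * blaschke)
        blaschke = blaschke * (1.0 - np.conj(xi) * z) / (z - xi)
    return np.array(out)

poles = [0.8, 0.5 + 0.3j, 0.5 - 0.3j]                 # assumed pole locations
omega = np.linspace(0.0, 2.0 * np.pi, 2 ** 14, endpoint=False)
z = np.exp(1j * omega)

F = gobf(z, poles * 2)                                # two repetitions of the pole set
G = 1.0 / np.prod([z - xi for xi in poles], axis=0)   # strictly proper test system with the same poles

coeffs = (F.conj() @ G) / len(omega)                  # approximates (1/2*pi) * integral over the circle
print(np.round(np.abs(coeffs), 6))                    # only the first len(poles) entries are non-negligible
```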
in this paper , random - phase multisine excitations are considered .[ def : random - phase multisine ] a signal is a random - phase multisine if with , the maximum frequency of the excitation signal , the number of frequency components , and the phases uniformly distributed in the interval . the amplitudes can be chosen by the user , and are normalized such that has finite power as .[ def : excitation signal class ] the class of excitation signals is the set of random - phase multisines , having normalized amplitudes , where is a uniformly bounded function with a countable number of discontinuities , and if or .[ assumption : excitation signal ] the excitation signal . for simplicity ,the excitations in this paper are restricted to random - phase multisines . however , as shown in , the theory applies for a much wider class of riemann - equivalent signals . in this case , these are the extended gaussian signals , which among others include gaussian noise . the system is modeled with a wiener - schetzen model ( fig . [fig : wiener - schetzen model ] ) , where we choose to be gobfs . in , it is shown how a set of poles gives rise to a set of obfs [ eq : gobfs ] \quad .\ ] ] if the poles result from a periodic repetition of a finite set of poles , the gobfs are obtained , with poles the gobfs form an orthonormal basis for the set of ( strictly proper ) rational transfer functions in .one extra basis function is introduced , namely , to enable the estimation of a feed - through term and as such also to enable the estimation of static systems .this extra basis function is still orthogonal with respect to the other basis functions ( see appendix [ app : orthonormal basis ] ) .the lti system can thus be represented exactly as a series expansion in terms of the basis functions : let be a truncated series expansion , with , and the number of repetitions of the finite set of poles . recall that ( ) are the true poles of , and let then there exists a finite , such that for any , which shows that can be well approximated with a small number of gobfs if the poles are close to the true poles .the pole locations will be estimated by means of the bla of the system .the bla of a system is defined as the linear system that approximates the system s output best in mean - square sense .the bla of the considered wiener system is equal to where the constant depends upon the odd nonlinearities in and the power spectrum of the input signal .a similar result for gaussian noise excitations results from bussgang s theorem . under assumption [ assumption : f non -even ] , is non - zero .this section formulates the identification procedure and provides a convergence analysis in the noise - free case .the influence of output noise is analyzed in section [ sec : discussion ] .the basic idea is that the asymptotic bla ( ) has the same poles as ( see ) .the poles calculated from the estimated bla are thus excellent candidates to be used in constructing the gobfs .since the bla will be estimated from a finite data set ( finite ) , the poles calculated from the estimated bla will differ from the true poles .extensions of the basis functions ( ) will be used to compensate for these errors ( see ) .the identification procedure can be summarized as : 1 .estimate the bla and calculate its poles .2 . use these pole estimates to construct the gobfs .3 . 
estimate the multivariate polynomial coefficients .these steps are now formalized and the asymptotic behavior ( ) of the estimator is analyzed .first , the situation without disturbing noise is considered .the influence of disturbing noise is discussed in section [ sec : discussion ] .first , a nonparametric estimate of the bla is calculated .since the input is periodic , the bla is estimated as , in which and are the discrete fourier transforms ( dfts ) of the output and the input . for random excitations , the classical frequency response estimates ( division of cross - power and auto - power spectra ) can be used , or more advanced frf measurement techniques can be used , like the local polynomial method .next , a parametric model is identified using a weighted least - squares estimator [ eq : theta_hat ] where the cost function is equal to here , is a deterministic , -independent weighting sequence , and is a parametric transfer function model with the constraint to obtain a unique parameterization . under assumption [ assumption : known order of g ] , we put .eventually , the poles ( ) of the parametric model are calculated .before we derive a bound on in lemma [ lemma : bound on delta_p ] , a regularity condition on the parameter set is needed .[ assumption : regularity condition theta ] the parameter set is identifiable if the system is excited by . the existence of a uniformly bounded convergent volterra series is needed as well ( see appendix [ app : volterra series ] for more details ) .[ assumption : convergent volterra series ] there exists a uniformly bounded volterra series whose output converges in least - squares sense to the true system output for .[ lemma : bound on delta_p ] consider a discrete - time wiener system , with an lti system and a static nonlinear system .let ( ) be the poles of and be the pole estimates , obtained using the weighted least - squares estimator . then under assumptions , and , is an .let be the `` true '' model parameters , such that with and polynomials of degree .the roots of are equal to the true poles ( ) .if these poles are all distinct , the first - order taylor expansion of results in [ eq : sensitivity poles ] where , and where follows from under assumptions , it is shown in that . for the considered output disturbances ( no noise in this section , filtered white noise in section [ sec : discussion ] ) , the least - squares estimator is a mle . from the properties of the mle , it follows that then from and , it follows that this concludes the proof of lemma [ lemma : bound on delta_p ] .lemma [ lemma : bound on delta_p ] shows that good pole estimates are obtained .though no external noise is considered in this section , the probability limits in this paper are w.r.t .the random phase realizations of the excitation signal .next , the gobfs are constructed with these pole estimates ( see , with ) , and the intermediate signals ( ) ( see fig . [fig : wiener - schetzen model ] ) are calculated .the following lemma shows that the true intermediate signal can be approximated arbitrarily well by a linear combination of the calculated signals .[ lemma : bound on delta_x ] consider the situation of lemma [ lemma : bound on delta_p ] .let , and let ( ) be gobfs , constructed from the finite set of poles .let , and denote .then under assumptions , and , is an . 
from and, it follows that is an .it then follows from that , and thus is an .finally , the coefficients of the multivariate polynomial are estimated .let [ eq : y_beta ] be the output of , where the coefficients of the polynomial are chosen to be they are estimated using linear least - squares regression : [ eq : beta_hat ] where the regression matrix is equal to we will now show that the estimated output converges in probability to as . [theorem : convergence y_hat ] consider the situation of lemma [ lemma : bound on delta_x ] .let and , where the coefficients are obtained from the least - squares regression . then under assumptions , is an .the exact output + \delta y\\ & = \begin{aligned}[t ] & \tilde{\beta}_{dc}\\ & + \sum_{p = 1}^{q } \left ( \sum_{i_1 = 0}^{n } \sum_{i_2 = i_1}^{n } \cdots \sum_{i_p = i_{p - 1}}^{n } \tilde{\beta}_{i_1 , \ldots , i_p } x_{i_1 } \cdots x_{i_p } \right)\\ & + \delta y \end{aligned}\\ & = y_{\tilde{\beta } } + \delta y \end{aligned}\ ] ] where is the output of a multivariate polynomial , in which the coefficients follow from the true coefficients of and the true coefficients of the series expansion of .the truncation of this series expansion is taken into account by the term , which just as is an .note that the coefficients can be obtained as the minimizers of the artificial least - squares problem the estimated coefficients are equal to we now show that is an .each element in is an , due to the normalization of the excitation signal ( see assumption [ assumption : excitation signal ] ) .consequently , each element in the matrix is the sum of terms that are an , so is an .the elements in the matrix are thus an .each element in the vector is an .consequently , each element in the vector is the sum of terms that are the product of an and an .the elements in the vector are thus an . as a consequence , . andthus this concludes the proof of theorem [ theorem : convergence y_hat ] .theorem [ theorem : convergence y_hat ] shows that the estimated output converges in probability to the exact output with only a finite number of basis functions .the convergence rate increases if is increased .the multivariate polynomial is implemented in terms of hermite polynomials to improve the numerical conditioning of the least - squares estimation in . as this has no consequences for the result in theorem [ theorem : convergence y_hat ] ,ordinary polynomials are used throughout the paper to keep the notation simple .in this section , the sensitivity of the identification procedure to output noise is discussed .noise on the intermediate signal is not considered here . in general, this would result in biased estimates of the nonlinearity .more involved estimators , e.g. the mle , are needed to obtain an unbiased estimate . in the case of filtered white output noise , with a sequence of independent random variables , independent of , withzero mean and variance , and with a stable monic filter , the exact output .the estimated coefficients are then equal to ( cfr . ) the columns of are filtered versions of the known input signal , which was assumed independent of .it is thus clear that the noise is uncorrelated with the columns of .consequently , each element in the vector is the sum of uncorrelated terms that are the product of an and an .the elements in the vector are thus an , as a consequence , . 
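the regression step analyzed above is linear in the parameters and can be sketched as follows: build all monomials of the intermediate signals up to the chosen degree and solve an ordinary least-squares problem. the helper is generic; the synthetic signals used to make the sketch runnable are placeholders, and no hermite reparameterization (mentioned above for numerical conditioning) is applied.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_regressor(X, degree):
    """
    build the regression matrix whose columns are 1 and all monomials
    x_{i1} * ... * x_{ip}, p = 1..degree, i1 <= ... <= ip,
    where the rows of X are the intermediate signals x_0, ..., x_n.
    """
    n_sig, n_samp = X.shape
    cols = [np.ones(n_samp)]                      # constant term (the dc coefficient)
    for p in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n_sig), p):
            cols.append(np.prod(X[list(idx), :], axis=0))
    return np.column_stack(cols)

def fit_polynomial(X, y, degree):
    K = monomial_regressor(X, degree)
    beta, *_ = np.linalg.lstsq(K, y, rcond=None)  # linear-in-the-parameters estimate
    return beta, K

# placeholder intermediate signals and output, only to make the sketch runnable
rng = np.random.default_rng(2)
X = rng.standard_normal((3, 500))                 # stand-ins for x_0, x_1, x_2 from the gobfs
y = 1.0 + X[0] + 0.5 * X[0] * X[1] - 0.2 * X[2]**3 + 0.01 * rng.standard_normal(500)

beta, K = fit_polynomial(X, y, degree=3)
y_hat = K @ beta
```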
the error on the estimated output due to the noise is thus independent of the number of repetitions .increasing allows to tune the model error such that it disappears in the noise floor .in this section , the approach is illustrated on two simulation examples .the first one considers the noise - free case , and illustrates the convergence rate predicted by theorem [ theorem : convergence y_hat ] .the second one compares the proposed method to the so - called approximative prediction error method ( pem ) , as implemented in the matlab system identification toolbox .consider a siso discrete - time wiener system with and the system is excited with a random - phase multisine ( see ) with and the sampling frequency ; ; and the amplitudes chosen equal to each other and such that the rms value of is equal to .the system is identified using the identification procedure described in section [ sec : identification procedure ] .no weighting is used to obtain a parametric estimate of the bla , i.e. . a random - phase multisine with is used for the validation .fifty monte carlo simulations are performed , with each time a different realization of the random phases of the excitation signals . the results in fig .[ fig : monte carlo ] show that the convergence rate of agrees with what is predicted by theorem [ theorem : convergence y_hat ] . the convergence rate increases with an increasing number of repetitions of the basis functions .these results generalize to parallel wiener systems as well . along the monte carlo simulations ( full line ) and its one standard deviation confidence interval ( filled zone ) .the predicted convergence rate is indicated by the dashed lines , which are an .[ fig : monte carlo],scaledwidth=40.0% ] the second example is inspired by the second example in .it is a discrete - time siso wiener system with a saturation nonlinearity .the system is given by where the input and the output noise are gaussian , with zero mean , and with variances and respectively .the coefficients and are equal to and , respectively . compared to the example in , no process noise was added to since the gobf approach can not deal with process noise .moreover , the output noise variance was lowered from to as the large output noise would otherwise dominate so much that no sensible conclusions could be made .one thousand monte carlo simulations are performed , with each time an estimation and a validation data set of data points each . in case of the approximative pem method , the true model structure is assumed to be known , and the true system belongs to the considered model set . in case of the proposed gobf approach , the order of the linear dynamics is assumed to be known . a model with and one withis estimated .the local polynomial method is used to estimate the bla .the nonlinearity is approximated via a multivariate polynomial of degree , in order to capture both even and odd nonlinearities .note that in this case , the true system is not in the model set . the results in fig .[ fig : histogram nrmse output noise ] show that the approximative pem method performs significantly better than the gobf approach .note that the approximative pem method used full prior knowledge of the model structure , while no prior knowledge on the nonlinearity was used in the gobf approach .still , it is able to find a decent approximation . 
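for reference, the bla estimation step used in both examples can be sketched as follows: excite a toy wiener system with random-phase multisines, average the dft ratio over phase realizations, and extract pole estimates from a simple parametric fit. the levy-type linear fit below is only a stand-in for the (weighted) least-squares estimator of the paper, and the toy system and settings are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def random_phase_multisine(n, exc_bins, rng, rms=1.0):
    """random-phase multisine: equal amplitudes on the excited bins,
    phases independent and uniform in [0, 2*pi)."""
    U = np.zeros(n, dtype=complex)
    U[exc_bins] = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=len(exc_bins)))
    u = 2.0 * np.real(np.fft.ifft(U))
    return u * rms / np.sqrt(np.mean(u**2))

def wiener(u):
    """toy wiener system used only to make the sketch runnable."""
    x = lfilter([0.1, 0.05], [1.0, -1.2, 0.52], u)   # illustrative second-order dynamics
    return x + 0.4 * x**3                            # illustrative odd nonlinearity

rng = np.random.default_rng(3)
n, exc = 4096, np.arange(1, 200)
frfs = []
for _ in range(20):                                  # average over phase realizations
    u = random_phase_multisine(n, exc, rng)
    y = wiener(u)
    frfs.append(np.fft.fft(y)[exc] / np.fft.fft(u)[exc])
g_bla = np.mean(frfs, axis=0)                        # nonparametric bla estimate

# crude parametric fit (levy-type linear least squares) to extract pole estimates;
# the paper uses a (weighted) least-squares / maximum likelihood estimator instead.
z = np.exp(2j * np.pi * exc / n)
na, nb = 2, 1
cols = [-g_bla * z**(-k) for k in range(1, na + 1)] + [z**(-k) for k in range(nb + 1)]
A = np.column_stack(cols)
theta, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                            np.concatenate([g_bla.real, g_bla.imag]), rcond=None)
poles = np.roots(np.r_[1.0, theta[:na]])             # should lie close to 0.6 +/- 0.4j
```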
a better approximation can be obtained by representing the nonlinearity with another basis function expansion that is more appropriate to the nonlinearity at hand .this , however , is out of the scope of this paper .finally , for a single - branch wiener system , the shape of the output nonlinearity can be determined as follows .motivated by lemma [ lemma : bound on delta_x ] and , an estimate of the coefficients can be obtained as the signal is then , up to an unknown scale factor , approximately equal to .the shape of the nonlinear function can then be determined from a scatter plot of and ( see fig . [fig : scatter plot ] ) . on the validation data sets for the monte carlo simulations .pem indicates the results for the approximative prediction error method , while gobf ( 0 ) and gobf ( 1 ) indicate the results for the proposed gobf approach with and .[ fig : histogram nrmse output noise],scaledwidth=40.0% ] and reveals the shape of the saturation nonlinearity ( gobf approach with , last monte carlo simulation ) .[ fig : scatter plot],scaledwidth=40.0% ] to make a fair comparison , the example is modified such that the system is in the model class for both of the considered approaches .the nonlinearity in is changed to a third - degree polynomial that best approximates the saturation nonlinearity on all the estimation data sets .the approximative pem method now estimates a third - degree polynomial nonlinearity . in this case , the gobf approaches have a similar performance as the approximative pem approach ( see fig .[ fig : histogram nrmse polynomial ] ) .finally , in order to determine the number of repetitions , one can easily estimate several models for an increasing , and compare the simulation errors on a validation data set .once the simulation error increases , the variance error outweighs the model error , and one should select less repetitions . here , the normalized rms error for is lower than the normalized rms error for in out of the cases . in the remaining cases , one would select a model with . on the validation data sets for the monte carlo simulations ( polynomial nonlinearity ) .[ fig : histogram nrmse polynomial],scaledwidth=40.0% ]an identification procedure for siso wiener systems with finite - order iir dynamics and a polynomial nonlinearity was formulated and its asymptotic behavior was analyzed in an output - error framework .it is shown that the estimated output converges in probability to the true system output .fast convergence rates can be obtained .the identification procedure is mainly linear in the parameters .the proposed identification procedure is thus well suited to provide an initial guess for nonlinear optimization algorithms .the approach can be applied to parallel wiener systems as well .this work was supported by the erc advanced grant snlsid , under contract 320378 .the orthogonality of with respect to the set ( ) can be shown either by working out the inner products ( see below ) or by choosing a pole structure in the shifted basis functions .consider the basis functions , given by for and .we now prove that they are orthogonal to , by showing that the inner product is equal to zero . denotes the unit circle .1 . + 2 . + the obfs are linear combinations of the basis functions and are thus orthogonal to . 
since the norm of is equal to one , the set ( ) is a set of obfs .a volterra series generalizes the impulse response of an lti system to a nonlinear time - invariant system via multidimensional impulse responses .the input - output relation of a volterra series is split in different contributions of increasing degree of nonlinearity for periodic excitations , the output fourier coefficient at frequency is with . here , is the symmetrized frequency domain representation of the volterra kernel of degree .the volterra series is uniformly bounded if with , and as in definition [ def : excitation signal class ] .
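the orthonormality argument above can be checked numerically by evaluating the frequency-domain inner products of the basis functions on a dense grid of the unit circle. the sketch assumes the same cascade construction with real poles used earlier; the grid size and pole values are arbitrary.

```python
import numpy as np

def gobf_freq_responses(poles, n_grid=4096):
    """frequency responses of the basis functions built from real poles,
    plus the constant function g_0 = 1 (feed-through term)."""
    w = 2.0 * np.pi * np.arange(n_grid) / n_grid
    zinv = np.exp(-1j * w)
    responses = [np.ones_like(zinv)]          # g_0 = 1
    allpass = np.ones_like(zinv)
    for xi in poles:
        g = np.sqrt(1.0 - xi**2) * zinv / (1.0 - xi * zinv) * allpass
        responses.append(g)
        allpass = allpass * (zinv - xi) / (1.0 - xi * zinv)
    return np.vstack(responses)

G = gobf_freq_responses([0.6, 0.3, 0.6, 0.3])              # finite pole set repeated twice
gram = (G @ G.conj().T) / G.shape[1]                       # approximates the inner products
print(np.allclose(gram, np.eye(G.shape[0]), atol=1e-10))   # close to the identity matrix
```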
many nonlinear systems can be described by a wiener-schetzen model. in this model, the linear dynamics are formulated in terms of orthonormal basis functions (obfs). the nonlinearity is modeled by a multivariate polynomial. in general, an infinite number of obfs is needed for an exact representation of the system. this paper considers the approximation of a wiener system with finite-order infinite impulse response dynamics and a polynomial nonlinearity. we propose to use a limited number of generalized obfs (gobfs). the pole locations, needed to construct the gobfs, are estimated via the best linear approximation of the system. the coefficients of the multivariate polynomial are determined with a linear regression. this paper provides a convergence analysis for the proposed identification scheme. it is shown that the estimated output converges in probability to the exact output. fast convergence rates, in the order, can be achieved, with the number of excited frequencies and the number of repetitions of the gobfs. keywords: dynamic systems; nonlinear systems; orthonormal basis functions; system identification; wiener systems.
word sense disambiguation is the process of identifying the correct sense of words in particular contexts .the solving of wsd seems to be ai complete ( that means its solution requires a solution to all the general ai problems of representing and reasoning about arbitrary ) and it is one of the most important open problems in nlp ,, , ,, . in the electronical on - line dictionary wordnet ,the most well - developed and widely used lexical database for english , the polysemy of different category of words is presented in order as : the highest for verbs , then for nouns , and the lowest for adjectives and adverbs .usually , the process of disambiguation is realized for a single , target word .one would expect the words closest to the target word to be of greater semantical importance for it than the other words in the text .the context is hence a source of information to identify the meaning of the polysemous words .the contexts may be used in two ways : a ) as _ bag of words _ , without consideration of relationships with the target word in terms of distance , grammatical relations , etc . ;b ) with relational information .the _ bag of words _approach works better for nouns than verbs but is less effective than methods that take other relations in consideration .studies about syntactic relations determined some interesting conclusions : verbs derive more disambiguation information from their objects than from their subjects , adjectives derive almost all disambiguation information from the nouns they modify , and nouns are best disambiguated by directly adjacent adjectives or nouns .all these advocate that a global approach ( disambiguation of all words ) helps to disambiguate each pos .in this paper we propose a global disambiguation algorithm called * chain algorithm * for disambiguation , chad , which presents elements of both points of view about a context : because this algorithm is it belongs to the class of algorithms which depend of relational information ; in the same time it does nt require syntactic analysis and syntactic parsing .in section 2 of this paper we review lesk s algorithm for wsd . in section 3we present `` triplet '' algorithm for three words and chad algorithm . in section 4we describe some experiments and evaluations with chad .section 5 introduces some conclusions of using the chad for translation ( here from romanian language to english ) and for text entailment verification .section 6 draws some conclusions and further work .work in wsd reached a turning point in the 1980s when large - scale lexical resources , such as machine readable dictionaries , became widely available .one of the best known dictionary - based method is that of lesk ( 1986 ) .it starts from the idea that a word s dictionary definition is a good indicator for the senses of this word and uses the definition in the dictionary directly . 
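the idea can be sketched in a few lines: score each candidate sense by the number of words its gloss shares with the context and keep the best-scoring sense. the glosses and the stopword list below are toy placeholders, not wordnet data.

```python
from collections import Counter

STOPWORDS = {"a", "an", "the", "of", "in", "on", "to", "and", "or", "is", "are"}

def tokens(text):
    return [w for w in text.lower().split() if w.isalpha() and w not in STOPWORDS]

def simplified_lesk(context_words, sense_glosses):
    """
    score each sense by the number of words its gloss shares with the context
    and return the index of the best-scoring sense (ties: first sense).
    `sense_glosses` is a list of gloss strings, one per sense.
    """
    context = Counter(tokens(" ".join(context_words)))
    best, best_score = 0, -1
    for i, gloss in enumerate(sense_glosses):
        score = sum(min(context[w], c) for w, c in Counter(tokens(gloss)).items())
        if score > best_score:
            best, best_score = i, score
    return best

# toy example with hand-written glosses (not taken from wordnet)
glosses = ["financial institution that accepts deposits and lends money",
           "sloping land beside a body of water such as a river"]
print(simplified_lesk("he sat on the bank of the river and fished".split(), glosses))  # -> 1
```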
let us remember basic algorithm of lesk : suppose that for a polysemic target word there are in a dictionary senses given in an equal number of definitions .here we mean by the set of words contained in the -th definition .consider that the new context to be disambiguated is .the * reduced form * of lesk s algorithm is : the score of a sense is the number of words that are shared by the different sense definitions ( glosses ) and the context .a target word is assigned that sense whose gloss shares the largest number of words .the algorithm of lesk was successfully developed in by using wordnet dictionary for english .it was created by hand in 1990s and includes definitions ( glosses ) for individual senses of words , as in a dictionary .additionally it defines groups of synonymous words representing the same lexical concept ( synset ) and organizes them into a conceptual hierarchy .the paper uses this conceptual hierarchy for improving the original lesk s method by augmenting the definitions with non - gloss information : synonyms , examples and glosses of related words ( hypernyms , hyponyms ) . also , the authors introduced a novel overlap measure between glosses which favorites multi - word matching .first of all we present an algorithm for disambiguation of a triplet . in a sense , our triplet algorithm is similar with global disambiguation algorithm for a window of two words around a target word given .instead , our chad realizes disambiguation of all - words in a text with any length , ignoring the notion of `` window '' and `` target word '' and target word in similar studies , all that without increasing the computational complexity .the algorithm for disambiguation of a triplet of words for dice measure is the following : begin for each sense do for each sense do for each sense do endfor endfor endfor / * sense of is , sense of is , sense of is * / end for the overlap measure the score is calculated as : for the jaccard measure the score is calculates as : shortly , chad begins with the disambiguation of a triplet and then adds to the right the following word to be disambiguated .hence it disambiguates at a time a new triplet , where first two words are already associated with the best senses and the disambiguation of the third word depends on these first two words .chad algorithm for disambiguation of the sentence is : begin disambiguate triplet while do calculate calculate endwhile end due to the brevity of definitions in wn many values of are 0 .we attributed the first sense in wn for in this cases .in this section we shortly describe some experiments that we have made in order to validate the proposed chain algorithm * chad*. we have developed an application that implements * chad * and can be used to : * disambiguate words ( [ res ] ) ; * translate words into romanian language ( [ app2 ] ) ; * text entailment verification ( 5.2 ) .the application is written in jdk 1.5.0 . and uses _ httpunit _ 1.6.2 api . written in java, httpunit is a free software that emulates the relevant portions of browser behavior , including form submission , javascript , basic http authentication , cookies and automatic page redirection , and allows java test code to examine returned pages either as text , an xml dom , or containers of forms , tables , and links .we have used _ httpunit _ in order to search wordnet through the dictionary from .more specifically , the following java classes from are used : * _ webconversation_. 
it represents the context for a series of http requests .this class manages cookies used to maintain session context , computes relative urls , and generally emulates the browser behavior needed to build an automated test of a web site . * _webresponse_. this class represents a response to a web request from a web server . * _webform_. this class represents a form in an html page . using this classwe can examine the parameters defined for the form , the structure of the form ( as a dom ) , and the text of the form .we have used _ webform _ class in order to simulate the submission of the form with corresponding parameters .we tested our chad on 10 files of brown corpus , which are pos tagged .recall that wn stores only stems of words .so , we first preprocessed the glosses and the input files , replacing inflected words with their stems .the reason for choosing brown corpus was the possibility offered by semcor corpus ( the best known publicly available corpus hand tagged with wn senses ) to evaluate the results .the correct disambiguated words means the disambiguated words as in semcor .we ran separately chad for : 1 . nouns , 2 .verbs , and 3 .nouns , verbs , adjectives and adverbs . in the case of chadaddressed to nouns , the output is the sequence of nouns tagged with senses .the tag means that for noun the wn sense was found .analogously for the case of disambiguation on verbs and of all pos .the results are presented in tables 1 and 2 .as our chad algorithm is dependent on the length of glosses , and as nouns have the longest glosses , the highest precision is obtained for nouns . in figure 3 , the precision progress can be traced . by dropping and rising, the precision finally stabilizes to value 0.767 ( for the file br - a01 ) .the most interesting part of this graph is that he shows how this chain algorithm works and how the correct or incorrect disambiguation of first two words from the first triplet influences the disambiguation of the next words .it is known that , at senseval 2 contest , only 2 out of the 7 teams ( with the unsupervised methods ) achieved higher precision than the wordnet sense baseline .we compared in figures 1 , 2 and 3 the precision of chad for 10 files in brown corpus , for dice , overlap and jaccard measures with wordnet sense . 
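going back to the triplet and chain procedure of section 3, a minimal sketch is given below. gloss lookup, stemming and the wordnet access layer are replaced by plain word sets, and the scoring function is a simple pairwise-overlap stand-in for the dice, jaccard and overlap measures; the fallback to the first sense when all scores are zero follows the text.

```python
from itertools import product

def pairwise_overlap(s1, s2, s3):
    """stand-in for the dice / jaccard / overlap measures of section 3:
    here simply the summed sizes of the pairwise gloss intersections."""
    return len(s1 & s2) + len(s1 & s3) + len(s2 & s3)

def disambiguate_triplet(glosses1, glosses2, glosses3, score=pairwise_overlap):
    """try every combination of senses of the three words and keep the best one;
    if every score is zero, fall back to the first sense of each word."""
    best, best_score = (0, 0, 0), 0
    for (i, g1), (j, g2), (k, g3) in product(enumerate(glosses1),
                                             enumerate(glosses2),
                                             enumerate(glosses3)):
        s = score(g1, g2, g3)
        if s > best_score:
            best, best_score = (i, j, k), s
    return best

def chad(gloss_lists, score=pairwise_overlap):
    """chain algorithm: disambiguate the first triplet, then slide one word at a
    time, keeping the senses already fixed for the first two words of each triplet."""
    senses = list(disambiguate_triplet(*gloss_lists[:3], score=score))
    for t in range(3, len(gloss_lists)):
        g1 = [gloss_lists[t - 2][senses[t - 2]]]     # already-fixed senses
        g2 = [gloss_lists[t - 1][senses[t - 1]]]
        _, _, k = disambiguate_triplet(g1, g2, gloss_lists[t], score=score)
        senses.append(k)
    return senses

# toy input: for each word, a list of gloss word-sets (one set per sense)
words = [
    [{"financial", "institution", "money"}, {"river", "slope", "land"}],
    [{"flow", "water", "river"}, {"series", "events"}],
    [{"fish", "water", "catch"}, {"search", "information"}],
    [{"boat", "water", "travel"}, {"dish", "gravy"}],
]
print(chad(words))
```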
comparing the precision obtained with the overlap measure and the precision given by the wordnet sense baseline for 10 files of the brown corpus (br-a01, br-a02, br-a11, br-a12, br-a13, br-a14, br-a15, br-b13, br-b20 and br-c01), we obtained the following results:

* for nouns, the minimum difference was 0.0077, the maximum difference was 0.0706, and the average difference was 0.0338;
* overall, for 4 files the difference was greater than or equal to 0.04, and for 6 files it was lower;
* for all parts of speech, the minimum difference was 0.0313, the maximum difference was 0.0681, and the average difference was 0.0491;
* overall, for 7 files the difference was greater than or equal to 0.04, and for 3 files it was lower;
* for verbs, the minimum difference was 0.0078, the maximum difference was 0.0591, and the average difference was 0.0340;
* overall, for 4 files the difference was greater than or equal to 0.04, and for 6 files it was lower.

let us remark that in our chad the standard notion of a best window-size parameter does not apply: simply, a window is the variable space between the previous and the following word with respect to the current word.

table 1: per-file precision of chad (dice, jaccard and overlap measures) and of the wordnet sense baseline (wn1).

file   | words | dice  | jaccard | overlap | wn1
br-a01 |  486  | 0.758 | 0.758   | 0.767   | 0.800
br-a02 |  479  | 0.735 | 0.731   | 0.758   | 0.808
br-a14 |  401  | 0.736 | 0.736   | 0.754   | 0.769
br-a11 |  413  | 0.724 | 0.726   | 0.746   | 0.773
br-b20 |  394  | 0.740 | 0.740   | 0.743   | 0.751
br-a13 |  399  | 0.734 | 0.734   | 0.739   | 0.746
br-b13 |  467  | 0.708 | 0.708   | 0.717   | 0.732
br-a12 |  433  | 0.696 | 0.696   | 0.710   | 0.781
br-a15 |  354  | 0.677 | 0.674   | 0.682   | 0.725
br-c01 |  434  | 0.653 | 0.653   | 0.661   | 0.728

table 2: per-file precision of chad and of the wordnet sense baseline (wn1) for the second set of runs.

file   | words | dice  | jaccard | overlap | wn1
br-a14 |  931  | 0.699 | 0.701   | 0.711   | 0.742
br-a02 |  959  | 0.637 | 0.685   | 0.697   | 0.753
br-b20 |  930  | 0.672 | 0.674   | 0.693   | 0.731
br-a15 | 1071  | 0.653 | 0.651   | 0.684   | 0.732
br-a13 |  924  | 0.667 | 0.673   | 0.682   | 0.735
br-a01 | 1033  | 0.650 | 0.648   | 0.674   | 0.714
br-b13 |  947  | 0.649 | 0.650   | 0.674   | 0.722
br-a12 | 1163  | 0.626 | 0.622   | 0.649   | 0.717
br-a11 | 1043  | 0.634 | 0.639   | 0.648   | 0.708
br-c01 | 1100  | 0.625 | 0.627   | 0.638   | 0.688

wsd is only an intermediate task in nlp. in machine translation, wsd is required for lexical choice for words that have different translations for different senses and that are potentially ambiguous within a given document. however, most machine translation models do not use explicit wsd (in introduction).
the algorithm implemented by usconsists in the translation word by word of a romanian text ( using dictionary at http://lit.csci.unt.edu/ rada / downloads / ronlp / r.e .tralexand ) , then the application of chain algorithm to the english text .as the translation of a romanian word in english is multiple , the disambiguation of a triplet is modified as following .let be the word with translations , the word with translations and the word with translations .each triplet is disambiguated with the triplet disambiguation algorithm and then the triplet with the maxim score is selected : begin for do for do for do disambiguate triplet in calculate endfor endfor endfor calculate optimal translation of triplet is end let us remark that , for example , is a synset which corresponds to the best translation for produced by chad algorithm .however , since in romanian are used many words linked by different spelling signs , these composed words are not found in the romanian - english dictionary .accordingly , not each romanian word produces an english correspondent as output of the above algorithm .however , many translations are still correct .for example , the translation of expression _ vreme trece _ ( in the poem `` glossa '' of our national poet mihai eminescu ) , is _ word : ( rom)vreme ( eng)age , word : ( rom)trece ( eng) _ . as another example from the same poem , where the synset of a word occurs ( as an output of our application ) , _ ine toate minte _ ,is translated in _ word : ( rom ) tine ( eng ) : \{keep , maintain } , word : ( rom ) toate ( eng ) : \{wholly , entirely , completely , totally , all , altogether , whole } , word : ( rom ) minte ( eng ) : \{judgment , judgement , assessment}_. the recognition of text entailment is one of the most complex task in natural language understanding .thus , a very important problem in some computational linguistic applications ( as question answering , summarization , segmentation of discourse , and others ) is to establish if a text _ follows _ from another text .for example , a qa system has to identify texts that entail the expected answer .similarly , in ir the concept denoted by a query expression should be entailed from relevant retrieved documents . 
in summarization, a redundant sentence should be entailed from other sentences in the summary .the application of wsd to text entailment verification is treated by authors in the paper `` text entailment verification with text similarity '' in this volume .in this paper we presented a new algorithm of word sense disambiguation .the algorithm is parametrized for : 1 .all words ( that means nouns , verbs , adjectives , adverbs ) ; 2 .all nouns ; 3 .all verbs .some experiments with this algorithm for ten files of brown corpus are presented in section 4.2 .the stemming was realized using the list from http://snowball.tartarus.org/algorithms/porter/diffs.txt .the precision is calculated relative to the corresponding annotated files in semcor corpus .some details of implementation are given in 4.1 .we showed in section 5 how the disambiguation of a text helps in automated translation of a text from a language into another language : each word in the first text is translated into the most appropriated word in the second text .this appropriateness is considered from two points of view : 1 .the point of view of possible translation and 2 .the point of view of the real sense ( disambiguated sense ) of the second text .some experiments with romanian - english translations and text entailment verification are given ( section 5 ) .another problem which we intend to address in the further work is that of optimization of a query in information retrieval . finding whether a particular sense is connected with an instance of a word is likely the ir task of finding whether a document is relevant to a query .it is established that a good wsd program can improve performance of retrieval .as ir is used by millions of users , an average of some percentages of improvement could be seen as very significant .
a large class of unsupervised algorithms for word sense disambiguation (wsd) is that of dictionary-based methods. various algorithms are rooted in lesk's algorithm, which exploits the sense definitions in the dictionary directly. our approach uses the lexical database wordnet for a new algorithm that originates in lesk's, namely the _chain algorithm for disambiguation_ of all words (chad). we show how translation from one language into another, as well as text entailment verification, can be accomplished with this disambiguation.
the investigation of structure and dynamics of networks has been a powerful strategy to analyze interacting many - body problems present in many different areas : biological , ecological , economical and social systems , to name some of them .the map of these systems into graphs is a fruitful old idea , and the knowledge of the interconnection between its vertices is a necessary condition that allows us to examine a myriad of pratical problems .nowadays , there are several research interests involving complex networks .we can say , for instance , that there is an effort to obtain a better understanding of networks from some of its internal structures like the formation of communities , or a more complex interconnection of graphs like the multilayer networks .the complexity of the internal structure reflects on the entropy of the network , which shows the possibility of classifying several internal structures . at the same time, we still have progress on important questions that use complex networks as a framework to define other problems on it ; for instance , we can cite the active area of epidemiological models , or statistical models on complex networks to analyze critical phenomena . at this point, it is worth mentioning that despite the progress in several directions , it is natural that analytical results are less frequent than numerical ones , which is understandable due to the technical complexities presented by many relevant questions .furthermore , many existing analytical results come from stationary regime . in this scenario ,we propose a scheme that estimates the time - dependent degree distribution . in order to illustrate our idea, we revisited the watts - strogatz model .although not being a complex networkin the sense that it does not display a heterogeneous degree distribution , it has small - world property and has high clustering , two properties shared with many real networks .the model was originally defined as an intermediate configuration between a regular lattice and a graph where all their nodes are randomly linked , and we will present a slightly modified version from the original one in order to capture its dynamical evolution analytically . the layout of this work is as follows . in purpose of illustrating the main idea of the work , we start with a dynamical version of the erds - rnyi model in section 2 and we introduce the main model , the time - dependent watts - strogatz graph , in section 3 . then , we present the main idea that allow one to achieve an analytical form for the dynamic degree distribution in section 4 . some final comments are presented in section 5 .the initial condition of the model consists of vertices and no edges at time . 
at each time step ,two vertices are randomly chosen and linked ; this includes the possibility of having a loop ( an edge that connects a vertex to itself ) .it is clear that each end of an edge links to a vertex with probability .therefore , defining as the probability that a vertex has degree at time , one can represent the dynamics as with as the initial condition , where is the kronecker symbol ( when , and otherwise ) .furthermore , is the time - independent conditional probability of changing the degree of a vertex from to ; in the present case , by introducing the time - dependent degree distribution , the time evolution equation ( [ er_me ] ) can be written as if now one introduces the z - transform it is straightforward that where the initial condition is and from ( [ l_er ] ) , one sees that the right - hand side of ( [ phiphi0 ] ) is a polynomial in , and the time - dependent degree distribution is the coefficient of the term of order ( which we will refer as -term ) in the right hand side of ( [ phiphi0 ] ) .hence , since , one can show by a direct calculation that this result will be revisited in section [ rw ] , where we will treat the problem of finding the time - dependent degree distribution as a random walk in degree space . when , which is the time equivalent to the number of possible distinct edges , one recovers the poisson distribution from the usual erds - rnyi model for , the exact form of the time - dependent degree distribution can be used to investigate the shannon entropy , \ , .\label{shannon}\end{aligned}\ ] ] the profile of the entropy can be investigated numerically and is presented in figure 1 .it starts from a low value and achieves the maximum for , which is when the original ( static ) erds - rnyi model realizes , and the inclusion of more connections decreases the entropy , as one can see from ( [ poisson ] ) .this phenomenon can be heuristically understood by realizing that the inclusion of edges randomly ( with uniform probability to each possible pair of nodes ) leads the distribution to converge to a kronecker delta , _i.e. _ , the vertices tend to have all the same degree ( that increases with time ) from the statistical standpoint .figure 1 : entropy of time - dependent erds - rnyi model ( and ) .the watts - strogatz model is a small - world network that , unlike the erds - rnyi graph , keeps high clustering .the analytical approach treats it as a static model , despite the fact that it is obtained as an intermediate configuration in rewiring process between a regular lattice and a random graph .we will define a dynamical model that generates a small - world network similar to the one introduced by watts and strogatz . 
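before turning to the watts-strogatz dynamics, the edge-addition process above can be simulated directly; the sketch below estimates the shannon entropy of the empirical degree distribution at a few times (parameters are illustrative, and the entropy is computed in nats).

```python
import numpy as np
from collections import Counter

def erdos_renyi_dynamics(n, steps, rng):
    """add one edge per time step between two uniformly chosen vertices
    (loops allowed, as in the model); a loop adds two to the degree of its vertex."""
    deg = np.zeros(n, dtype=int)
    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        deg[i] += 1
        deg[j] += 1
    return deg

def shannon_entropy(deg):
    counts = np.array(list(Counter(deg).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
n = 200
for t in [n // 2, n * (n - 1) // 2, 5 * n * (n - 1) // 2]:
    print(t, shannon_entropy(erdos_renyi_dynamics(n, t, rng)))
```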
although being slightly different from the original watts - strogatz model , it is statistically equivalent and suitable for analytical investigations .the initial condition of our model consists of a ring with vertices , and each vertex has degree by having a single link to its next - neighbors as in watts - strogatz model .the model has , therefore , edges with total degree .the dynamics obeys the following scheme : \(i ) an edge end is chosen with uniform probability .\(ii ) this extremity is reconnected with probability ( and kept without reconnection with probability ) .\(iii ) back to ( i ) ( repetition for a fixed number of iterations ) .therefore , the probability of a vertex having degree at time obeys the discrete time recurrent equation where stands for the discrete - time transition rate ( conditional probability ) from the state of degree to degree , as in the previous section .furthermore , the initial condition is consider now a vertex at time ; it can have degree at time in the following scenarios : \i ) the vertex has degree at time and degree at time : an edge - end , which is not connected to , is chosen with probability .then , it rewires with probability , and links to with probability ; therefore , one has \ii ) the vertex has degree at time and degree at time : an edge - end connected to is chosen with probability .then , it rewires with probability , and links to another vertex , say , with probability ; therefore , one has \iii ) the vertex has degree at time and remains with degree at time : this scenario is divided in four cases , as follows .iiia ) an edge - end connected to is chosen with probability , rewires with probability , and links again to with probability ; iiib ) an edge - end connected to is chosen with probability , but does not rewire ( this happens with probability ) ; iiic ) an edge - end not connected to is chosen with probability , rewires with probability , and links to a vertex that is not with probability ; iiid ) an edge - end not connected to is chosen with probability , but does not rewire ( this happens with probability ) ; the conditional probability associated to the union of disjoint events iiia to iiid is the dynamics defined above can generate a graph similar to the watts - strogatz model . for ,one has a interval of where the system displays high clustering and low mean shortest path length , as shown in figure 2 .figure 2 : clustering and shortest path length ( normalized by and , respectively ) of the graph generated by the dynamics of section [ tdws ] .the parameters are , and with realizations of the simulations ; the error bars are smaller than the size of the points. 
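a minimal simulation of this edge-end rewiring dynamics is sketched below, using networkx to measure the clustering coefficient and the mean shortest path length of the resulting graph. rejecting self-loops and parallel edges is an implementation choice that the text leaves open, and all parameters are illustrative.

```python
import random
import networkx as nx

def ws_dynamics(n, c, p, steps, seed=0):
    """edge-end rewiring dynamics on an initial ring lattice in which every vertex is
    linked to its c nearest neighbours (c/2 on each side). at each step one edge end
    is chosen uniformly; with probability p it is detached and reconnected to a
    uniformly chosen vertex (self-loops and parallel edges are rejected here)."""
    rng = random.Random(seed)
    g = nx.watts_strogatz_graph(n, c, 0.0, seed=seed)   # p = 0 gives the plain ring lattice
    for _ in range(steps):
        if rng.random() >= p:
            continue                                     # chosen end is kept in place
        u, v = rng.choice(list(g.edges()))
        if rng.random() < 0.5:                           # decide which end of the edge moves
            u, v = v, u
        w = rng.randrange(n)                             # new endpoint for the moved end
        if w == u or g.has_edge(u, w):
            continue
        g.remove_edge(u, v)
        g.add_edge(u, w)
    return g

g = ws_dynamics(n=500, c=6, p=0.3, steps=3000)
giant = g.subgraph(max(nx.connected_components(g), key=len))
print(nx.average_clustering(g), nx.average_shortest_path_length(giant))
```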
the time - dependent degree distribution can be evaluated iteratively from the recurrent equation ( [ me ] ) and ( [ pkt ] ) , and this allows one to compute the entropy of the model , which is shown in figure 3 .figure 3 : entropy of the time - dependent watts - strogatz model for , and .the entropy starts from a low value , as expected since the initial condition of watts - strogatz model is a regular lattice with .the entropy , then , grows with time , but reaches a constant value : differently from the erds - rnyi model , the watts - strogatz graph has no new connection being added , and the system converges to a stationary degree distribution different from a kronecker - delta - like as in the erds - rnyi case .introducing , again , the z - transform ( [ z ] ) to the recurrent equation of the time - dependent degree distribution obtained by combining ( [ me ] ) and ( [ pkt ] ) , one has where the initial condition stands for each vertex having exactly connections .the explicit form of the operator , which acts on this polynomial , will be presented in the next section . fornow , it is sufficient to state that the analytical form of the time - dependent degree distribution is not well explored in the literature .this section is devoted to develop the arguments that will establish analytic results concerning the time - dependent degree distribution of the two models above .the erds - rnyi case will support and illustrate our arguments , since its a simpler laboratory and the exact form ( [ pkt_er ] ) is already known . as seen in section [ tder ] ,the time - dependent degree distribution is the coefficient of the -term in , as one can see from ( [ z ] ) .moreover , from ( [ phiphi0 ] ) and , we have .this means that one should search for the -term of a polynomial resulted from the application of for times on .the operator , however , can be divided into a sum of three operators , , and .this separation is convenient , since when these operators are applied on a monomial ( ) , one has the following behavior : hence , starting from degree , one can see the procedure of applying times the operator as follows . since the -term is a sum of many terms , each of them a product of , and .let us consider as an example ; in this case , the -term of is and this is . in the first term ,the system remains with degree zero at time and increases two unities at ; similar interpretation can be made for the second and third terms .the time - dependent degree distribution is , therefore , a sum of all trajectories , which are random walks in degree space ( see figure 4 ) , that leads at to degree at time . at each time step, the degree can increase one unity , or two unities , or stay constant with probabilities , and , respectively ( note that and ) .hence , denoting by the degree at time , it is straightforward that where for and the first two kronecker deltas refer to the initial and final conditions ; each term inside the parenthesis indicates if the degree at time remains constant or increases ( with one or two unities ) when compared to the degree at the previous instant , .figure 4 : three examples of possible evolution of the degree ( these examples do not apply for the erds - rnyi model , where the degree never decreses ) . the initial and final degrees should be and , respectively . 
the continuous version of ( [ er_rw ] ) is a path - integral formulation of the problem .nevertheless , it does not lead to an expression that can be trivially tackled by the usual methods .the time - dependent degree distribution can be evaluated explicitely by exploring the property that , and are -numbers . during the time interval , there should be , and terms of , and , respectively , such that and . therefore , which yields the same result of ( [ pkt_er ] ) , as expected . in ( [ er_step ] ) , is the largest integer equal or less than , and the last equality can be shown after a lengthy induction argument . finally , one can also restate the recurrent equation as , where now we have two types of operators , that act for an interval of time equal to on the initial condition .similarly as in the previous case , the time - dependent watts - strogatz degree distribution is the -term of , where now the initial condition is and with the form of these operators , which are not -numbers anymore , can be deduced by ( [ w(k|k-1 ) ] ) , ( [ w(k|k+1 ) ] ) , ( [ w(k|k ) ] ) and the z - transform of ( [ me ] ) .when these operators are applied on a polynomial of degree , one has with note that now the coefficients , and are not constants and the operators , and do not commute as in erds - rnyi case .following the same argument that has led to ( [ er_rw ] ) , we have for the watts - strogatz model .the degree starts with at time and ends with at time . between these boundaries ,the variable performs a random walk .this expression is not analytically treatable , and we will invoke some simplifications , which consist of choosing the dominant contributions ( paths ) to . in this section , we will concentrate on the dominant contributions to the degree distribution .this follows by choosing a class of paths that starts at and ends at . by noticing that , the dominant contributions come from terms that maximize the number of -factorsthis implies minimizing the number of -factors or -factors such that they should appear only to change the degree from to . in other terms, we have terms of ( ) type if ( ) , and the remaining terms are of type . note that these are monotonic paths in the sense that the degree only increases ( if ) or decreases ( if ) .let us consider initially , the case .writing the sum of all monotonic paths as being equal to the time - dependent degree distribution leads to the terms are functions of the degree ( see equation ( [ bad ] ) ) , and not on the instant they appear . in the monotonic crescent path , therefore , each term , should appear one and only one time in this order .the remaining segments of the path are filled by -terms , and there should be of them that are , of them that are , and so on ( see figure 5 ) .firstly it is immediate from ( [ bad ] ) that figure 5 : increasing monotonic paths .all the monotonic paths , when , are located inside the envelope defined by the dashed lines .the upper dashed line corresponds to the path , and the lower dashed line is the monotonic path . on the other hand , by using one has where we have performed the change of variables , , up to in the last passage .therefore , one has and by ( [ bbb > ] ) and ( [ aaa>2 ] ) one finally finds ^{\delta } \\ & & ( k\geq k_{0})\ , .\label{pkt>}\end{aligned}\ ] ] the monotonic paths when is such that since now the -terms are needed to decrease the degree .since by following a similar procedure as before , one has ^{\left|\delta\right| } \\ & & ( k < k_{0})\ , .\label{pkt<}\end{aligned}\ ] ] for . 
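for the erdős-rényi case, the sum over degree-space walks can be enumerated directly. in the sketch below the step probabilities follow from the uniform and independent choice of the two edge ends (exactly one end on the tagged vertex gives +1, both ends give +2); these expressions are reconstructed from the model definition rather than copied from the text, and the monte-carlo check uses illustrative parameters.

```python
import numpy as np
from math import comb
from collections import Counter

def p_degree(k, t, n):
    """probability that a tagged vertex has degree k after t edge additions, obtained
    by summing over all degree-space walks with a steps of +1, b steps of +2 (a loop
    on the vertex) and c steps with no change."""
    alpha = 2 * (n - 1) / n**2      # exactly one edge end lands on the tagged vertex
    beta = 1 / n**2                 # both ends land on the vertex (loop)
    gamma = 1.0 - alpha - beta      # neither end lands on the vertex
    total = 0.0
    for b in range(k // 2 + 1):
        a = k - 2 * b
        c = t - a - b
        if c < 0:
            continue
        total += comb(t, a) * comb(t - a, b) * alpha**a * beta**b * gamma**c
    return total

# quick monte-carlo check of the enumeration
n, t, runs = 50, 200, 10000
rng = np.random.default_rng(0)
ends = rng.integers(0, n, size=(runs, t, 2))
degrees = (ends == 0).sum(axis=(1, 2))          # degree of vertex 0 in each run
empirical = Counter(degrees)
for k in range(4, 14, 3):
    print(k, p_degree(k, t, n), empirical[k] / runs)
```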
the comparison between the ( exact ) numerical time - dependent degree distribution obtained from the recurrent equation and the estimations ( [ pkt > ] ) and ( [ pkt < ] ) are shown in figure 6 . the formulas ( [ pkt > ] ) and ( [ pkt < ] ) should be asymptotically exact for and .the reason for this statement comes from a simple analysis of the order of magnitude of the paths . remembering that , and , a monotonic path is , while there are of them .the first correction is due terms that have a and terms more than the monotonic paths terms ( and two terms less ) .each one of its first correction terms are , and there are of them .the contribution of the first correction is roughly times the contribution of the monotonic paths .this argument can be extended to corrections of all orders .therefore , for , one expects that the formulas from the monotonic paths only are asymptotically exact for and .naturally , the same argument concludes that our estiamtions fail in the case .the numerical solution in figure 6 shows that our estimations apply in the case , while the same comment can not be made for , as expected .figure 6 : time - dependent degree distribution .the points are associated to numerically exact results , and were obtained from the recurrent equation ( [ me ] ) .the points generated from equations ( [ pkt > ] ) and ( [ pkt < ] ) were interpolated with lines for better visualization .inset : a detailed visualization of the time - dependent degree distribution ( logarithmic scale for the vertical axis ) for and .in this work , we have formulated the erds - rnyi and watts - strogatz graphs as a dynamic model and characterized their behavior from the standpoint of their entropies .we have also examined their time - dependent degree distribution analytically .the erds - rnyi model is analytically accessible , while the same does not extend to the watts - strogatz model .we have , nevertheless , obtained a formula that is asymptotically exact for and confirmed this validity numerically .the main ideia to achieve this result was to consider the evolution of the degree distribution as a random walk in degree space and select the paths that have dominant contribution .we have also presented the argument that support the range of validity of our formula , which is based on the estimation of the order of magnitude of contribution of relevant terms .hlcg thanks pnpd / capes ( ed .82/2014 ) for financial support .mc acknowledges the oea scholarship program and capes for financial support .
in this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. the scheme maps the problem onto a random walk in degree space, and we then choose the paths that are responsible for the dominant contributions. the method is illustrated on dynamical versions of the erdős-rényi and watts-strogatz graphs, which were introduced as static models in their original formulations. we have succeeded in obtaining an analytical form for the dynamic watts-strogatz model, which is asymptotically exact in some regimes.
* dag realization problem : * given is a finite sequence with does there exist an acyclic digraph ( without parallel arcs ) with the labeled vertex set such that we have indegree and outdegree for all ? + if the answer is `` yes '' , we call sequence _ dag sequence _ and the acyclic digraph ( a so - called `` dag '' ) a _ dag realization_. a relaxation of this problem not demanding the acyclicity of digraph is called _ digraph realization problem_. in this case , we call _ digraph realization _ and _ digraph sequence_. the digraph realization problem can be solved in linear - time using an algorithm by wang and kleitman . unless explicitly stated , we assume that a sequence does not contain any _ zero tuples _ .moreover , we will tacitly assume that , as this is obviously a necessary condition for any realization to exist , since the number of ingoing arcs must equal the number of outgoing arcs .furthermore , we denote tuples with and as _ sink tuples _ , those with and as _ source tuples _ , and the remaining ones with and as _ stream tuples_. we call a sequence only consisting of source and sink tuples , _ source - sink - sequence_. a sequence with source tuples and sink tuples is denoted as _ canonically sorted _ , if and only if the first tuples in this labeling are decreasingly sorted source tuples ( with respect to the ) and the last tuples are increasingly sorted sink tuples ( with respect to the ) . + * hardness and efficiently solvable special cases .* nichterlein very recently showed that the dag realization problem is np - complete . on the other hand ,there are several classes of sequences for which the problem is not hard .one of these sequences are source - sink - sequences , for which one only has to find a digraph realization .the latter is already a dag realization , since no vertex has incoming as well as outgoing arcs .furthermore , sparse sequences with are polynomial - time solvable as we will show below .we denote such sequences by _ forest sequences_. the main difficulty for the dag realization problem is to find out a `` topological ordering of the sequence '' . in the case where we have one , our problem is nothing else but a directed -factor problem on a complete dag .the labeled vertices of this complete dag are ordered in the given topological order .this problem can be reduced to a bipartite undirected -factor problem which can be solved in polynomial time via a further famous reduction by tutte to a bipartite perfect matching problem . in a previous paper , we proved that a certain ordering of a special class of sequences _opposed sequences_ always leads to a topological ordering of the tuples for at least one dag realization of a given dag sequence . on the other hand , it is not necessary to apply the reduction via tutte if we possess one possible topological ordering of a dag sequence .the solution is much easier .next , we describe our approach . + * realization with a prescribed topological order .* we denote a dag sequence which possesses a dag realization with a topological numbering corresponding to the increasing numbering of its tuples by _ dag sequence for a given topological order _ and analogously the digraph by _ dag realization for a given topological order_. 
without loss of generality , we may assume that the source tuples come first in the prescribed numbering and are ordered decreasingly with respect to their values .a realization algorithm works as follows .consider the first tuple from the prescribed topological order which is not a source tuple .then there must exist source tuples with a smaller number in the given dag sequence .reduce the first ( i.e. with largest ) source tuples by one and set the indegree of tuple to that means , we reduce sequence to sequence if we get zero tuples in then we delete them and denote the new sequence for simplicity also by furthermore , we label this sequence with a new numbering starting from one to its length and consider this sorting as the given topological ordering for .we repeat this process until we get an empty sequence ( corresponding to the realizability of ) or get stuck ( corresponding to the non - realizability of ) .the correctness of our algorithm is proven in lemma [ th : topologicalrealization ] .[ th : topologicalrealization ] is a dag sequence for a given topological order is a dag sequence for its corresponding topological order .* discussion of our main theorem and its corresponding algorithm .* we do not know how to determine a feasible topological ordering ( i.e. , one corresponding to a realization ) for an arbitrary dag sequence .however , we are able to restrict the types of possible permutations of the tuples . for that ,we need the following order relation , introduced in . given are and we define : note , that a pair equals with respect to the opposed relation if and only if and the opposed relation is reflexive , transitive and antisymmetric and therefore a partial , but not a total order .our following theorem leads to a recursive algorithm with exponential running time and results in corollary [ korollartopologischerealisation ] which proves the existence of a special type of possible topological sortings provided that sequence is a dag sequence .[ realisationdagsequenzen ] let be a canonically sorted sequence containing source tuples .furthermore , we assume that is not a source - sink - sequence .we define the set is a dag sequence if and only if and there exists an element such that is a dag sequence .( ) return true sequence may contain zero tuples . if this is the case , we delete them and call the new sequence for simplicity also .theorem [ realisationdagsequenzen ] ensures the possibility for reducing a dag sequence into a source - sink - sequence .the latter can be realized by using the algorithm for realizing digraph sequences .the whole algorithm is summarized in algorithm [ alg : dag realization ] , where we consider the maximum subset of only containing pairwise disjoint stream tuples .the bottleneck of this approach is the size of set we give an example for the execution of algorithm [ alg : dag realization ] in the appendix .our pseudocode does not specify the order in which we process the elements of in line 3 .several strategies are possible which have a significant influence on the overall performance . the most promising deterministic strategy( as we will learn in the next sections ) is to use the lexicographic order , starting with the lexicographic maximum element within . 
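the realization procedure for a prescribed topological order described at the beginning of this section can be sketched as follows; the toy sequence at the end, and the assumption that its given ordering is feasible, are purely illustrative.

```python
def realize_with_topological_order(seq):
    """
    greedy realization of a dag sequence seq = [(a_i, b_i), ...] whose ordering is
    assumed to be a feasible topological order (a_i = indegree, b_i = outdegree).
    returns a list of arcs (labels of the original tuples), or None if the
    procedure gets stuck.
    """
    items = [[a, b, label] for label, (a, b) in enumerate(seq)]
    arcs = []
    while True:
        items = [it for it in items if it[0] > 0 or it[1] > 0]   # drop zero tuples
        if not items:
            return arcs                                          # fully realized
        pos = next((p for p, it in enumerate(items) if it[0] > 0), None)
        if pos is None:
            return None                  # only sources with unused out-stubs remain
        v = items[pos]
        # sources preceding v, largest remaining outdegree first
        sources = sorted(items[:pos], key=lambda it: -it[1])
        if len(sources) < v[0]:
            return None                  # stuck: not enough sources before v
        for s in sources[:v[0]]:         # connect the v[0] largest sources to v
            arcs.append((s[2], v[2]))
            s[1] -= 1
        v[0] = 0                         # v has now received all its incoming arcs

# toy sequence of (indegree, outdegree) tuples, assumed to be topologically ordered
print(realize_with_topological_order([(0, 2), (0, 1), (1, 2), (2, 1), (2, 0), (1, 0)]))
```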
in introduced a special class of dag sequences _ opposed sequences _ where we have if sequence is not a source - sink - sequence .we call a sequence _ opposed sequence _ , if it is possible to sort its stream tuples in such a way , that and is valid for stream tuples with indices and in this case , we have the property for all stream tuples .at the beginning of the sequence we insert all source tuples such that the build a decreasing sequence and at the end of sequence we put all sink tuples in increasing ordering with respect to the corresponding the notion _ opposed sequence _ describes a sequence , where it is possible to compare all stream tuples among each other and to put them in a `` chain '' .indeed , this is not always possible because the opposed order is not a total order .however , for opposed sequences line ( 3 ) to line ( 9 ) in algorithm [ alg : dag realization ] are executed at most once in each recursive call , because we have always overall , we obtain a linear - time algorithm for opposed sequences . however , there are many sequences which are not opposed , but theorem [ realisationdagsequenzen ] still yields a polynomial decision time .consider for example dag sequence which is not an opposed sequence , because stream tuples and are not comparable with respect to the opposed ordering .however , we have and so we reduce to , leading to the realizable source - sink - sequence theorem [ realisationdagsequenzen ] leads to further interesting insights. we can prove the existence of special topological sortings .[ korollartopologischerealisation ] for every dag sequence , there exists a dag realization with a topological ordering of all vertices corresponding to stream tuples , such that we can not find for we call a topological ordering of a dag sequence obeying the conditions in corollary [ korollartopologischerealisation ] an _ opposed topological sorting_. at the beginning of our work ( when the complexity of the dag realization problem was still open ) , we conjectured that the choice of the lexicographical largest tuple from in line ( 3 ) would solve our problem in polynomial time .we call this approach _ lexmax strategy _ and a dag sequence which is realizable with this strategy _ lexmax sequence _ , otherwise we call it _non - lexmax sequence_. hence , we conjectured the following .[ co : lexmaxconjecture ] each dag sequence is a lexmax sequence .we soon disproved our own conjecture by a counter - example ( example [ example ] , described in the following section and in appendix [ app : examples ] ) . in systematic experiments we found out that a large fraction of sequences can be solved by this strategy in polynomial time .we tell this story in the next section [ storyexperiments ] .moreover , we use the structural insights from our main theorem to develop a randomized algorithm which performs well in practice ( section [ randomisiertealgorithmen ] ) .proofs and further supporting material can be found in the appendix and in .* why we became curious .* to see whether our lexmax conjecture [ co : lexmaxconjecture ] might be true , we generated a set of dag sequences , called _ randomly generated sequences _ in the sequel , by the following principle : starting with a complete acyclic digraph , delete of its arcs uniformly at random .we take the degree sequence from the resulting graph .note that we only sample uniformly with respect to random dags but not uniformly degree sequences since degree sequences have different numbers of corresponding dag realizations . 
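since the exact inequalities of the opposed relation are not reproduced above , the following sketch fixes one consistent reading ( indegrees non - decreasing , outdegrees non - increasing ) and tests whether the stream tuples of a sequence can be arranged in a single opposed chain ; with this reading , sorting by ( indegree , - outdegree ) and checking consecutive pairs suffices .

```python
def opposed_leq(t1, t2):
    """Opposed relation on stream tuples (a, b): our assumed reading is
    t1 <= t2 iff a1 <= a2 and b1 >= b2 (indegrees non-decreasing,
    outdegrees non-increasing)."""
    (a1, b1), (a2, b2) = t1, t2
    return a1 <= a2 and b1 >= b2

def is_opposed_sequence(stream_tuples):
    """True if the stream tuples can be arranged in a single chain of the
    opposed order.  Sorting by (a, -b) and checking consecutive pairs is
    enough, because any chain must appear in exactly that order."""
    chain = sorted(stream_tuples, key=lambda t: (t[0], -t[1]))
    return all(opposed_leq(chain[i], chain[i + 1]) for i in range(len(chain) - 1))

# A chain of pairwise comparable stream tuples versus an incomparable pair.
print(is_opposed_sequence([(1, 3), (2, 2), (3, 1)]))   # True: forms a chain
print(is_opposed_sequence([(1, 1), (2, 2)]))           # False: incomparable
```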
in a first experiment we created with the described process one million dag sequences with 20 tuples each , and .likewise , we built up another million dag sequences with 25 tuples and .the fact that the lexmax strategy realized all these test instances without a single failure was quite encouraging .the lexmax conjecture [ co : lexmaxconjecture ] seemed to be true , only a correctness proof was missing .but quite soon , in an attempt to prove the conjecture , we artificially constructed a first counter - example , a dag sequence which is definitely no lexmax sequence , as can easily be verified : even worse : we also found an example ( example [ ex : strategiesichtweise ] ) showing that no fixed strategy which chooses an element from in algorithm [ alg : dag realization ] and does not consider the corresponding set of sinks , will fail in general .[ ex : strategiesichtweise ] we consider the two sequences and only differing in their sink tuples .sequence can only be realized by the lexmax strategy , while several strategies but not the lexmax strategy work for .thus , there is no strategy which can be applied in both cases .these observations give rise to several immediate questions : why did we construct by our sampling method ( for and ) only dag sequences which are lexmax sequences ?how many dag sequences are not lexmax sequences ?therefore , we started with systematic experiments . for small instances with tuples we generated systematically the set of all dag sequences with all possible , see for an example the case in figure [ fig : percentagelexmaxsequences ] and appendix [ app : material ] .more precisely , we considered only _ non - trivial sequences _ , i.e. we eliminated all source - sink sequences and all sequences with only one stream tuple .we denote this set by _ systematically generated sequences_. note that the number of sequences grows so fast in that a systematic construction of all sequences with a larger size is impossible .we observed the following : 1 .the fraction of lexmax sequences among the systematically generated sequences is quite high .for all it is above , see figure [ fig : percentagelexmaxsequences ] ( blue squares ) .the fraction of lexmax sequences strongly depends on .it is largest for sparse and dense dags .lexmax sequences are overrepresented among one million randomly generated sequences ( for each ) , we observe more than for all densities of dags , see figure [ fig : percentagelexmaxsequences ] ( red triangles ) .this leads to the following questions : given a sequence for which we seek a dag realization .how should we proceed in practice ? as we have seen , the huge majority of dag sequences are lexmax sequences .is it possible to find characteristic properties for lexmax sequences or non - lexmax sequences , respectively ?+ * distance to opposed sequences .* let us exploit our characterization that opposed sequences are efficiently solvable .we propose the _ distance to opposed _ for each dag sequence consider for that the topological order of a dag realization given by algorithm [ alg : dag realization ] , if in line ( 3 ) elements are chosen in decreasing lexicographical order .this ordering corresponds to exactly one path of the recursion tree .thus , we obtain one unique dag realization for , if existing .now , we renumber dag sequence such that it follows the topological order induced by the execution by this algorithm , i.e. 
by the sequence of choices of elements from then the distance to opposed is defined as the number of pairwise incomparable stream tuples with respect to this order , more precisely ,

question 1 : do randomly generated sequences possess a preference for a `` small '' distance to opposed in comparison with systematically generated sequences ?

in figure [ fig : systematicdifferencetoopposed ] ( left ) , we show the distribution of systematically generated sequences ( in % ) with their distance to opposed , depending on . we compare this scenario with the same setting for randomly generated sequences , shown in figure [ fig : systematicdifferencetoopposed ] ( right ) .

figure caption : percentage of systematically generated sequences ( left ) and randomly generated sequences ( right ) with their distance to opposed for tuples and .

_ observations : _ systematically generated sequences have a slightly larger range of the `` distance to opposed '' than randomly generated sequences . moreover , when we generate dag sequences systematically , we obtain a significantly larger fraction of instances with a larger distance to opposed than for randomly generated sequences , and this phenomenon can be observed for all .

question 2 : do non - lexmax sequences possess a preference for large opposed distances ?

since opposed sequences are easily solvable , we conjecture that sequences with a small distance to opposed might be more easily solvable by the lexmax strategy than those with a large distance to opposed . if this conjecture were true , it would give us , together with our findings from question 1 , one possible explanation for the observation that the randomly generated sequences have a larger fraction of sequences efficiently solvable by the lexmax strategy . _ observations : _ a separate analysis of non - lexmax sequences ( that is , the subset of instances unsolved by the lexmax strategy ) , displayed in figure [ fig : opposeddistancenonlexmax ] , gives a clear picture : yes ! for systematically generated sequences with , we observe , in particular for instances with a middle density , that the fraction of non - lexmax sequences becomes maximal for a relatively large distance to opposed .

question 3 : can we solve real - world instances by the lexmax strategy ?
a ) ordered binary decision diagrams ( obdds ) : in such networks the outdegree is two , that is , constant . this immediately implies that the corresponding sequences are opposed sequences , and hence they can provably be solved by the lexmax strategy .
b ) food webs : such networks are almost hierarchical and therefore have a strong tendency to be acyclic ( `` larger animals eat smaller animals '' ) . in our experiments we analyzed food webs from the pajek network library .
c ) train timetable network : we use timetable data of german railways from 2011 and form a time - expanded network . its vertices correspond to departure and arrival events of trains ; a departure vertex is connected by an arc with the arrival event corresponding to the very next train stop . moreover , arrival and departure events at the same station are connected whenever a transfer between trains is possible or if the two events correspond to the very same train .
d ) flight timetable network : we use the european flight schedule of 2010 and form a time - expanded network as in c ) .
the characteristics of our real - world networks b ) - d ) are summarized in table [ tab : real - world ] . the _ dag density _ of a network is defined as . to compare the distance to opposed for instances of different sizes , we normalize this value by the theoretical maximum , where denotes the number of stream tuples , and so obtain a _ normalized distance to opposed _ . without any exception , all real - world instances have been realized by the lexmax strategy .
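to make the two indicators used above concrete , here is a small sketch computing the distance to opposed ( the number of pairwise incomparable stream tuples , taken in the topological order produced by the realization algorithm ) and a normalized variant ; the normalizing constant s ( s - 1 ) / 2 , with s the number of stream tuples , is our assumption for the theoretical maximum mentioned in the text , and the opposed relation is the same assumed reading as in the earlier sketch .

```python
def opposed_leq(t1, t2):
    # same assumed reading of the opposed relation as in the earlier sketch
    return t1[0] <= t2[0] and t1[1] >= t2[1]

def distance_to_opposed(stream_tuples_in_topological_order):
    """Count index pairs i < j whose stream tuples are incomparable under the
    opposed relation; the tuples are taken in the topological order produced
    by the realization algorithm (our reading of the definition above)."""
    ts = stream_tuples_in_topological_order
    d = 0
    for i in range(len(ts)):
        for j in range(i + 1, len(ts)):
            if not opposed_leq(ts[i], ts[j]) and not opposed_leq(ts[j], ts[i]):
                d += 1
    return d

def normalized_distance_to_opposed(stream_tuples):
    """Normalise by the number of stream-tuple pairs, s*(s-1)/2 -- an assumed
    stand-in for the theoretical maximum mentioned in the text."""
    s = len(stream_tuples)
    return 0.0 if s < 2 else distance_to_opposed(stream_tuples) / (s * (s - 1) / 2)

example = [(1, 3), (2, 2), (2, 4), (3, 1)]
print(distance_to_opposed(example), normalized_distance_to_opposed(example))
```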
we study the following fundamental realization problem of directed acyclic graphs ( dags ) . given a sequence with , does there exist a dag ( no parallel arcs allowed ) with labeled vertex set such that for all indegree and outdegree of match exactly the given numbers and , respectively ? recently this decision problem has been shown to be np - complete by nichterlein . however , we can show that several important classes of sequences are efficiently solvable . in previous work , we have proved that yes - instances always have a special kind of topological order which allows us to reduce the number of possible topological orderings in most cases drastically . this leads to an exact exponential - time algorithm which significantly improves upon a straightforward approach . moreover , a combination of this exponential - time algorithm with a special strategy gives a linear - time algorithm . interestingly , in systematic experiments we observed that we could solve a huge majority of all instances by the linear - time heuristic . this motivates us to develop characteristics like dag density and `` distance to provably easy sequences '' which can give us an indicator how easy or difficult a given sequence can be realized . furthermore , we propose a randomized algorithm which exploits our structural insight on topological sortings and uses a number of reduction rules . we compare this algorithm with other straightforward randomized algorithms in extensive experiments . we observe that it clearly outperforms all other variants and behaves surprisingly well for almost all instances . another striking observation is that our simple linear - time algorithm solves a set of real - world instances from different domains , namely ordered binary decision diagrams ( obdds ) , train and flight schedules , as well as instances derived from food - web networks without any exception .
featuring simultaneous tracking and targeting of tumours , image - guided radiation therapy ( igrt ) paves a promising path towards a safer , more efficient and more accurate treatment . in autumn 2004 , varian medical systems , inc .( vms ) , palo alto , ca , installed the first clinically - applicable solution for igrt : the vms clinac accelerator equipped with imaging functionality . unless a number of calibrations have been performed on such a complex system , the device is not operable .the aim of the present paper is to describe one such important process , the geometric calibration .the method outlined here applies equally to imaging and delivery units where the source and a flat - panel detector ( mounted in such a way as to face one another ) move in circular orbits around the irradiated object .the rotation plane of the source is assumed to contain the geometric centre of the detector ; modifications are needed in case of tilted geometry , e.g. , in a c - arm configuration .the aim of the geometric calibration is threefold .( i ) to yield the values of the important parameters defining the geometry of the system .if an imaging unit is calibrated , these values are subsequently used in the processing of the scan data ( reconstruction phase ) .if a delivery unit is calibrated , the output may lead to the suspension of the operation of the system in case , for instance , that the deviations of the parameter values from the nominal ones are beyond the tolerance limits .( ii ) to provide an assessment of the deviation from the ideal world , where the machine components move smoothly and rigidly in exact circular orbits ; these effects have appeared in the literature as ` mechanical flex ' , ` nonidealities ' , ` nonrigid motion of the system components ' , etc .( iii ) to enable the investigation of the long - term mechanical stability of the system ; this examination involves the creation of a database containing the output of each calibration . on top of the arguments which were just put forth concerning the necessity of the geometric calibration separately for imaging and delivery units ,there is one additional remark which specifically applies to igrt systems : the geometric calibration links together the two units comprising the machine . with the relationship between the two components of the system having been set ,the updated anatomic information ( obtained from the imaging unit ) may be quickly processed and the dose distribution re - evaluated in the area which is subsequently subjected to radiation ( delivery unit ) ; thus , modifications in the treatment plan are enabled on the daily basis , reflecting the most up - to - date information in the region of interest . 
concerning earlier methods pertaining to the calibration of the geometry in computed tomography , an overview may be obtained from the paper of noo ( 2000 ) ; that work set forth an approach to extract the parameters of the geometry from the elliptical projections of fixed points , and inspired additional research in the field .fahrig and holdsworth ( 2000 ) employed a small steel ball bearing ( bb ) placed at the isocentre , and traced its projection across a series of images ; corrections to the scan data were subsequently calculated ( as a function of the gantry angle ) from the movement of the image centroid on the detector .a similar approach was followed by jaffray ( 2002 ) , whereas mitschke and navab ( 2003 ) developed a method featuring a ccd camera attached to the head of the x - ray source .siewerdsen ( 2005 ) use a phantom consisting of a helical pattern of bbs ; the data analysis results in the creation of their ` flex maps ' , which then lead to the assessment of the mechanical nonidealities .finally , the interesting paper of cho ( 2005 ) introduces a phantom consisting of an arrangement of steel bbs in two circular patterns and achieves the description of the geometry of the system via a set of spatial lengths and distortion angles . in this paper , we present one of the methods which are currently in use in the geometric calibration of the vms devices ; a shorter description of this image - based calibration technique has appeared in matsinos ( 2005 ) .contrary to the bulky cylinders which are generally needed in other schemes in order to extract the parameters of the geometry , we use an easy - to - handle ( i.e. , to mount and dismount ) and light needle phantom which , additionally , can easily be manufactured .the extraction of the values of the model parameters is achieved via a method which has been developed with attention to the robustness of the final outcome .the needle phantom comprises five cylindrical -mm - long metallic needles ( mm ) embedded in a urethane compound ( obomodulan 500 , obo - werke gmbh & co. kg , stadthagen , germany ) of cylindrical form .the dimensions of the urethane housing are : mm ( diameter ) and mm ( height ) .the weight of the needle phantom is about gr .the needles are made of a chromium - nickel alloy ( 1.4305 , edelstahlwerke sdwestfalen gmbh , siegen , germany ) and are coplanar , parallel and equidistant ( -mm separation ) ; the axis of the central needle coincides with the symmetry axis of the needle phantom .marks have been incised into its surface to enable the proper alignment with a laser system .the needle phantom may easily be mounted directly onto the vms couch ( exact couch ) ; it is placed in such a way that the needles are parallel to the rotation axis .the vms paxscan 4030cb amorphous - silicon flat - panel detector , currently used in the data acquisition , is a real - time digital x - ray imaging device comprising square elements ( pixels ) ; the detector spans an approximate area of . 
in order to expedite the data transfer and processing , the so - called half - resolution ( -binning ) mode is used in almost all applications; thus , the detector is assumed to consist of ( logical ) pixels ( pitch : m ) .the detector is connected to the body of the system via a set of robotic arms enabling three - dimensional ( 3d ) movement .the experimental data which are analysed in the present paper were obtained at the vms laboratory in baden , switzerland , and involved two vms devices : the acuity simulator ( as ) , a machine dedicated to imaging ( figure [ fig : placementatisocentre ] ) , and the ` on - board imager ' ( obi ) system , the imaging unit of the vms igrt devices . the description of these two units may be obtained directly from the website of the manufacturer ( ` www.varian.com ' ) . in both devices ,the x - ray pulses are produced by an x - ray tube , the vms model g242 .the position of the x - ray source is fixed in the as device , but adjustable ( 2d movement ) in the obi unit . to correct for the ( large ) anisotropy in the radiation field ,wedge filters have been mounted at the head of the gantry in the as device ; there are no such filters in the obi unit .the ideal geometry and placement of the needle phantom are shown in figure [ fig : placementatisocentre ] , where and denote the source - isocentre ( often referred to as sad ) and isocentre - detector distances ; is the gantry angle , identifying the position of the source . in this ideal world , the isocentre , defined as the intersection of the rotation axis and the plane on which the x - ray - source locus s lies during a scan , is a fixed point in space ; a point source is assumed ( cone - beam geometry ) , circumscribing an exact circle .the projection of the isocentre onto the detector ( point m ) coincides with the geometric centre of the detector ; ideally , the line sm is perpendicular to the detector surface . concerning the perfect orientation of the detector , two of its sides are paraller to the rotation plane ( and two perpendicular to it ) .in the isocentre reference frame , a point will be represented as ( ,, ) ; facing the gantry , the axis points to the right , the axis towards the gantry and the axis upwards .the second coordinate system pertains to the machine .the coordinates of points in this reference frame will carry the subscript , thus , a point will be represented as ( ,, ) .the origin of the machine reference frame is the isocentre ; additionally , .the isocentre and machine reference frames are related via a simple rotation around the axis , the rotation angle being denoted as ( figure [ fig : coordinatesystems]a ) ; in the ideal case ( ) , the machine and isocentre reference frames are identical .the third coordinate system is attached to the needle phantom .the coordinates of points in this reference frame will carry the subscript , thus , a point will be represented as ( ,, ) .the couch angle will relate this reference frame and an auxiliary one ( figure [ fig : coordinatesystems]b ) , denoted as ( ,, ) , having the same origin ( the geometric centre of the needle phantom ) , but being parallel to the isocentre reference frame ( ,, ) . from figure[ fig : coordinatesystems]b , it is evident that the auxiliary and isocentre reference frames are parallel to one another . 
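as an illustration of how the three reference frames fit together , the sketch below composes a rotation ( couch angle ) from the needle - phantom frame to the auxiliary frame , a translation by the isocentre coordinates to the isocentre frame , and a rotation ( the angle relating isocentre and machine frames ) to the machine frame . the rotation axes and sign conventions are our assumptions , since the explicit matrices are not reproduced in the text .

```python
import numpy as np

def rot_vertical(angle_rad):
    """Rotation about the vertical axis (our assumed axis convention for both angles)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def phantom_to_machine(p_phantom, couch_angle, frame_angle, iso_offset):
    """Chain of transformations sketched in the text (signs and axes assumed):
    phantom frame --rot(couch_angle)--> auxiliary frame
                  --translate(iso_offset)--> isocentre frame
                  --rot(frame_angle)--> machine frame."""
    p_aux = rot_vertical(couch_angle) @ np.asarray(p_phantom, dtype=float)
    # auxiliary and isocentre frames are parallel, so only a translation is needed
    p_iso = p_aux - np.asarray(iso_offset, dtype=float)
    return rot_vertical(frame_angle) @ p_iso

# A point 10 mm and 50 mm off the phantom axes, couch rotated by 1 degree,
# machine frame rotated by 0.2 degrees, isocentre offset of 0.5 mm laterally.
print(phantom_to_machine([10.0, 50.0, 0.0],
                         np.deg2rad(1.0), np.deg2rad(0.2), [0.5, 0.0, 0.0]))
```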
denoting the isocentre coordinates in the auxiliary reference frame as , and , one obtains from figure [ fig : coordinatesystems]a , it is deduced that these three transformations relate the sets of coordinates of a point in the needle - phantom and machine reference frames .two parameters ( and ) have been introduced to account for the miscalibration of the couch and gantry angles .additionally , the parameter takes account of the movement of the source between the time instant corresponding to the retrieval of the gantry - angle value from the system and an average time associated with the actual exposure . including also the quantities , and , the transformation of the coordinates ( from the needle - phantom to the machine reference frame ) involves five parameters in total .the ideal geometry has already been shown in figure [ fig : placementatisocentre ] ; we will now introduce two deviations from this hypothetical situation _ on the rotation plane_. first , the straight line drawn from the isocentre perpendicular to the detector might not contain the source ; in figure [ fig : deviationsonplane ] , this imperfection is represented by the angle ( positive clockwise ) .second , the projection m of the isocentre onto the detector might not coincide with the middle of the segment defined by the intersection of the detector and the rotation plane ; the lateral ( tangential to the circular orbit ) displacement m m of the detector will be denoted as ( positive to the right as one faces the gantry at ) .starting from a point p(, ) on the rotation plane and introducing its polar coordinates as and , the projected length onto the detector is given by the formula as the geometric centre of the detector is the origin of the detector reference frame , the trace q corresponds to a lateral reading equivalent to .the projected length onto the detector in the longitudinal direction ( parallel to the rotation axis ) for an abritrary point ( ,, ) is given by the formula in analogy to the lateral direction , one must introduce a parameter representing the longitudinal displacement of the detector ; this parameter will be denoted as .thus , the trace p corresponds to a longitudinal reading equivalent to the length .the deviation in the orientation of the detector from the ideal geometry is a source of systematic effects .this misorientation may easily be described in terms of three rotations around the principal axes of the detector , corresponding to the lateral and longitudinal directions , as well as to the one which is perpendicular to its surface . due to the fact that some of our parameters show sensitivity to the last rotation ( i.e. , around the detector normal ), we will introduce one parameter ( ) to account for this degree of freedom ; concerning the two former rotations ( i.e. , around the lateral and longitudinal directions ) , our results show no sensitivity , at least up to the level at which these distortions are present in the vms devices which were calibrated .on the contrary , the inclusion of the parameter leads to an improved description of the longitudinal residuals ; we will address this point later on . 
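the projection formulas sketched above can be emulated with standard cone - beam geometry ; the function below , written in our own notation and sign conventions , projects a point given in the isocentre frame onto the flat panel for a given gantry angle , including a small angular offset of the source , lateral and longitudinal detector displacements , and an in - plane rotation of the detector . it is a stand - in for the ( stripped ) equations of the paper , not a reproduction of them .

```python
import numpy as np

def project_onto_detector(point, gantry_angle, sad, idd,
                          source_offset_angle=0.0, det_shift_lat=0.0,
                          det_shift_long=0.0, det_rotation=0.0):
    """Cone-beam projection of a 3D point (isocentre frame, mm) onto the
    flat-panel detector.  sad = source-isocentre distance, idd = isocentre-
    detector distance.  The distortion parameters (source offset angle,
    detector shifts, in-plane detector rotation) follow our own sign
    conventions; they are placeholders for the stripped equations above."""
    x, y, z = point
    # work in a frame co-rotating with the gantry: rotate the point by -gantry_angle
    c, s = np.cos(-gantry_angle), np.sin(-gantry_angle)
    xr, yr = c * x - s * y, s * x + c * y
    # source position, allowing a small angular offset on the rotation plane
    sx = sad * np.sin(source_offset_angle)
    sy = sad * np.cos(source_offset_angle)
    # intersect the ray source -> point with the detector plane y = -idd
    t = (sy + idd) / (sy - yr)
    u = sx + t * (xr - sx) - det_shift_lat     # lateral reading
    v = t * z - det_shift_long                 # longitudinal reading
    # in-plane rotation of the detector
    ce, se = np.cos(det_rotation), np.sin(det_rotation)
    return ce * u - se * v, se * u + ce * v

# A point 20 mm off-axis, gantry at 30 degrees, sad = 1000 mm, idd = 500 mm.
print(project_onto_detector((20.0, 0.0, 10.0), np.deg2rad(30.0), 1000.0, 500.0))
```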
denoting the coordinates of a trace on the detector as and in the nonrotated reference frame ,the coordinates and in the rotated reference frame are obtained via the tranformation equations ( [ eq : mprimeq])-([eq : detectorrotation ] ) establish a relationship between the coordinates , and of an arbitrary point and its corresponding trace on the detector ; the trace is obtained from ( [ eq : mprimeq ] ) and ( [ eq : qpprime ] ) after involving the detector displacement ( that is , the parameters and ) and the transformation ( [ eq : detectorrotation ] ) on the detector plane . by using also the transformations ( [ eq : relationbetweenfandt])-([eq : relationbetweenmandi ] ) , one may determine the projection of an arbitrary point onto the detector plane starting from its coordinates , and in the needle - phantom reference frame .a number of parameters have been introduced in three steps : five parameters ( , , , and ) are associated with the transformation of the coordinates from the needle - phantom to the machine reference frame ; another five parameters ( , , , and ) pertain to the geometry in the machine reference frame ; finally , the parameter describes the orientation of the detector .all these parameters , save for and , vanish in an ideal world , devoid of mechanical imperfection and inaccurate calibrations .the input data to the geometric calibration comprise one complete scan of the needle phantom .the traces corresponding to the end - points of the needles are identified in the acquired images ; these two coordinates ( for each needle , in each input image ) represent the ` experimental ' values . for a given set of parameter values , the projections of the needle end - points onto the detector planeare calculated using the chain of equations ( [ eq : relationbetweenfandt])-([eq : detectorrotation ] ) ; these calculated values comprise the ` predictions ' .an optimisation scheme is set , by varying the model parameters and seeking the best description of the input data ; a standard function is minimised .we will now touch upon four issues which we consider important in our work .an image of the needle phantom is shown in figure [ fig : needlephantomprojection ] .the detection of the traces of the needle end - points is done in two steps . to suppress noise , the average signal along the detector midline ( along the axis ) is created by averaging the contents of pixels in the direction , on either side of the midline . 
to remove the urethane background reliably , a fixed - width running windowis applied to this average - signal data , creating the difference of the integrated signal to the ( linear ) background ( defined by the pixel values at the window limits ) .when the window covers places where only the urethane housing is projected , the values of the transformed signal are nearly .as the window approaches the projected axis of a needle , the transformed signal first attains a positive ( local ) maximum , then a negative ( local ) minimum ( at the position corresponding to maximum attenuation ) ; this signature is easily identified via simple software cuts .the algorithm is very efficient in detecting the signals corresponding to the needles , save for gantry - angle values around where the projected signals ( of the needles ) overlap .after the signal modes ( the peaks in the transformed - signal spectrum ) are assigned to the needles , the projection of each needle is followed towards the couch ( along the negative direction in figure [ fig : needlephantomprojection ] ) .two signal levels are identified : one corresponding to the projection of the needle , one to the urethane background . the projection of the needle end - point is assumed to correspond to the position where the signal is equal to the geometric mean of these two values .two coordinates ( and ) are thus determined for each needle in each input image . if more or fewer than five signals are detected in the lateral direction , the entire image is rejected . if a needle axis was found , but the identification of the end - point failed , the information relating to the particular needle is removed from the contribution of the current image to the database . to decrease the correlations among the model parameters in the fits ,it was decided to fix the distance to the nominal value corresponding to the unit being calibrated .for one thing , the description of the data with variable hardly improves ; for another , if and are both treated as free parameters , the largeness of their correlation might result in cases where the fit ` drifts ' .due to the different sensitivity of the two directions ( and in figure [ fig : needlephantomprojection ] ) to the model parameters , the fit in the lateral direction ( coordinates ) is performed first ; this fit achieves the extraction of the values of the parameters , and ( associated with the relationship among the various coordinate systems ) , and , and ( associated with the geometry in the machine reference frame ) .the remaining parameters are determined from the fit to the coordinates , which is performed with the values of the aforementioned six parameters fixed from the -direction fit . 
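returning to the trace detection described above , a compact sketch of the two steps is given below : the first function applies a fixed - width running window to a row profile and subtracts the linear background defined by the window end - points ( needle axes then show up as a positive maximum followed by a negative minimum ) , and the second function places the end - point where the signal along a needle first reaches the geometric mean of the needle and background levels . window widths , levels and array conventions are illustrative assumptions .

```python
import numpy as np

def needle_axis_candidates(row_profile, window=15):
    """Step 1 (sketch): subtract a local linear background defined by the
    window end-points and return the transformed signal; needle axes show up
    as a positive local maximum followed by a negative local minimum."""
    prof = np.asarray(row_profile, dtype=float)
    out = np.zeros_like(prof)
    half = window // 2
    for i in range(half, len(prof) - half):
        lo, hi = prof[i - half], prof[i + half]
        background = np.linspace(lo, hi, window)          # linear background in the window
        out[i] = prof[i - half:i - half + window].sum() - background.sum()
    return out

def end_point_index(needle_profile, needle_level, background_level):
    """Step 2 (sketch): walking along the projected needle towards the couch,
    the end-point is taken where the signal first reaches the geometric mean
    of the needle and background levels."""
    threshold = np.sqrt(needle_level * background_level)
    profile = np.asarray(needle_profile, dtype=float)
    above = np.nonzero(profile >= threshold)[0]
    return int(above[0]) if above.size else None

# Toy usage: a profile that steps from the needle level (0.2) up to the
# urethane background level (0.8) at index 50.
toy = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])
print(end_point_index(toy, needle_level=0.2, background_level=0.8))
```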
for small ,the movement of a trace in the direction is always small , whereas in the direction it may be small or large , depending on the position of the point being projected ; this is one of the reasons for the larger uncertainties in the determination of and ( compared to and ) , the other one being their strong correlation .the minuit package of the cern library , see james ( 1994 ) , has been invariably used in the present paper .all output uncertainties contain the birge factor , which adjusts the output uncertainties for the goodness of each fit ; ndf denotes the number of degrees of freedom .the output shows no sensitivity to the values of the model parameters which are used in the first iteration of the optimisation scheme .the dependence of the parameters of our model on the gantry angle can only indirectly be assessed ; in our scheme , their values are assumed constant within one scan .if present at a significant level , a departure from constancy will manifest itself in the creation of large residual effects ; the point will be discussed later on . experience has shown that the presence of noise in the input data might lead to the extraction of erroneous signal modes in the transformed - signal spectrum ; this failure rate never exceeded the level .however , to safeguard against such cases , it was decided to precede the main optimisation by one performed on the data of each needle _ separately _ ; this approach is more efficient in detecting and excluding the outliers .since the results obtained with logarithmic forms of the objective function are considerably less sensitive to the presence of outliers ( compared with those of the standard optimisation ) , the minimisation of a simple logarithmic form has been implemented in this part of the analysis .the outliers are identified via a software cut corresponding to a effect for the normal distribution ; other values ( e.g. , and cuts ) have also been used , resulting in tiny differences in the numbers quoted here .the residuals are defined as the differences between the experimental values and the corresponding predictions obtained when using the optimal values of the model parameters .there are three reasons why the residuals are not identically zero : ( i ) the statistical fluctuation ( random noise ) which is always present in measurements , ( ii ) the inclusion of erroneous data in the input database and ( iii ) the use of an insufficient ( incomplete ) model in the description of the measurements .concerning potential sources of systematic effects in the present work , one may recall some assumptions on which our model is based ; for instance , it may be that ( within one scan ) the rotation axis is not constant , the movement of the x - ray source is irregular , the connecting arms are distorted under the weight of the gantry and/or of the detector , etc .additional effects may have been introduced by approximations assumed in the geometry of the systems ; for example , it could be that additional degrees of freedom are needed in describing the orientation of the detector . 
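the per - needle pre - fit with a logarithmic objective and a sigma cut mentioned above can be sketched as follows ; the sinusoidal trace model , the particular logarithmic form and the use of scipy 's nelder - mead minimizer are our choices for illustration , not the implementation used for the vms devices .

```python
import numpy as np
from scipy.optimize import minimize

def robust_prefit_and_cut(theta, u, n_sigma=3.0):
    """Per-needle pre-fit sketch: fit u(theta) ~ A*sin(theta + phi) + c by
    minimising a logarithmic objective (less outlier-sensitive than least
    squares), then flag points beyond an n-sigma residual cut.  The sinusoidal
    model and the particular log form are illustrative assumptions."""
    def model(p):
        A, phi, c = p
        return A * np.sin(theta + phi) + c

    def log_objective(p):
        r = u - model(p)
        return np.sum(np.log1p(r ** 2))

    p0 = np.array([0.5 * (u.max() - u.min()), 0.0, u.mean()])
    fit = minimize(log_objective, p0, method="Nelder-Mead")
    residuals = u - model(fit.x)
    keep = np.abs(residuals) <= n_sigma * residuals.std()
    return fit.x, keep          # keep[i] is False for flagged outliers

# Toy data: a clean sinusoid plus noise, with two artificial outliers.
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 200)
u = 12.0 * np.sin(theta + 0.3) + 1.5 + rng.normal(0.0, 0.05, theta.size)
u[[40, 120]] += 5.0
params, keep = robust_prefit_and_cut(theta, u)
print(params, np.flatnonzero(~keep))
```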
in any case ,given the smallness of these effects ( which will presently be shown ) , the question one has to answer is whether the complete description of the observations in terms of a model is called for , or it is adequate to make use of a simple model , grasp the main features of the geometry and attempt the empirical description of the residual effects ; a number of reasons compel us to adopt the latter strategy .examples of the residuals in the lateral and longitudinal directions are shown in figures [ fig : residuals ] . in both directions , when plotted versus the gantry angle separately for each needle , the residual distributions overlap ; therefore , two average numbers ( i.e. , one per direction ) may be used in each image . to demonstrate the reproducibility of the effect ,the results of five scans taken in identical conditions are shown .two conclusions can be drawn from figures [ fig : residuals ] : ( i ) the residual effects are small , the variation being of the order of one pixel size of the detector in the lateral and two in the longitudinal direction and ( ii ) the residual effects are systematic , hence they can be modelled. the description of the lateral ( ) and longitudinal ( ) residuals will be attempted by using the empirical formulae : and experimental data were acquired on january 31 , 2006 .the validity of the conclusions drawn from those data was confirmed by an analysis of additional measurements obtained on april 26 , 2006 ; the results of the analysis of the april data will not be given here .the ( adjustable ) distance was set to mm both in the as device and the obi unit . the nominal value in theas is mm ; was set to mm in the obi . on each machine , the following steps were taken .the voltage of the x - ray tube was set ( kv in the as , kv in the obi ) .the dark- and flood - field calibrations were performed , so that flat images may be obtained in open - field geometry .the needle phantom was placed close to the ideal geometry of figure [ fig : placementatisocentre ] .the acquisition settings ( x - ray - tube current and pulse width ) were chosen in such a way as to yield a good - quality signal on the detector .the frame rate was set to images per second on both machines , thus resulting in about images on the as device and on the obi unit ( the rotation of the clinac is slower ) . to investigate the short - term reproducibility of the results of the geometric calibration, five counterclockwise ( ccw ) and five clockwise ( cw ) scans were acquired in identical geometry ( except for the rotation of the gantry , no other movement of the system components was allowed ) on each of the two machines .both series started with a ccw scan ; successive scans were taken in opposite directions .it is noteworthy that , save for a trivial difference in the values of parameter , no other effect ( between ccw and cw scans ) is expected in systems behaving identically in the two rotation modes .we have already mentioned that takes account of two effects : ( i ) the miscalibration of the gantry angle and ( ii ) the movement of the source during the time interval from the retrieval of the gantry - angle value ( from the system ) to the instant associated with the ` average ' of the actual exposure ; as the gantry - angle value is obtained a few micro - seconds after the ` beam - on ' condition ( i.e. 
, at the beginning of the emission of the radiation pulse ) , the time delay of case ( ii ) above may be thought of as being equal to half the acquisition setting for the pulse width .since -msec pulses were used , the difference in the values between ccw and cw scans is expected to be about in the as device and in the obi unit .the average of the two values between ccw and cw scans provides an estimate of the miscalibration of the gantry angle .our results for the optimal values of the model parameters are shown in table [ tab:1-table ] , separately for ccw and cw scans .inspection of this table leads to the following conclusions . *the values of the model parameters come out reasonable .the values of the isocentre - detector distance are almost identical in ccw and cw scans and close to the expectation value .the values of the distortion angles are small . *the variation in the values of the model parameters within each rotation mode is , in most cases , smaller than the corresponding statistical uncertainty . *the and values come out different in the two rotation modes in the as device .we will prove , however , that these differences are the product of the strong correlation between these two parameters .concerning the obi data , the difference in the values may entirely be attributed to the delay time . * the largest difference between the two rotation modes in the as device relates to the description of the residuals in the longitudinal direction .the two amplitudes of equation ( [ eq : residualsy ] ) come out reasonably close , but the phase shifts do not .this discrepancy has been systematic for a long period of time and has been verified using other techniques ; it is _ not _ a result of strong correlations among the model parameters . due to the lack of obvious structure in the distribution of the residuals in the lateral direction in the as device( some structure was observed in part only of each scan ) , it was decided not to attempt a fit using equation ( [ eq : residualsx ] ) there .the difference in the values for the obi unit is statistically significant , but not so pronounced as in the case of the as device . *the remaining differences are small in absolute value ; for instance , the effect in , the largest one observed , corresponds to one - ninth of the pixel size of the detector in the as device and between one - sixth and one - seventh in the obi unit .notwithstanding the smallness of the mismatch , it is evident that the systems do not behave identically in the two rotation modes , suggesting the introduction of two sets of corrections to the scan data .the issue of the correlation among the model parameters has to be properly addressed .the parameters and are strongly correlated in the fit to the coordinates ( of the traces of the needle end - points ) , whereas and are correlated in the -direction fits ; the remaining elements of the hessian matrix are ( in absolute value ) smaller than , indicating insignificant correlations . due to the fact that the differences in the values of the parameters and in the two rotation modes are not statistically significant and , additionally ,these two parameters are not correlated with any other , we will investigate the correlation only between the parameters and . 
from the theoretical point of view, these two parameters are independent ; the distortion angle defines the offset of the source with respect to the detector in the machine reference frame , whereas , pertaining to the relationship between the isocentre and machine reference frames , applies equally to all machine components . in reality however , possibly coupled with the smallness of these distortion angles , a strong correlation between and has been observed . to investigate whether the differences between the two rotation modes may be attributed to the correlation between these two parameters , we analysed the measurements acquired in the as device further , after fixing at ; the results for the model parameters are given in table [ tab:2-table ] .we found that the description of the data with fixed is as good as when it is allowed to vary freely .the changes induced by fixing are entirely absorbed by parameter , the values of which turn out now to be in good agreement with the expectation , based on the movement of the source within the delay time ( ` beam - on ' condition to average exposure ) .the values of all other parameters are almost intact .our conclusion is that the correlation between the model parameters and affects only the values of these two parameters .a good measure of the mechanical stability of a system may be obtained from the unexplained variation in the data , that is , from whichever fluctuation survives after all model contributions have been deducted .as previously mentioned , in the case of the as runs , the unexplained variation in the lateral direction is represented by the entire fluctuation contained in these residuals , whereas in the longitudinal direction the empirical formula ( [ eq : residualsy ] ) is assumed to be part of the model .for the ccw scans , the rms of the unexplained variation in the lateral direction is equal to m ; for the cw scans , it is m. therefore , the ccw scans seem to be somewhat smoother in the as device .the two values of the rms of the unexplained variation in the longitudinal direction come out almost identical : m . as the residuals in both directionsare structured in the case of the obi unit , they have been fitted to via equations ( [ eq : residualsx ] ) and ( [ eq : residualsy ] ) .figures [ fig : residuals ] correspond to the five ccw scans in the obi unit . in case of the ccw scans ,the rms of the unexplained variation in the lateral direction is equal to m ; in the cw scans , it is m .the corresponding numbers , assuming no modelling of the lateral residuals , are and m ; therefore , the unexplained variation in the lateral direction drops significantly when involving the empirical modelling of these residuals . in both rotation modes ,the rms values of the unexplained variation in the longitudinal direction come out identical : m .comparing the results obtained in the as device with those extracted from the obi unit , one notices that the as scans are less noisy in the longitudinal direction .the values of the unexplained variation in the data in the longitudinal direction come out the same for ccw and cw scans , smaller than one - fifth of the pixel size of the detector in the as device , one - third in the obi unit . 
in the lateral direction ,the unexplained variation is smaller than one - sixth of the pixel size of the detector in both systems .it is interesting to note that , after equation ( [ eq : residualsx ] ) has been invoked in the description of the residuals in the lateral direction , the ccw scans on obi correspond to a significantly smaller value of the unexplained variation ; a similar effect is observed in the as device , where equation ( [ eq : residualsx ] ) was not used . in any case , the description of the data has been achieved at the sub - pixel level in both systems which were calibrated .the aim of the geometric calibration of cone - beam imaging and delivery systems is threefold : to yield the values of important parameters in relation to the geometry of the system , to provide an assessment of the deviation from the ideal world ( where the machine components move smoothly and rigidly in exact circular orbits as the gantry rotates ) and to enable the investigation of the long - term mechanical stability of the system .the method described here applies to devices where an x - ray source and a flat - panel detector ( facing each other ) move in circular orbits around the irradiated object .contrary to the bulky cylinders , which are generally needed in other works in order to extract the parameters of the geometry , we introduce a light needle phantom which is easy to manufacture .a model has been set up to describe the geometry and the mechanical imperfections of the system being calibrated .the model contains five parameters associated with the transformation of the coordinates from the needle - phantom to the machine reference frame ; another five parameters account for the geometry in the machine reference frame ; finally , one parameter is introduced to account for the deviation in the orientation of the detector from the ideal geometry . to avoid strong correlations among the important parameters of the model , the source - isocentre distanceis set to the nominal value of the device being calibrated .the input data comprise one complete scan of the needle phantom .the end - points of the needles are identified in the acquired images and comprise the ` experimental ' values. a robust optimisation scheme has been put forth to enable the extraction of the model parameters from the entirety of the input data . the application of the approach to two sets of five counterclockwise ( ccw ) and five clockwise ( cw ) scans , acquired in two imaging devices manufactured by varian medical systems , inc ., yielded consistent and reproducible results .the values of the model parameters come out reasonable .the description of the data has been achieved at the sub - pixel level .a number of differences have been seen between ccw and cw scans , suggesting that the devices do not behave identically in the two rotation modes ; we are not aware of other papers which have investigated and reported this effect . 
as an indispensable part of our project , we have introduced and implemented a calibration scheme in which different parameter sets apply to the scan data depending on the rotation mode used . we would like to draw attention to this effect , since the differences , albeit small , are systematic .

mitschke m and navab n 2003 recovering the x - ray projection geometry for three - dimensional tomographic reconstruction with additional sensors : attached camera versus external navigation system _ med . _ * 7 * 65 - 78

noo f , clackdoyle r , mennessier c , white t a and roney t j 2000 analytic method based on identification of ellipse parameters for scanner calibration in cone - beam tomography _ phys . med . biol . _ * 45 * 3489 - 508

siewerdsen j h , moseley d j , burch s , bisland s k , bogaards a , wilson b c and jaffray d a 2005 volume ct with a flat - panel detector on a mobile , isocentric c - arm : pre - clinical investigation in guidance of minimally invasive surgery _ med . phys . _ * 32 * 241 - 54

table caption : the optimal values of the model parameters in the vms acuity - simulator ( as ) and ` on - board imager ' ( obi ) units . the values quoted represent averages of five counterclockwise ( ccw ) and five clockwise ( cw ) scans . all lengths are in mm , all angles in degrees . the first uncertainties are systematic , the second statistical .

figure caption : left : the vms acuity - simulator device ( vms laboratory , baden , switzerland ) . the x - ray source is at a position corresponding to about . right : a schematic view of the various elements of a system on the rotation plane ( facing the gantry , as in the figure on the left ) . shown is the placement of the needle phantom in the ideal geometry ; the axis of the central needle coincides with the rotation axis . and denote the source - isocentre and isocentre - detector distances .

figure caption : the coordinate systems used in the present paper . a ) the isocentre ( subscript ) and the machine ( subscript ) reference frames ; they are related via a simple rotation ( angle ) . b ) the needle - phantom ( subscript ) and the auxiliary ( subscript ) reference frames ; they are related via a simple rotation ( angle ) . the auxiliary and isocentre reference frames are related via a simple translation involving the vector ( , , ) .

figure caption : machine reference frame : derivation of the projection of a point p ( on the rotation plane ) onto the detector ; the quantity represents the length ip . two deviations from the ideal geometry ( on the rotation plane ) have been introduced : the angle and the lateral displacement of the detector . the isocentre position is denoted by i .

figure caption : an example of an image of the needle phantom . the origin of the coordinate system shown coincides with the geometric centre of the detector ; ideally , is parallel to and to at . the shift of the axis of the central needle to the right is due to the nonzero values of the parameters and . a small tilt , hardly visible , is due to the nonzero values of the couch angle and of the detector orientation angle .

figure caption : the residuals in the lateral direction ( left ) and those in the longitudinal direction ( right ) versus the gantry angle for the vms ` on - board imager ' ( obi ) unit ; each point represents the average value over the needles whose end - points were successfully identified in the corresponding image . shown are the results of five counterclockwise scans taken in identical conditions ; these scans are represented by different symbols .
we propose a method to achieve the geometric calibration of cone - beam imaging and delivery systems in radiation therapy ; our approach applies to devices where an x - ray source and a flat - panel detector , facing each other , move in circular orbits around the irradiated object . in order to extract the parameters of the geometry from the data , we use a light needle phantom which is easy to manufacture . a model with ten free parameters ( spatial lengths and distortion angles ) has been put forth to describe the geometry and the mechanical imperfections of the units being calibrated ; a few additional parameters are introduced to account for residual effects ( small effects which lie beyond our model ) . the values of the model parameters are determined from one complete scan of the needle phantom via a robust optimisation scheme . the application of this method to two sets of five counterclockwise ( ccw ) and five clockwise ( cw ) scans yielded consistent and reproducible results . a number of differences have been observed between the ccw and cw scans , suggesting a dissimilar behaviour of the devices calibrated in the two rotation modes . the description of the geometry of the devices was achieved at the sub - pixel level . + _ keywords _ : geometric calibration +
the geometric brownian motion is the stochastic process described by the differential equation where is a wiener process and are constants describing the drift and the variance of the noise , respectively .the solution can be written as geometric brownian motion is used for modelling many phenomena in a variety of contexts .a prominent role is played in financial applications , where the distribution of returns can be approximated by a log - normal distribution , at least in specific regimes . for the computation of certain properties ,it is necessary to compute the integral of over a time interval = \int_0^t f(w_s , s ) ds.\ ] ] the evaluation of this functional is also involved in the solution of the geometric brownian motion with logistic corrections . in general , averages of the form ) \rangle = \sum_{k=0}^\infty a_k \langle f[w , t]^k\rangle \equiv \sum_{k=0}^\infty a_k r_k.\ ] ] are quite common .the evaluation of averages of powers of the integrated exponential brownian motion , then , is instrumental for the computation of these observables .detailed studies of this functional and of its powers are already available in the literature . in this paper , we will derive exact formulas for the evaluation of these integrals , under the assumption of the ito formulation for the wiener process .similar results have been given in .motivated by obtaining exact formulas for asian options , in yor obtained an exact formula in terms of polynomials for the following moments : using girsanov s theorem , one can derive a series of identities , in which the last is bougerol s formula where and in this work , we take a different route with the use of combinatorics .we prove a recurrence relation for the integrals involved at the -th order in terms of integrals at the -th order , and after resummation , we get an identity in terms of a determinant .the central quantity of interest in the present paper is given by the average over the wiener process : ^k \rangle .\label{eq : cumulant}\ ] ] if we expand eq .( [ eq : cumulant ] ) , we obtain ^k \rangle=\int_0^t d { \tilde t } _ k \cdots \int_0^t d { \tilde t } _ 1 \left\langle e^{\sum_{i=1}^k [ ( \mu-\frac{\sigma^2}{2}){\tilde t}_i+\sigma w_{{\tilde t}_i } ] } \right\rangle.\ ] ] we will use the following formula due to the properties of integrals with gaussian measure , and in which we assume that is of the ito type .this implies by using this property , we can now prove the following fact : [ lem1 ] for the average over the wiener process of ito type , the following formula holds true : by a direct application of eq .( [ eq : ito ] ) } d{\tilde t}_1 \cdotsd{\tilde t}_k \right\rangle \nonumber \\ & = & \int_0^t \cdots \int_0^t e^{\sum_{i=1}^k[(\mu-\frac{\sigma^2}{2 } ) { \tilde t}_i+ \frac{\sigma^2}{2 } \sum_j \text{min}({\tilde t}_i,{\tilde t}_j ) } dt_1 \cdots dt_k.\end{aligned}\ ] ] due to symmetry of integrand , we can order the integration variables as , obtaining } d{\tilde t}_1 \cdots d{\tilde t}_k , \end{aligned}\ ] ] whence : } d{\tilde t}_1 \cdotsd{\tilde t}_k \nonumber \\ & = & \gamma(k ) \int_0^t \int_0^{{\tilde t}_{k-1 } } \cdots \int_0^{{\tilde t}_{2 } } e^{\sum_{i=1}^k \mu { \tildet}_i+ \sigma^2 \sum_{i=1}^k ( k - i){\tilde t}_i ] } d{\tilde t}_1 \cdots d{\tilde t}_k.\end{aligned}\ ] ] after rearranging carefully the terms , we arrive at the final result : let us now expand further on eq .( [ eq : formula ] ) .it is convenient to first perform the rescaling .then , by defining , we obtain therefore , the computation of reduces to the 
computation of a very similar formula had been obtained in ; the main difference between the treatment made there and our relies on what follows .an important observation will enable us to evaluate these integrals exactly by means of combinatorics : it is possible in fact to prove the following result , which establishes a recursion relation among the . for the quantity , we have with .let us write with .integrating by parts , where finally , the identity gives the desired result .( [ eq : recursion ] ) suggests the evaluation of the averages by means of combinatorial considerations .indeed , the evaluation of the integral can proceed graphically , for any fixed order of the moment , as in fig .[ fig : rec ] .starting from the top and using the properties of the recurrence relation , at each order , we organize the recurrence on a binary tree . in fig .[ fig : rec ] , each left branch will pull out a factor , where is the obtained from the previous order .one has to consider the fact that however , in each right branch one pulls out a factor , and sums the two factors of s . to obtain the final formula , once the empty set has been reached , one multiplies the final term by all the factors in the branch .-th moments of the exponential integrated gaussian process to the case . ]we now give exact formulas for all the terms obtained from the recurrence . to lighten the notation, we define and . by iterating the second term of the recurrence ,we obtain therefore , [ lem2 ] let with .then we proceed the proof by induction on .we have , which holds for .we assume that the claim holds for and let us prove it for . by the recurrence relation , we have by induction hypothesis , we have which completes the induction step .if and , then eq . can be written as applying lemma [ lem2 ] , we obtain which leads to the following result . for all , is given by the above theorem gives hence , the coefficient of in is given by a cleaner formula can be obtained in terms of determinants . if we define for all , where we define .if and , then eq . can be written as using theorem 4.20 in ( see also ) , we obtain the following result : for all , is given by where and .this expression can be elucidated by means of an example .consider the case and . in this case, we have an exact formula for : \right ) = 1/6\,{\frac { \left ( { e^{\mu\,t}}-1 \right ) ^{3}}{{\mu}^{3 } { t}^{3}}},\ ] ] which provides the exact value obtained from the deterministic logistic equation .the previous results are general enough to hold also for averages of the form ^n\rangle = e^{(\mu-\frac{\sigma^2}{2})t } \langle e^{\sigma w_t}f[w , t]^n\rangle.\ ] ] it is easy to see that eq .( [ eq : det ] ) applies .we just substitute and multiply by a factor : the above expression will be used in the following section as an application to averages in the logistic stochastic differential equation .appendix [ sec : appendix ] contains the analytical values of the functions and for the cases .it is immediate that , when , we have this facts implies that eq .( [ eq : detgen ] ) is analytic in .[ fig : plotsn ] is a plot of for , . 
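as a numerical cross - check of these moments , the monte carlo sketch below estimates , in our own notation , the k - th moment of the time - integral of the geometric brownian motion with a riemann - sum discretisation , and compares it with the deterministic limit ( ( exp( mu t ) - 1 ) / mu )^k obtained for vanishing noise ; it is a sanity check of the limit , not an implementation of the determinant formula .

```python
import numpy as np

def mc_moment(k, mu, sigma, t, n_steps=1000, n_paths=5000, seed=0):
    """Monte Carlo estimate of E[ (int_0^t exp((mu - sigma^2/2) s + sigma W_s) ds)^k ]
    using a Riemann-sum discretisation of the integral (Ito convention)."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    w = np.cumsum(dw, axis=1)                       # W at the right end of each step
    s = np.arange(1, n_steps + 1) * dt
    integrand = np.exp((mu - 0.5 * sigma ** 2) * s + sigma * w)
    integral = integrand.sum(axis=1) * dt           # Riemann sum over [0, t]
    return float(np.mean(integral ** k))

mu, t, k = 1.0, 1.0, 3
deterministic = ((np.exp(mu * t) - 1.0) / mu) ** k  # sigma -> 0 limit of the k-th moment
print(mc_moment(k, mu, sigma=0.05, t=t), deterministic)
```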
and provided in appendix [ sec : appendix ] as a function of for , for .,title="fig : " ] and provided in appendix [ sec : appendix ] as a function of for , for .,title="fig : " ]as an application of the formula ( [ eq : detgen ] ) , we focus on the solution of the logistic stochastic differential equation motion , , \label{eq : diffeq}\ ] ] given by ( we follow ) we evaluate the average of the solution , and observe that it involves the moments of ( [ eq : detgen ] ) . in the limit , eq .( [ eq : pertth ] ) reduces to the average of the geometric brownian motion .we consider now truncations of the mean of at -th order , , and compare the truncated solution to the one obtained numerically . in fig .[ fig : simulations ] , we plot for obtained for , , by means of a stochastic euler method with , and the averages then obtained using monte carlo over samples .we observe that the higher the order of the approximation , the closer we are to the solution obtained by monte carlo . , , , and with , solved numerically using a stochastic euler method and averaged over 1000 simulations ( solid red ) , versus the analytical solution obtained at -th order , considering the perturbative parameter .we can observe that at higher order we obtain a solution closer to the one simulated . ]in this paper , we have presented new exact formulas for the moments of the integrated exponential brownian motion in terms of sums and determinants , and based on recent results obtained in .we described a simple graphical method to evaluate them , based on a recurrence relation .exact formulas were proved in in terms of polynomials . in this paper however , we have taken an alternative route based on combinatorics . after realizing that the mean can be evaluated exactly using the properties of gaussian integrals , and after observing that these moments feature a recurrence relation , we have shown that exact expressions can be obtained via a combinatorial argument .these exact expressions were then observe to be equivalent to evaluating the determinant of a specific linear operator which depends on the order of the moment . to complete the presentation ,we have applied the formulas to the exact solution of the logistic stochastic differential equation . there, the evaluation of the ensemble expectation values of certain observables can be carried out with our method , via taylor expansion . in particular, the comparison of the mean solution obtained by means of monte carlo simulations with our method shows that our formula permits to approximate to a higher precision some ensemble averages of properties of the solution of the stochastic differential equation .f. c. would like to thank the london institute for mathematical sciences for hospitality .s. s. is supported by the royal society and epsrc .99 c. gardiner , `` stochastic methods '' , springer - verlag , berlin ( 2009 ) m. yor , `` exponential functionals of brownian motion and related processes '' , springer - verlag , berlin ( 2001 ) h. matsumoto , m. yor , probability surveys , vol . 2 ( 2005 ) , 312 - 347 h. matsumoto , m. yor , probability surveys , vol . 2 ( 2005 ) , 348 - 384 m. yor , adv .( 1992 ) , 509 - 531 i. v. girsanov , theory of probability and its applications 5 ( 3 ) : 285 - 301 ( 1960 ) m. salmhofer , `` renormalization - an introduction '' , springer - verlag , berlin ( 1999 ) r. vein , p. dale , `` determinants and their applications in mathematical physics '' , springer - verlag , berlin ( 1999 ) m. 
Janjić, Determinants and recurrence sequences, J. Int. Seq. 15 (2012). P. E. Kloeden, E. Platen, Numerical Solution of Stochastic Differential Equations, Springer Science & Business Media (2013). F. Caravelli, L. Sindoni, F. Caccioli, C. Ududec, upcoming.
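As a companion to the application section above, the following Python sketch shows the kind of stochastic Euler (Euler–Maruyama) integration and Monte Carlo averaging described there for the logistic stochastic differential equation. The precise form of eq. (eq:diffeq) and the parameter values used by the authors did not survive extraction, so the drift X(μ - λX) with multiplicative noise σX dW and all numbers below are only an assumed, representative choice.

```python
import numpy as np

def euler_maruyama_logistic(mu, lam, sigma, x0, t, n_steps=1000,
                            n_paths=1000, seed=1):
    """Euler-Maruyama integration of dX = X (mu - lam X) dt + sigma X dW.

    A sketch of the numerical comparison described in the application section;
    the precise SDE coefficients used in the paper are an assumption here.
    Returns the time grid and the Monte Carlo mean of X over the paths.
    """
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    x = np.full(n_paths, x0, dtype=float)
    mean_path = np.empty(n_steps + 1)
    mean_path[0] = x0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + x * (mu - lam * x) * dt + sigma * x * dW
        mean_path[i + 1] = x.mean()
    return np.linspace(0.0, t, n_steps + 1), mean_path

if __name__ == "__main__":
    ts, mean_x = euler_maruyama_logistic(mu=1.0, lam=0.1, sigma=0.2,
                                         x0=0.1, t=5.0)
    # In the lam -> 0 limit the mean should approach the geometric BM average
    gbm_mean = 0.1 * np.exp(1.0 * ts)
    print("final MC mean:", mean_x[-1], " (lam -> 0 reference:", gbm_mean[-1], ")")
```

In the λ → 0 limit the Monte Carlo mean should approach the geometric Brownian motion average x_0 e^{μt}, which provides a quick sanity check of the integrator; the truncated-series comparison of the paper would replace the Monte Carlo mean with the expansion built from the exact moment formulas.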
We present new exact expressions, in terms of determinants, for a class of moments of the geometric Brownian motion, obtained using a recurrence relation and combinatorial arguments for the case of an Itô Wiener process. We then apply these exact formulas to compute averages of the solution of the logistic stochastic differential equation via a series expansion, and compare the results with the solution obtained via Monte Carlo simulation.
in practical implementations of the kohn - sham density functional theory the exchange - correlation ( xc ) energy is usually described by a suitable explicit functional of the electron density and the parameters characterizing the density inhomogeneity . a very promising step to improve the accuracy of density functional calculations relies on the use of orbital - dependent density functionals , in which the xc energy is expressed as an explicit functional of kohn - sham ( ks )orbitals . a rigorous approach to implement the orbital - dependent functionals withinthe ks formalism is the optimized effective potential ( oep ) method , where the xc potential is described by a local multiplicative term and the total energy functional is orbital - dependent . by virtue of the hohenberg - kohn theorems ,the oep solution is equivalent to the minimization of the total electronic energy with respect to the density . in the case of the exchange - only i. e. , the hartree - fock ( hf ) energy functional , the corresponding oep method ( xoep ) was first formulated by sharp and horton and numerically solved in real space for atoms by talman and shadwick .applications of real space xoep formalism to atoms , molecules and solids , thanks to the use of pseudopotentials , have been reported in the literature . a number of approximations to the xoep method , such as the method of krieger - li - iafrate , the local hf and the effective local potential methods have been developed .the real space resolution of the xoep method has only been efficiently applied to highly - symmetric systems , such as spherically symmetric atoms and diatomic molecules .application of the xoep formalism to polyatomic molecules requires its formulation in terms of basis sets suitable for molecular calculations .currently , there exist several formulations of the xoep method in terms of basis sets of local gaussian - type ( gto ) functions . the most popular implementation of the xoep formalism employs two different basis sets , one for the expansion of the ks orbitals and another one for representing the local multiplicative potential . within this approach, a special care must be taken when selecting the auxiliary basis set for the potential , thus leading to the concept of a balanced basis set firmly connected to the orbital basis set .alternatively , a set of the products of the occupied and virtual ks orbitals can be employed for the solution of the xoep equations for the local potential .the computational complexity of the xoep method in a basis set representation depends critically on the size of the orbital expansion basis set . for obtaining faithful solutions of the xoep equations , the orbital basis set should support the linear dependence in the space of the occupied - virtual orbital products . 
with the use of gto basis functions, this requirement leads to very large orbital basis sets with hundreds of basis functions even for small molecules .on the other hand , the use of slater - type ( sto ) basis functions give considerably more compact orbital basis sets which can be beneficial for the application of the xoep formalism , and a very efficient implementation of quantum chemical formalisms with slater - type basis functions has been achieved in the smiles suite of programs .it is the primary purpose of the present work to implement the xoep method within the smiles package and to analyze the advantages which can be obtained from the use of the sto basis sets , testing it for a small number of atoms and diatomic molecules , for which both numerical solutions and stos are available . in this workwe employ the xoep algorithm outlined in refs . [ ] and [ ] and the xoep equations , formulated in terms of the stos .the solution of the xoep equation will be carried out through the truncated singular value decomposition ( tsvd ) technique .it will be demonstrated that the use of the slater - type basis sets leads to considerable computational savings in every step of the selfconsistent procedure , without deteriorating the accuracy of the calculated xoep total and orbital energies .in this section the main features of the oep method will be outlined . in the oep method oneseeks for a local multiplicative potential such that its eigenfunctions ( atomic units will be used in this paper ) minimize the total energy functional given by & = & \sum_{\sigma } \sum_{i } \int \phi_{i\sigma}^ { * } ( { \mathbf r } ) \left ( -\frac{1}{2 } \nabla^{2 } \right ) \phi_{i\sigma } ( { \mathbf r } ) d{\mathbf r } \\ & + & \int \rho ( { \mathbf r } ) v_{ext } ( { \mathbf r } ) d{\mathbf r } + \frac{1}{2 } \int \rho ( { \mathbf r } ) \int \rho ( { \mathbf r } ' ) \frac{1}{|{\mathbf r}-{\mathbf r}'| } d{\mathbf r } d{\mathbf r } ' + e_{xc}\left [ \ { \phi_{i\sigma } \ } \right ] , \end{aligned}\ ] ] where runs over occupied orbitals and over the spin , being the electron density and ]is replaced with the hf exchange energy , = - \frac{1}{2 } \sum_{i , j } \sum_{\sigma } \int \frac { \phi_{i\sigma}^ { * } ( { \mathbf r } ) \phi_{j\sigma } ( { \mathbf r } ) \phi_{j\sigma}^ { * } ( { \mathbf r } ' ) \phi_{i\sigma } ( { \mathbf r}')}{|{\mathbf r}-{\mathbf r}|'}.\ ] ] the local multiplicative potential is splitted into the external potential ( i. e. the potential due to the nuclei ) , the coulomb potential of the electron cloud and the local exchange potential .the xoep equations in a basis set representation are obtained from the minimization of the total energy presented in eq .( [ etot ] ) with respect to the local potential , being this minimization equivalent to the minimization respect to the density , by virtue of the sham - schlter condition and the hohenberg - kohn theorems .an expansion of the exchange part of the local potential is assumed in terms of an appropiate set of functions , where labels the spin , labels all needed values of the basis index and are the expansion coefficients of the exchange potential in this basis . 
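Because parts of the displayed equations above did not survive extraction, the following LaTeX block restates, as a best-effort editorial reconstruction rather than a verbatim copy, the Hartree–Fock exchange energy and the expansion of the local exchange potential on which the XOEP equations are built; the indices i, j run over occupied spin-orbitals of spin σ and b_{tσ} are the expansion coefficients introduced in the text.

```latex
% Reconstructed from the surrounding text; the notation is partly an
% editorial assumption and should be checked against the original paper.
\begin{align}
  E_x\left[\{\phi_{i\sigma}\}\right]
    &= -\frac{1}{2}\sum_{\sigma}\sum_{i,j}
       \iint
       \frac{\phi_{i\sigma}^{*}(\mathbf r)\,\phi_{j\sigma}(\mathbf r)\,
             \phi_{j\sigma}^{*}(\mathbf r')\,\phi_{i\sigma}(\mathbf r')}
            {\lvert\mathbf r-\mathbf r'\rvert}\,
       d\mathbf r\,d\mathbf r', \\[4pt]
  v_{x\sigma}(\mathbf r)
    &= \sum_{t} b_{t\sigma}\,f_{t\sigma}(\mathbf r).
\end{align}
```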
following the literature the expansion functionsare conveniently defined as where are square integrable functions .note that this definition implies that the expansion functions are not necessarily square integrable ; this is not a problem as the local potential does not satisfy this condition .the requirement that the total energy to be stationary under the variations of the local potential , i. e. , is then equivalent to finding a minimum of the total xoep energy with respect to the set of the expansion coefficients of the the local potential , .if we work with a real orbital basis and if we introduce a scalar product of two functions and of our expansion space as the minimization of the xoep energy i. e. eq .( [ etot ] ) with defined as the fock exchange energy given in eq .( [ xenergy ] ) leads to the equation \phi_{a\sigma } ( { \mathbf r } ' ) = 0 .\ ] ] here the non - local potential , , is defined as } % { \delta \phi_{j \sigma}(\mathbf{r } ' ) } , \ ] ] being the solutions of eq .( [ eigen ] ) .using the matrix in eq .( [ eqpot ] ) we get a matrix equation equivalent to the minimization of the xoep energy , where is the vector of the expansion coefficients for the local exchange potential and is the projection in the chosen basis set of the non - local hf potential , . in this workour expansion basis set will be a scaled form of the occupied - virtual products , specifically thus , the elements of the vector are and the matrix elements reduce to note that with the basis set of occupied - virtual products it is not possible to get any term having a asymptotic decay .this is corrected by the addition to our basis set of the fermi - amaldi function , where is the number of electrons of the system .that procedure reproduces the fermi - amaldi potential for long distances and will make the xoep homo eigenenergies to be very close , but not equal , to those found using the hf method .this prescription is very different to the one adopted when the exchange potential is expanded in an auxiliary basis set .as the products of the occupied and virtual states are linearly dependent , the matrix appearing in eq .( [ nablaoep ] ) is singular and the equation can not be solved by inversion . following the argument given in refs . and , only the linearly independent functions ( or ) can be used in the expansion of the xoep , i. e. for a faithful solution of the xoep equation , eq .( [ nablaoep ] ) , linearly - independent orbital products must be employed .we will then apply the truncated singular value decomposition ( tsvd ) technique to separate the linear dependent occupied - virtual products and the independent ones , seeking for the linear independent set of products by diagonalization of the matrix . in order to fulfill this condition ,a threshold is chosen to discriminate the elements of the matrix that correspond to the linearly independent functions .perfectly invertible ) , then eq . ( [ nablaoep ] ) would have had a unique solution that would correspond to the lowest variational energy obtainable with the functional given in eq .( [ xenergy ] ) , that is the hf energy . ] in this way , the mapping between the density and density matrix becomes non - unique and a solution with the energy is obtained . 
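The TSVD step just described amounts to restricting the solution of the singular linear system of eq. (nablaoep) to the eigenvectors of the response matrix whose eigenvalues exceed the cutoff. The Python sketch below is illustrative only: the matrix and right-hand side are random stand-ins, whereas in practice they would be built from the occupied-virtual orbital products, and the function names and cutoff values are not taken from the paper.

```python
import numpy as np

def tsvd_solve(L, g, delta=1e-4):
    """Solve L b = g for a (near-)singular symmetric matrix L by truncated SVD.

    Only the eigenvectors of L whose eigenvalues exceed the cutoff `delta`
    are kept, i.e. the expansion of the potential is restricted to the
    linearly independent combinations of occupied-virtual products.
    Returns the coefficient vector b and the number of retained functions.
    """
    w, V = np.linalg.eigh(L)              # L is symmetric in the XOEP case
    keep = w > delta
    w_inv = np.zeros_like(w)
    w_inv[keep] = 1.0 / w[keep]
    b = V @ (w_inv * (V.T @ g))
    return b, int(keep.sum())

if __name__ == "__main__":
    # toy, rank-deficient example standing in for the XOEP response matrix
    rng = np.random.default_rng(0)
    A = rng.normal(size=(8, 3))
    L = A @ A.T                           # 8x8 matrix of rank 3 -> singular
    g = L @ rng.normal(size=8)            # consistent right-hand side
    for delta in (1e-2, 1e-6, 1e-10):
        b, kept = tsvd_solve(L, g, delta)
        print(f"delta={delta:.0e}: kept {kept} eigenvalues, "
              f"residual={np.linalg.norm(L @ b - g):.2e}")
```

Scanning the cutoff δ as in the final loop mirrors the stability analysis reported below: a looser cutoff keeps fewer eigenvectors, while a tighter one enlarges the expansion space until numerical noise makes the system effectively non-invertible.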
in our case ,the tsvd method requires the diagonalization of the matrix , in general a very large one .but as the stos represent the unoccupied orbitals in a much more efficient way than the gtos , when slater - type orbital basis sets are used the size of the matrix , and consequently the expansion space for the local potential in eq .( [ vxspan ] ) , is much smaller than when using gtos .this is the main point of this paper : when sto basis sets are used , the computational effort ( in memory size and in speed of the calculations ) for each self - consistent cycle is much lower , whereas the quality of the results is preserved .the algorithm outlined in the previous section was implemented in the smiles suite of programs , which employs the sto basis sets in quantum chemical calculations .we will compare the results of the xoep calculations obtained with the slater - type basis functions ( xoep - sto ) to both numerical exact solutions and xoep results obtained with the use of the gaussian - type basis functions ( xoep - gto ) .the latter results were obtained with the use of the molpro2008.1 code , where the xoep formalism employing gaussian - type basis sets was recently implemented by some of us . in order to use comparable basis sets ,the correlation - consistent basis sets of dunning ( cc - pvtz , cc - pvqz , and cc - pv5z ) were used in the xoep - gto calculations , and the sto of similar quality ( vb1 for cc - pvtz , vb2 for cc - pvqz and vb3 for cc - pv5z ) were selected for the xoep - sto calculations .these sto and gto basis sets yield the total hf energies in close agreement ( see tables [ totalbe ] [ totalco ] below ) .all the basis sets we have used ( xoep - sto and xoep - gto calculations ) were employed in their uncontracted form . due to the few exact numerical xoep solutions found in the literature , and to the small number of available stos for atomic and molecular computations , the calculations we present here were performed for the be and ne atoms and for the lih , bh , li and co molecules .the numerical solutions were given in makmal et al . and we have used the same internuclear distances ( in a.u . ) : for lih , for bh , for li and for co. this distance for the co molecule has also been used by heelmann et al . in the xoep - gto solution , and we will also compare our results with theirs in section [ sec : results ] .the dependence of the xoep - sto and the xoep - gto total energies on the size of the basis set and on the tsvd cutoff criterion for neglecting ( near ) zero eigenvalues of the matrix has been investigated .the results are collected in tables [ totalbe ] , [ totallih ] and [ totalco ] . in the calculations , the tsvd cutoff criterion was varied in the range .note that when is employed , the expansion space for the potential is very small because only a few eigenvalues of the matrix are greater than , yielding energies noticeably above the numeric xoep values .for that reason , we have not reported that energies in the tables .reducing the tsvd cutoff makes the expansion space bigger and , as a consequence , the total xoep - sto and xoep - gto energies decrease , approaching to the accurate numeric xoep values . for in the range ,the total xoep energies remain constant to within a fraction of mha .when the xoep - gto method is used with a very tight cutoff criteria ( ) , the iterative solution of the xoep equation ( [ nablaoep ] ) becomes unstable and the xoep total energy collapses towards the hf energy . 
as a matter of fact , the procedure breaks - down as the matrix becomes noninvertible .on the other hand , the xoep - sto implementation shows somewhat greater stability and begins to break down at smaller values of , i. e. when .this can be attributed to the fact that , with the use of sto functions , the expansion set of the potential is much smaller than with the more traditional gaussian - basis sets ( see below ) .the number of the eigenfunctions of the matrix used for the expansion of the potential is also given in tables [ totalbe ] , [ totallih ] and [ totalco ] , as well as the total dimension of the matrix .it is seen that the dimension of is considerably smaller in the xoep - sto method , and the dimension of the potential expansion space does not grow as fast as in the case of the xoep - gto method .so , the xoep - sto implementation gives a noticeably memory savings and a greater stability with respect to the cutoff criterion of the tsvd procedure .for all the systems in tables [ totalbe ] , [ totallih ] and [ totalco ] , the differences between the xoep - sto and the xoep - gto total energies are typically smaller than 1 mha .when large basis sets are used the total xoep - sto and xoep - gto energies approach the exact numeric values with an accuracy better than 100 ha .it is important to stress here that the xoep and hf total energies converge in a somewhat different way to respect to the basis set size ( remember that the basis sets we use in this paper were nor developed neither optimized for the xoep calculations ) so the differences oscillate .table [ totalxoep ] summarizes the results of calculations for be and ne atoms and a number of diatomic molecules studied by makmal et al . using the real space xoep method . for the sake of comparison , the xoep energies obtained by heelmann et al . with the use of balanced auxiliary basis sets for the potential are also shown when available .note that the xoep - sto energies are typically in a somewhat better agreement with the numerical values than the xoep - gto energies obtained with similar basis set ( the sto basis sets give results about mha below the energies obtained by heelmann et al . ) .table [ orbitalxoep ] collects the energies of the occupied orbitals obtained with the use of the xoep - sto and xoep - gto methods for the be atoms and the lih and li molecules .there is a good agreement between the numerical values of these orbital energies and the energies obtained with the two xoep methods .furthermore , the orbital energies from the xoep - sto and from the xoep - gto calculations agree with each other to within a few mha . besides the total and orbital xoep energies , we have also studied the energy decomposition into the kinetic , nuclear - electron attraction , electron - electron repulsion and the exchange energies .table [ edecomp ] presents the above components of the total xoep energy as obtained using the sto and gto basis sets for the be atom and the co molecule ( there are no numerical solutions available in this case for the xoep ) .[ fig : ne ] and [ fig : co ] show the results of the xoep potential for the ne atom and the co molecule along the main axis . 
for the ne atom , note the close agreement between our result and that obtained with the exact numerical calculation by kurth and pittalis , thus yielding a smooth potential that shows only small deviations from the numerical potential ; in any case , these xoep potentials are also very similar to that evaluated with a gaussian basis set using the procedure presented in refs . and . in the lower panel of fig .[ fig : co ] the results we have obtained using the vb3 basis set for the co molecule are compared with the calculation by heelmann et al . , using an auxiliary basis set within a gaussian representation ; the sto results show a good agreement with the xoep - gto potential . for the sake of completeness , in the upper panel we also present several other calculations for the internuclear region , using sto basis sets of different quality ( vb1 , vb2 and vb3 ) .note that our xoep potentials do not present any unphysical wiggle as those found by staronerov et al . and have a good agreement with both the numerical calculation and the xoep - gto solution by hessellmann et al . with the use of auxiliary basis set . in summary ,the previous results show that both xoep - sto and xoep - gto methods yield close results to the numerical exact ones for the total energies , the one - electron energies of the occupied orbitals and the xoep potentials .the implementation of the xoep formalism with slater - type basis functions has been developed .this procedure has been tested in a small group of closed shell atoms and diatomic molecules , for which both numerical xoep solutions and sto basis sets are available . when compared to the exact numerical solution of the xoep equations we have obtained very good results ; they are even a bit better than those given by the gaussian - type basis sets procedure . on the other hand , both xoep - sto and xoep - gto results obtained with the prescription proposed in refs . [ ] and [ ] give energies about mha below those obtained for xoep - gto by heelmann et al ., and thus they are closer to the exact results .the new method leads to a considerably more compact expansion space for the xoep local multiplicative potential , yielding noticeable savings in the computational effort to be done in each one of the cycles of the self - consistent procedure .yet another advantage of using the slate - type basis sets is that , within the tsvd algorithm , fewer eigenvalues of the ( near ) singular matrix need to be employed , which leads to an increased numeric stability of the xoep - sto method as compared to the xoep - gto algorithm . as a final remark, it is known that a more efficient xoep algorithm can be developed based on the use of the incomplete cholesky decomposition technique .the use of this technique would facilitate the application of the xoep - sto method to larger molecules .this implementation is currently in progress and will be reported elsewhere . on the other hand , this work is a first step to develop a local potential formalism for both exchange and correlation . 
Due to the smaller memory requirements of the STO scheme presented here, it can be used to study larger molecules than those that can be handled with the standard GTO approach. The implementation of the correlation part of the potential is under development.

The authors thank Prof. Jaime Fernández Rico and his research group for providing a copy of the SMILES suite of programs and for supporting this work. The authors acknowledge financial support from the Spanish Ministerio de Ciencia e Innovación through research project grant FIS2010-21282-C02-02.

Table [totalbe] caption: Total energies (hartree) for the Be atom using the HF and the XOEP methods with the STO and the GTO basis sets. The first column gives the threshold value used in the TSVD decomposition of the matrix in the XOEP method. For the STO and GTO XOEP results we include, in brackets, the total number of occupied-virtual products (first number) and the number of them actually used (second number) in each calculation. [Table body not recoverable from extraction.]

Figure [fig:co] caption (beginning lost in extraction; distances in a.u.): In the lower panel, the red line shows the results of this paper and the green one the XOEP-GTO results of Heßelmann et al. In the upper panel the region between the nuclei of the molecule is shown in more detail; the XOEP-GTO results are again in green, and XOEP-STO calculations with different STO basis sets are also plotted (VB1, blue dashed line; VB2, magenta dotted line; VB3, red thick line).
The exchange-only optimized effective potential (XOEP) method is implemented with Slater-type basis functions, as an alternative to the standard solution methods that offers some computational advantages. The procedure has been tested on a small set of closed-shell atoms and diatomic molecules for which numerical solutions are available. The results of this implementation are compared with the exact numerical solutions and with those obtained when the optimized effective potential equations are solved using Gaussian-type basis sets. The Slater-type basis approach leads to a more compact expansion space for representing the optimized effective potential and to considerable computational savings compared with both the numerical solution and the more traditional Gaussian-basis-set treatment.
quantum computation ( qc ) promises to solve factoring , database searches , and myriad optimization problems with dramatically greater efficiency than is possible in the classical computation model .the traditional approach to quantum computation is based on unitary quantum logic gates that control the interactions between well - defined quantum states ( aka qubits ) .this approach is inherently difficult to scale because of challenges controlling decoherence .one way quantum computing with continuous variables ( cv ) is potentially scalable due to the advantages provided by deterministic entanglement generation .a fault tolerance threshold for cv one way qc has also recently been discovered .recent experiments exhibited the simplicity of one - way qc in the discrete variable regime by demonstrating grover s algorithm with a four - qubit cluster state and by providing the first implementation of topological error correction with an eight photon cluster state .however , these discrete implementations are more difficult to scale due to the probabilistic generation of entanglement necessary to form the cluster state .recent demonstrations of cv cluster states have shown promise as the most scalable implementation of quantum resources for one way qc yet realized , with deterministic generation of entanglement that far outstrips the levels achieved by other systems .experimental demonstrations of cv cluster states have included 10000 time - multiplexed modes sequentially entangled ( though only a few of these modes existed at any given time ) and an experimental implementation of a cluster state in a frequency comb with more than 60 modes entangled and available simultaneously .a cluster state with 16 simultaneous bright modes was also generated in multi - mode opos above threshold using pulsed light sources and filtered local oscillators ( los ) to measure entanglement among different portions of signal and idler pulses .the elegance of these schemes , involving only beam splitter interactions and simple bipartite entanglement measurements , will likely see broad application in quantum optics and quantum information systems in the very near future . the first theoretical proposal for multi - party cv entanglement in a compact single - opo form utilized nonlinear crystals with concurrent nonlinearities , followed by experimental evidence that concurrent nonlinearities that could support such multipartite entanglement can be engineered , and finally , the use of concurrences to generate clusters . however , concurrent interactions in frequency and polarization are a means to an end , and several other methods have also been proposed , including nonlinear interactions that are concurrent in time or space .the main requirement is that the bloch - messiah reduction be applicable to the system ; that is the combined nonlinear optics and linear optics systems can be described by linear bogoliubov tranforms .the reduction states that any combination of multimode nonlinear interactions and interferometric interactions can be decomposed into an arrangement of single mode nonlinearities and linear optical elements , and it has been previously shown that this transform applies to cv cluster states derived from 2nd order nonlinearities . 
in this manuscript , we outline the generation of mutipartite entanglement and cv cluster states utilizing concurrent nonlinear interactions spread across the spatial domain .we detail an experimental scheme using concurrent phase insensitive amplifiers based on four - wave mixing ( fwm ) in alkali metal vapors in which this method can be applied .each concurrent amplifier operates on independent spatial modes . by choosing the los to measure specific entangled spatial modes ,a spatial frequency comb can be generated from the amplified spatial modes , which can be mixed via a linear transformation to generate a cluster state .a fault tolerance threshold for cv qc in cluster states has also recently been derived in terms of quantum correlations below the shot noise .the fwm geometry has been shown to reach 9 db of quantum noise reduction , and a cluster state with this level of squeezing would represent a promising step on the path to fault tolerant cv qc .the absence of an optical cavity in the fwm process allows for a compact and stable experiment that requires no phase locks , cavity length locks , or interferometric control , thereby enabling a potentially practical approach to quantum computation over cluster state resources .the `` optical spatial mode comb '' is analagous to the optical frequency comb .the requirements for generation of a comb involve an amplifier operating on a large continuum of modes followed by application of a filter to discretize the continuum ( roughly speaking , and keeping in mind that a single frequency with infinitesimal bandwidth corresponds to a discrete monochromatic mode of the electric field , a single `` discrete '' axial mode in an optical cavity may consist of multiple monochromatic modes ) . in the case of a multimode opo ( either pulsed or cw ) , the resonant modes of an optical cavity that overlap with the phasematching bandwidth of a nonlinear medium are enough to define a comb . in the more familar case involving lasers ,the pulse bandwidth may overlap the gain bandwidth and the axial modes of the optical cavity simultaneously as one means of making a comb .naturally , optical cavities are narrow band spatial filters , with often only a few modes overlapping with the gain bandwidth ( e.g. a tem mode resonant inside a laser cavity ) .however , it is possible to use optical cavities to discretize continuous spatial frequencies into up to three simultaneously concurrent nonlinear interactions over spatial modes . without the discretizing filter ,many nonlinear media emit into multiple spatial modes simultaneously .perhaps the most important aspect is that the lo used for detection must match the desired modes , whether they be in the form of a comb or not . here , we discuss using the input of a nonlinear amplifier to shape the los in such a way that an analogy to a frequency comb can be detected . consider a bare nonlinear medium , such as a bbo crystal , which emits photons into multiple spatial modes simultaneously .a biphoton spatial mode may be denoted by a pair of k - vectors , frequencies , polarizations , etc . 
in the limit of large gain, quantum correlations may be detected in any of the spatial modes as long as a lo matching them can be generated .the quickest route to generating the proper lo is to seed the same nonlinear process with a coherent field , treating it as an amplifier , to obtain a bright field whose phase front , frequency , polarization , etc ., match the signal field ( see fig .[ logen ] ) .thus , the problem of measuring entanglement over the spatial mode comb is essentially a problem of producing the proper los . herewe assume that the amplifier is phase - insensitive ; the requirement for generating los is that the pump and probe phases be set for amplification .a large assumption implicit in fig .[ logen ] is that an input field can be expressed in terms of an eigenfunction expansion of the amplifier modes .for instance , it would be difficult to input a probe field that has the same wavefront as an arbitrary schmidt spatial mode , but if the probe field can be expressed as an eignefunction expansion of the amplifier modes , then the generated lo will interfere with a summation of amplifier vacuum spatial modes .that is , if where corresponds to the spatial mode , then the overlap of the lo with some spatial distribution of output amplifier modes will be nonzero .this can be empirically verified by ensuring a large nonlinear gain in the probe field , and is essentially determined by phase matching conditions .this summation of spatial modes may be considered orthogonal to a large number of other summations of spatial modes by showing that the separate sums do not overlap in their image planes .one may produce multiple los as assuming that each mode is independently amplified by the nonlinear medium , the interaction hamiltonian for a single amplifier mode is ( assuming media ) where each corresponds to the k - vector for a given optical frequency ( signal , s , or idler , i ) .we may consider that a single amplifier mode consists of a set of three k - vectors that satisfies the phase matching condition ( in the limit of exact phase - matching ) .many amplifier modes may be concurrently phase matched , such that one has a set of independent interaction hamiltonians ( assuming equal gain for each mode ) : solving the equations of motion yields a set of biparite entangled modes ( for ) whose entanglement witnesses are the bipartite epr operators : where the pump field operater has been approximated as a classical number and subsumed in .we note that the form of eqs .[ h2 ] - [ eprops2 ] implies that the amplifier modes denoted by , are coincident with the squeezed eigenmodes of the system .that is , the interaction hamiltonian has been written with a squeezing parameter matrix , , where is a field mode operator vector and is an interaction matrix which defines a graph of mode pairs ( for the case of four modes for brevity ) : [ opmatrix ] the corresponding graph states for eight modes are given in fig . 
[ bigraph ] .note that while each k - vector pair represents an individual mode of the amplifier , the concept of the mode is immaterial until measured with a lo .one may produce two los , , per eq .[ los ] to detect the modes in fig .[ bigraph ] with two homodyne detectors ( one corresponding to each lo ) , which effectively transforms the graphs to a single bipartite graph .the converse is also true : one may produce arbitrary spatial mode combs by selecting appropriate los within the amplifier s phase matching bandwidth in order to detect individual k - vectors or combinations of k - vectors analogous to those in fig .[ bigraph ] .it was previously shown that epr states ( eigenstates of the operators in [ eprops1 ] and [ eprops2 ] ) , which are cluster states of the type in fig .[ bigraph ] , can be concatenated into a dual rail cluster state , or quantum wire . while the proposal and implementations have been in compact , single opos , it is illuminating to draw a more explicit equivalent picture via bloch - messiah reduction . unfolding the dual - pumped , single opo cavity in into a series of opos, one can show that _ identical _ two mode squeezers interfered on a train of beam splitters leads to a dual rail cluster state ( after applying a phase shift to every odd mode , or a redefinition of the quadrature operators ) , as shown in fig .[ concopo ] .( blue lines ) and ( orange lines).,width=240 ] this picture is analogous to that drawn over the frequency comb in optical cavities .the differences are in the degree of freedom used to represent the nodes on each graph . in the axial mode case ,each node is an axial mode of an optical cavity , separated by a free spectral range , where two or more pump fields serve to overlap optical frequencies with an additional degree of freedom ( such as polarization ) in order to allow for interference of distinct modes later ( the axial mode pairs that comprise the epr states would otherwise not interfere with one another due to their frequency separation ) . 
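Before contrasting this with the spatial-mode case below, the unfolded picture can be checked with a few lines of covariance-matrix algebra. The Python sketch below (an illustration under stated assumptions, with the vacuum quadrature variance normalized to 1, not the authors' code) verifies the elementary building block of fig. [concopo]: a two-mode squeezer acting on vacuum and, equivalently via the Bloch-Messiah reduction, two oppositely squeezed single-mode states mixed on a 50:50 beam splitter both yield EPR-witness variances of e^{-2r} below shot noise. Iterating the beam-splitter step along a chain of such pairs is what produces the dual-rail wire; only the two-mode block is checked here.

```python
import numpy as np

def symplectic_two_mode_squeezer(r):
    """Two-mode squeezing on quadratures (q1, p1, q2, p2); vacuum variance = 1."""
    c, s = np.cosh(r), np.sinh(r)
    return np.array([[c, 0, s, 0],
                     [0, c, 0, -s],
                     [s, 0, c, 0],
                     [0, -s, 0, c]])

def symplectic_beam_splitter_5050():
    """50:50 beam splitter acting on (q1, p1, q2, p2)."""
    t = 1.0 / np.sqrt(2.0)
    return np.array([[t, 0, t, 0],
                     [0, t, 0, t],
                     [t, 0, -t, 0],
                     [0, t, 0, -t]])

def epr_witness_variances(V):
    """Shot-noise-normalized variances of q1 - q2 and p1 + p2."""
    u_q = np.array([1, 0, -1, 0])
    u_p = np.array([0, 1, 0, 1])
    return (u_q @ V @ u_q) / 2.0, (u_p @ V @ u_p) / 2.0

if __name__ == "__main__":
    r = 1.0                                        # illustrative squeezing parameter
    # (a) a single two-mode squeezer acting on vacuum
    S = symplectic_two_mode_squeezer(r)
    V_epr = S @ S.T
    # (b) two single-mode squeezers (one in p, one in q) mixed on a 50:50 BS
    V_in = np.diag([np.exp(2*r), np.exp(-2*r),     # mode 1 squeezed in p
                    np.exp(-2*r), np.exp(2*r)])    # mode 2 squeezed in q
    B = symplectic_beam_splitter_5050()
    V_bs = B @ V_in @ B.T
    print("two-mode squeezer :", epr_witness_variances(V_epr))
    print("squeezers + BS    :", epr_witness_variances(V_bs))
    print("expected exp(-2r) :", np.exp(-2*r))
```

The same covariance-matrix bookkeeping extends directly to the multi-mode nullifiers of eqs. (wit1)-(wit2) once the beam-splitter network of the figure is specified.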
in the spatial mode case, the k - vector serves as a stand in for the cavity axial modes .since the superpositions of spatial modes are separable after diffraction limited propagation , an additional degree of freedom is not needed in order to interfere distinct modes at beam splitters and subsequently homodyne detectors .the entanglement witnesses that need to be measured to verify the bipartite correlations , between the first two modes for instance , necessary to form a cluster state are : ^{-\kappa t}\label{wit1 } \\\left[(p^{(2)}_1+p^{(3)}_1)-(p^{(4)}_2-p^{(5)}_2)\right]e^{-\kappa t } \label{wit2}\end{aligned}\ ] ] the subscripts in eqs .[ wit1]-[wit2 ] denote the frequency in fig .[ concopo ] while the superscripts denote the k - vector .here we outline a multispatial mode amplifier configuration that yields a dual rail cluster state over the optical spatial mode comb .the scheme uses a well known nonlinear amplifier : fwm in alkali vapor based on a double system near the d1 or d2 transition .the amplifier is based on a third order nonlinearity which is isotropic in homogeneous vapor .a finite interaction length quasiphasematches a set of k - vectors that fall within an angular acceptance bandwidth .the inverse transverse gain region sets the size of a `` spatial mode '' in the far field , otherwise known as a coherence area .these coherence areas can be considered independent spatial modes in the sense that they do not interfere in their image planes and can be detected with separate homodyne detectors as discussed in section ii .the amplifier can be made analogous to our hypothetical amplifier with equal gain for all modes by considering one set of modes along a circle within the angular acceptance bandwidth ( the gain is cylindrically symmetric about the gain region , given the gain region s own cylindrical symmetry ; see fig .[ cyl ] ) .the hamiltonian for the fields along the constant gain circle is given by the process is a third order nonlinearity supported by the double system shown in fig.[setup ] .the two pump fields can be taken as classical numbers , which reduces the system to an effective second order interaction with epr eigenstates as in eqs .[ eprops1 ] , [ eprops2 ] .system in rb vapor at the d1 line ( 795 nm ) .bottom : output modes for the probe ( red ) and conjugate ( blue ) fields for two pump positions within the vapor cell for a given input probe image .the black and green arrows connecting closed and open circles respectively denote nonlinear interactions between mode pairs that are correlated in an image reflected symmetrically about the pump axis .each line carries equal weight if the input probe image is coincident with a semicircular gain region symmetric about the pump axis in the far field . in that case, the graph states shown in the lower portion correspond to the output epr states for each mode pair.,width=288 ] using a spatial light modulator ( see fig . 
[ setup ] ) , multiple probe fields can be shaped from a single beam in the form of an image input to the amplifier .probe fields on one half of the output circle produce conjugate fields on the opposite half of the circle , as shown in the bottom half of fig .[ setup ] .if the squeezing parameters are equal between each mode pair , and if each `` dot '' in fig [ setup ] does not interfere with any other `` dot '' in its image plane , then the entanglement witnesses [ wit1]-[wit2 ] apply directly , where each `` dot '' is a single mode at one of two frequencies .the caveat in fig .[ setup ] is that certain practical parameters of the experiment must be taken into account .only a finite number of independent modes will fit within the constant gain region ( primarily dictated by the pump focus ) .for instance , it has been shown that about 200 modes can fit within the gain bandwidth for typical pump / probe beam sizes of 1000 m / 500 m respectively , with detunings of 1 ghz / 3036 mhz , and a relative incidence angle of 3 mrad .the modes were counted by observing the transverse conjugate beam size within the far field as a function of probe focus for a constant pump focus .a separate experiment measured the approximate size and physical arrangement of coherence areas within an image produced by the fwm medium as a function of pump / probe overlap .these empirical observations show that it is possible to amplify and separate a large number of coherence areas within a single fwm process . in order to scale the number of modes indefinitely ,multiple pumps and multiple probes can be used , as shown in fig .[ cascades ] . this can be accomplished by either splitting pump and probe fields and using multiple spaces in the same vapor cell , or by cascading multiple vapor cells .the latter option has an added advantage in that it will allow for cascaded gain regions which can lead to increased squeezing .its similarity with fig .[ concopo ] is also apparent .the former has the advantage that phase control is much simpler , as the pump and probe fields may be split as close to the cell as possible , ensuring that the optical paths are as close to one another as possible .each beam in fig .[ cascades ] may be taken to represent los for either a single or multiple k - vectors ( such as those produced with images as shown in fig . [ setup ] ) .the fields can be interfered with one another if the gain regions are identical .it has been shown empirically that the interference of whole images with their local oscillators yields high visibility even for complicated arrangements in which the los are amplified and attenuated in multiple optical paths , rather similar to the arrangement proposed in fig .[ combs ] finally , modes with like frequency must be interfered on beam splitters in order to concatenate their graphs .the los must undergo the same process in order to ensure good mode matching to the vacuum fields during the detection process , thus , both the vacuum and the los must be interfered on beam splitters amongst themselves .then , the los and vacuums are finally mixed during the detection stage .a practical question arises from these requirments .it is natural to consider the question of how many beam splitters would be required to interfere all coherence areas with each other . 
if the number of beam splitters is approximately equivalent to the number of modes , then the scheme is perhaps less practical than other schemes which may use a single beam splitter multiplexed in time or optical frequency .fortunately , we may likewise multiplex beamsplitters spatially by treating each spatial mode comb as an image .entire images , which contain multiple coherence areas , can then be interefered with one another .this concept was verified empirically in while the frequency - independent separability of coherence areas was empirically studied in .thus , a single beamsplitter is needed in order to concatenate two spatial mode combs and a second beam splitter is needed for the homodyne detection step .a single beam splitter can be used to concatenate multiple spatial mode combs by further multiplexing a single beam splitter spatially .the limit to the number of modes that can be interfered on a single beam splitter is essentially dictated by the number of spatial modes that can be imaged independently in each port .[ combs ] shows a schematic setup of the complete system .two pump fields generate four lo combs using an input image on two probe fields , where both input sets are derived from the same initial field ( generally , n pump fields with n probes may produce 2n los with a number of coherence areas , where the n input sets are all derived from the same initial field ) . because the frequency combs can be treated as individual images , they can interfere with one another on the same beam splitter with high visibility .that is , the `` dots '' in fig .[ setup ] can be treated as a single image and interfere with one another both during the concatenation step and at the homodyne detector where the los and vacuum fields interfere .afterwards , the coherence areas are finally separated and sent to individual photodiodes for balanced detection . in the limit of perfect lo - signal overlap ,the entanglement witnesses can be used to verify the final state as the dual rail cluster state shown in fig .[ concopo ] .the los undergo the same interferences and traverse the same optical paths as the two mode squeezed vacuum signals , meaning their wavefronts will interfere well with the signals .however , each independent vacuum mode is actually an expansion of amplifier modes .if all modes can be detected , then measuring the bipartite epr operators should show squeezing between mode pairs .in the limit that the los are perfectly matched to the signals , the noise on the position difference operator ( chosen for brevity , but other entanglement witnesses follow similarly ) approaches that of two individual entangled modes ( normalized to the shot noise ) : where is the detector efficiency and is the nonlinear gain .however , if the los are misaligned from the correct vacuum modes , the detected correlations are reduced : where is the total power contained in the lo , is the portion of the lo that wholly overlaps with the corresponding spatial modes in the vacuum field , is the detector efficiency , and is the portion of the lo power that overlaps partially with the vacuum spatial mode , where the overlap is determined by an effective reduced detector efficiency .a typical number for is 95%-96% in off the shelf components ( for which the fwm system based on rb is capable of 9db of quantum noise reduction ) , while custom photodiode coatings can achieve efficiencies of greater than 99% .also note the constraint on the total power : which implies that as .the third term in eq . 
[ lo_overlap ] is due to the excess noise contained within uncorrelated vacuum spatial modes that are accidentally detected in each lo .note that the nonlinear interaction spontaneously amplifies all vacuum amplifier modes , and the los are used to pick out each mode in a comb .if the los pick out neighboring modes , they will measure anticorrelations .this is a disadvantage relative to an opo , whose misaligned lo measurement would yeild only the first two terms in eq .[ lo_overlap ] because the optical cavity effectively filters out all of the vacuum modes that do not overlap with the lo .all of the entanglement witnesses required to measure the cluster state will suffer from this excess noise , meaning that the purity of the cluster state will be degraded by lo misalignment .very good alignment of the los with their signal modes will minimize this effect , as it requires .another consideration is the initial state purity before homodyne detection . ascluster states are resources for one - way quantum computers , which run quantum algorithms that formally start from initialized pure states , quantum computing with statistical mixtures is not necessarily defined .therefore it is important to consider whether a system that produces entanglement resources is also capable of producing pure states .we note that the fwm process is under certain conditions a quantum - noise - limited amplifier .this means that the noise added to a pure input state is the minimum amount required by quantum mechanics under these conditons , and the two - mode output is in a pure two - mode squeezed state .this is true if all other classical noise sources on the input can be minimized , meaning the probe is in a coherent state .this can be achieved by minimizing classical laser noise at the working analyzer frequency .finally , a physical system that can produce cluster states would not be useful for universal quantum computing without supporting a non gaussian operation .a cubic phase gate is one operation that would fulfill this requirement .we note that a large amount of quantum noise reduction is needed to achieve high cubic phase gate fidelity .the fwm system is potentially capable of the levels of squeezing needed for a successful application , but we relegate further discussion of non gaussian operations to a future study .in this manuscript we have drawn an analogy between the optical spatial mode comb and the optical frequency comb over which dual rail cluster states can be generated .we presented an example of an experimental system in which these cluster states can be generated and detected using images to synthesize appropriate los .the experimental system considered suffers a potential disadvantage with respect to the single opo implementations , in that excess noise is introduced for any lo misalignment .however , the system offers the potential advantages of simple phase control , ease of alignment , and scalability via the use of multiple gain regions .we would like to thank olivier pfister and pavel lougovski for useful discussions .this work was performed at oak ridge national laboratory , operated by ut - battelle for the u.s .department of energy under contract no .de - ac05 - 00or22725 .j. j acknowledge the support from the nsfc under grants no .11374104 and no .10974057 , the srfdp ( 20130076110011 ) , the shu guang project(11sg26 ) , the program for eastern scholar , and the ncet program(ncet-10 - 0383 ) . p.w. 
shor , proc .35nd annual symposium on foundations of computer science , ieee computer society press , washington , dc , 124 - 134 ( 1994 ) .l. k. grover , phys .79 * , 325 ( 1997 ) .r. raussendorf and h. j. briegel , phys .. lett . * 86 * , 5188 ( 2001 ) .s. lloyd and s. l. braunstein , phys .lett . * 82 * , 1784 ( 1999 ) .n. c. menicucci , phys .rev . lett . * 112 * , 120504 ( 2014 ) .p. walther _ et al ._ , nature * 434 * , 169 - 176 ( 2005 ) .yao _ et al ._ , nature * 482 * , 489 - 494 ( 2012 ) .j. zhang and s. l.braunstein , phys . rev .a * 73 * , 032318 ( 2006 ) .s. yokoyama _ et al ._ , nature photonics * 7 * , 982 - 986 ( 2013 ) . m. chen , n. c. menicucci , and o. pfister , phys . rev . lett .* 112 * , 120505 ( 2014 ) .j. roslund , r. m. de arajo , s. jiang , c. fabre and n. treps , nature photonics , * 8 * , 109 ( 2013 ). o. pfister , s. feng , g. jennings , r. pooser , and d. xie , phys .a * 70 * , 020302 ( 2004 ) .r. c. pooser and o. pfister , opt .* 30 * , 2635 ( 2005 ) .m. pysher , y. miwa , r. shahrokhshahi , r. bloomer , and o. pfister , phys .lett . * 107 * , 030505 ( 2011 ) .b. chalopin , f. scazza , c. fabre , and n. treps , phys .rev . a * 81 * , 061804(r ) ( 2010 ) . s. l. braunstein , phys .rev . a * 71 * , 055801 ( 2005 ) .p. van loock , c. weedbrook , and m. gu , phys .a * 76 * , 032321 ( 2007 ) ; n. c. menicucci , x. a. ma , and t. c. ralph , phys .lett . * 104 * , 250503 ( 2010 ) ; n. c. menicucci , physa * 83 * , 062314 ( 2011 ) .v. boyer , a. m. marino , r. c. pooser , and p. d. lett , science * 321 * , 544 ( 2008 ) . c. k. law and j. h. eberly , phys .lett . * 92 * , 127903 ( 2004 ) .m. d. reid , phys . rev . a * 40 * , 913 ( 1989 ) .r. s. bennink and r. w. boyd , phys .a * 66 * , 053815 ( 2002 ) . c. f. mccormick ,v. boyer , e. arimondo , and p. d. lett , opt. lett . * 32 * , 178 ( 2007 ) ; m. guo _ et al .rev . a * 89 * , 033813 ( 2014 ) .r. c. pooser , a. m. marino , v. boyer , k. m. jones , and p. d. lett , opt .* 17 * , 16723 ( 2009 ) .e. brambilla , a. gatti , m. bache , and l. a. lugiato , phys .a * 69 * , 023802 ( 2004 ) .o. jedrkiewicz _ et al .lett . * 93 * , 243601 ( 2004 ) .b. j. lawrie and r. c. pooser , optics express , * 21 * 7549 ( 2013 ) .z. qin , l. cao , h. wang , a. m. marino , w. zhang , and j. jing , phys .113 * , 023602 ( 2014 ) .
One-way quantum computing uses single-qubit projective measurements performed on a cluster state (a highly entangled state of multiple qubits) to enact quantum gates. The model is promising because of its potential scalability: the cluster state can be produced at the beginning of the computation and operated on over time. Continuous variables (CV) offer a further benefit in the form of deterministic entanglement generation, which can lead to robust cluster states and scalable quantum computation. Recent demonstrations of CV cluster states have made great strides toward scalability by using either time or frequency multiplexing in optical parametric oscillators (OPOs), both above and below threshold; these techniques rely on a combination of entangling operations and beam-splitter transformations. Here we show that an analogous transformation exists for amplifiers with Gaussian input states operating on multiple spatial modes. By judicious selection of local oscillators (LOs), the spatial-mode distribution becomes analogous to the optical frequency comb of axial modes in an OPO cavity. We outline an experimental system that generates cluster states across this spatial frequency comb and that can also scale the amount of quantum noise reduction to levels potentially larger than in other systems.
stochastic resonance ( sr ) is a resonant phenomenon triggered by noise which can be described as a noise - enhanced signal transmission that occurs in certain non - linear systems .it reveals a context where noise ceases to be a nuisance and is turned into a benefit . loosely speakingone says that a ( non - linear ) system exhibits sr whenever noise _ benefits _ the system .qualitatively , the signature of an sr benefit is an inverted - u curve behaviour of the physical variable of interest as a function of the noise strength .it can take place in systems where the noise helps detecting faint signals .for example , consider a _threshold detection _ of a binary - encoded analog signal such that the threshold is set higher than the two signal values .if there is no noise , then the detector does not recover any information about the encoded signals since they are sub - threshold , and the same occurs if there is too much noise because it will wash out the signal .thus , there is an optimal amount of noise that will result in maximum performance according to some measure such as signal - to - noise ratio , mutual information , or probability of success .recently the idea that noise can sometimes play a constructive role like in sr has started to penetrate the quantum information field too . in quantum communication , this possibility has been put forward in refs . and more recently in refs . . in thissetting it has been shown that information theoretic quantities may `` resonate '' at maximum value for a nonzero level of noise added intentionally .it is then important to determine general criteria for the occurrence of such phenomenon in quantum information protocols .continuous variable quantum systems are usually confined to gaussian states and processes , and sr effects are not expected in any linear processing of such systems . however , information is often available in digital ( discrete ) form , and therefore it must be subject to `` de - quantization '' at the input of a continuous gaussian channel and `` quantization '' at the output .these processes are usually involved in the conversion of digital to analog signals and vice versa . since these mappingsare few - to - many and many - to - few , they are inherently non - linear , and similar to the threshold detection described above. we can thus expect the occurrence of the sr effect in this case .the simplest model representing such a situation is one in which a binary variable is encoded into the positive and negative parts of a real continuous alphabet and subsequently decoded by a threshold detection . in some cases , one may not always have the freedom in choosing the threshold , and in such cases it becomes relevant to know that sr can take place .this may happen in _ homodyne detection _ if the square of the average signal times the overall detection efficiency ( which accounts for the detector s efficiency , the fraction of the field being measured , etc . )is below the vacuum noise strength .it is also the case in discrimination between lossy channels , where the unknown transmissivities together with a faint signal make it unlikely to optimally choose the threshold value . in this paper, we consider a bit encoded into squeezed - coherent states with different amplitudes that are subsequently sent through a gaussian quantum channel ( specifically , a lossy bosonic channel ) . 
at the output , the states are subjected to threshold measurement of their amplitude .in addition to such a setting , we consider one involving entanglement shared by a sender and receiver as well as one involving quantum channel discrimination .finally , we also consider the sr effect in quantum communication as well as in private communication .for all of these settings , we determine conditions for the occurrence of the sr effect .these appear as _ forbidden intervals _ for the threshold detection values .a `` forbidden interval '' ( or region ) is a range of threshold values for which the sr effect does not occur .we can illustrate this point by appealing again to the example of threshold detection of a binary - encoded analog signal .suppose that the signal values are or where .then if the threshold value is smaller in magnitude than the signal values , so that , the sr effect does not occur adding noise to the signal will only decrease the performance of the system . in the other case where , adding noise can only increase performance because the signals are indistinguishable when no noise is present . as we said before , adding too much noise will wash out the signals , so that there must be some optimal noise level in this latter case .our results extend those of refs . to other schemes .remarkably , in the private communication scheme , the width of the forbidden interval can vanish depending on whether the sender or the receiver adds the noise .this means that in the former case the noise is always beneficial .let us consider a lossy bosonic quantum channel with transmissivity .our aim is to evaluate the probability of successful decoding , considered as a performance measure , when sending classical information through such a channel .we consider an encoding of the following kind .let us suppose that the sender uses as input a displaced and squeezed vacuum .working in the heisenberg picture , the input variable of the communication setup is expressed by the operator : encoding a bit value , where is the position quadrature operator , is the displacement amplitude , and is the squeezing parameter . under the action of a lossy bosonic channel with transmissivity the input variable transforms as follows : where is the position quadrature operator of an environment mode ( assumed to be in the vacuum state for the sake of simplicity ) . at the receiver s end , let us consider the possibility of adding a random , gaussian - distributed displacement , with zero mean and variance , to the arriving state .then , the output observable becomes as follows : notice that we could just as well consider the addition of noise at the sender s end . in that case , the last term of ( [ noiseb ] ) would appear with a factor in front . upon measurement of the position quadrature operator , the following signal value results following ref . , we define a random variable summing up all noise terms : its probability density is the convolution of the probability densities of the random variables , and , these being independent of each other .moreover , they are distributed according to gaussian ( normal ) distribution , and so reads as where denotes convolution , and \ ] ] denotes the normal distribution ( as function of ) with mean and variance .notice that the noise term ( [ eq : all - noise1 ] ) does not depend on the encoded value and neither does its probability density . 
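since the three contributions to the total noise are independent zero - mean gaussians , their convolution is again a zero - mean gaussian whose width is set by the sum of the individual contributions . the explicit density referenced at the beginning of the next passage does not survive extraction ; a reconstructed statement , consistent with the arguments of the error functions in ( [ p00 ] ) and ( [ p11 ] ) below ( and up to the quadrature - normalization convention of the original ) , is

\[
p_{N}(n) \;=\; \frac{1}{\sqrt{\pi\left(1-\eta+\eta e^{-2r}+\sigma^{2}\right)}}\,
\exp\!\left[-\frac{n^{2}}{1-\eta+\eta e^{-2r}+\sigma^{2}}\right] ,
\qquad
\int_{-\infty}^{t} p_{N}(n)\,dn \;=\; \frac{1}{2}+\frac{1}{2}\,
\mathrm{erf}\!\left[\frac{t}{\sqrt{1-\eta+\eta e^{-2r}+\sigma^{2}}}\right] ,
\]

where the three terms under the square root are , respectively , the environment ( loss ) contribution , the squeezed - vacuum contribution of the input , and the classical noise added at the receiver .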
from ( [ eq : noise - density1 ] )we explicitly get the output signal ( [ signal1 ] ) can now be written as the receiver then thresholds the measurement result with a threshold to retrieve a random bit where and is the heaviside step function defined as if and if . to evaluate the probability of successful decoding , we compute the conditional probabilities p_{n}\left ( n\right ) \;dn\nonumber\\ & = & 1-p_{y|x}(1|0 ) \p_{y|x}(1|1 ) & = & \int_{-\infty}^{+\infty}h\left ( n + \sqrt{\eta } \ , \alpha_{q } - \theta\right ) p_{n}\left ( n\right ) \;dn\nonumber\\ & = & 1-p_{y|x}(0|1 ) \ , .\end{aligned}\ ] ] using ( [ px1 ] ) , we find , \label{p00}\\ p_{y|x}(1|1 ) & = & \frac{1}{2 } - \frac{1}{2 } \mathrm{erf}\left [ \frac { \theta - \sqrt{\eta } \ , \alpha_{q } } { \sqrt { 1 - \eta + \eta e^{-2r } + \sigma^{2 } } } \right ] , \label{p11}\end{aligned}\ ] ] where denotes the error function : this situation is identical to the one treated in ref . , and the forbidden interval can be determined in a simple way by looking at the probability of successful decoding ( note that others have also considered the probability - of - success , or error , criterion ) . the probability of success is defined as setting and , the probability of success is as follows : our goal is to study the dependence of the success probability on the noise variance .this leads us to the following proposition : [ the forbidden interval][prop : unassisted - comm ] the probability of success shows a non - monotonic behavior versus iff ] , where are the unique solutions of the equation , i.e. , equation ( [ eq - p1 ] ) .finally , we notice that equation ( [ eq - p1 ] ) implies , i.e. , ] or ] or ] , where are the two roots of the following equation : \ , , \ ] ] such that .we consider the probability of success as a function of . in order to have a non - monotonic behavior for , we must check for the presence of a local maximum . by solving obtain the following expression for the critical value of : } \,.\ ] ] the condition for non - monotonicity of the probability of success as a function of , , is verified iff ] .we consider as a function of . in order to have a non - monotonic behavior, we must check for the presence of a local maximum for . the condition yields the following expression for the critical value of we hence conclude that the average fidelity is a non - monotonic function of iff ] , where are the two roots of equation ( [ eq - p1 ] ) .second , suppose that the noise is added at the sender s end . in this case ,equation ( [ noiseb ] ) changes as follows : using a threshold decoding , we get the same expression as in equation ( [ pe1 ] ) for the success probability , upon replacing . from this , it is straightforward to calculate the mutual information . in turn , we assume that the eavesdropper has access to the conjugate mode at the output of the beam - splitter transformation , and so its variable is given by the maximum mutual information between the sender and eavesdropper is given in terms of the holevo information . since the average state corresponding to the variable ( [ evemode ] ) is non - gaussian , the analytical evaluation of its holevo information appears not to be possible. however , the monotonicity property of the holevo information under composition of quantum channels ensures that it has to be a monotonically decreasing function of the noise variance . 
as a consequence, we expect that its contribution to the private communication rate will increase with increasing value of the noise .indeed , a numerical analysis suggests that the private communication rate can exhibit a non - monotonic behavior as function of for all values of .examples of this behavior are shown in figure [ private ] .we are then led to formulate the following conjecture : [ forbprivate ] [ the forbidden interval ] the private communication rate shows a non - monotonic behavior as a function of for all if is added at the sender s end . ) , equation ( [ cp ] ) , versus , for the case of noise added by the sender .the values of the parameters are , and .curves from top to bottom correspond respectively to , , , , , .,scaledwidth=45.0% ]in conclusion , we have determined necessary and sufficient conditions for observing sr when transmitting classical , private , and quantum information over a lossy bosonic channel or when discriminating lossy channels .nonlinear coding and decoding by threshold mechanisms have been exploited together with the addition of gaussian noise .specifically , we have considered a bit encoded into coherent states with different amplitudes that are subsequently sent through a lossy bosonic channel and decoded at the output by threshold measurement of their amplitudes ( without and with the assistance of entanglement shared by sender and receiver ) .we have also considered discrimination of lossy bosonic channels with different loss parameters . in all these cases ,the performance is evaluated in terms of success probability .since the mutual information is a monotonic function of this probability , the same conclusions can be drawn in terms of mutual information .sr effects appear whenever the threshold lies outside of the different forbidden intervals that we have established .if it lies inside of a forbidden interval , then the sr effect does not occur .actually , absolute maxima of success probability are obtained when the threshold is set in the middle of the forbidden interval .generally speaking , sr effects are known to improve analog - to - digital conversion performance .in fact , if two distinct signals by continuous - to - binary conversion fall within the same interval they can no longer be distinguished .in such a situation the addition of a moderate amount of noise turns out to be useful as long as it shifts the signals apart to help in distinguishing them .while it is important to confirm this possibility also in the quantum framework , we have also shown that the same kind of effects may arise in a purely quantum framework .indeed , we have also considered the transmission of quantum information , represented by a qubit which is encoded into the state of a bosonic mode and then decoded according to a threshold mechanism .the found nonmonotonicity of the average channel fidelity and of the output entanglement ( quantified by the logarithmic negativity ) outside the forbidden interval , represents a clear signature of a purely quantum sr effect . in all the above mentioned cases it does not matter whether the sender or the receiver adds the noise . the exception occurs when the goal is to transmit private information . in fact , by considering achievable rates for private transmission over the lossy channel , we have pointed out that the forbidden interval can change drastically , depending on whether the receiver or the sender adds noise . 
in the former case , it is exactly the same as the case of sending classical ( non - private ) information . in the latter case , we conjecture that it vanishes , i.e. , the noise addition turns out to be always beneficial . this feature of the private communication rate can be interpreted as a consequence of the asymmetry between the legitimate receiver of the private information and the eavesdropper . in fact , while the legitimate receiver is restricted to threshold detection , we have allowed the eavesdropper to use more general detection schemes .
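as a concrete companion to the analysis summarized above , the following is a minimal numerical sketch ( not part of the original work ) of the threshold - decoding success probability built from the conditional probabilities ( [ p00 ] ) and ( [ p11 ] ) , assuming equal priors and the symmetric encoding with amplitudes - alpha and + alpha ; all parameter values are illustrative choices . sweeping the added - noise variance shows a monotonic decrease when the threshold sits between the two signal values ( inside the forbidden interval ) and a non - monotonic , sr - like maximum at nonzero noise when it lies well outside them :

import math

# Illustrative sketch (not from the original paper): success probability of
# threshold decoding for a bit encoded with amplitudes -alpha / +alpha, sent
# through a lossy channel of transmissivity eta, with squeezing r and Gaussian
# noise of variance sigma2 added before detection. Parameter values are
# arbitrary examples.
def p_success(theta, sigma2, alpha=1.0, eta=0.8, r=0.5):
    """P_succ = (p(0|0) + p(1|1)) / 2, using the erf expressions of the text."""
    s = math.sqrt(1.0 - eta + eta * math.exp(-2.0 * r) + sigma2)
    p00 = 0.5 + 0.5 * math.erf((theta + math.sqrt(eta) * alpha) / s)  # bit 0 kept
    p11 = 0.5 - 0.5 * math.erf((theta - math.sqrt(eta) * alpha) / s)  # bit 1 kept
    return 0.5 * (p00 + p11)

sigmas = [0.025 * k for k in range(401)]       # added-noise variance from 0 to 10
for theta in (0.0, 2.0):                       # threshold inside vs. outside
    best = max(sigmas, key=lambda s2: p_success(theta, s2))
    print(f"theta={theta:3.1f}: max P_succ={p_success(theta, best):.3f} "
          f"at sigma^2={best:.2f}")

with these example parameters , the maximum for theta = 0.0 sits at zero added noise ( noise only hurts ) , while for theta = 2.0 the maximum is reached at a strictly positive sigma^2 , i.e. , adding noise helps ; this is the signature of the forbidden - interval behavior described above .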
we determine conditions for the presence of stochastic resonance in a lossy bosonic channel with nonlinear threshold decoding . the stochastic resonance effect occurs if and only if the detection threshold lies outside of a forbidden interval . we show that it takes place in different settings : when transmitting classical messages through a lossy bosonic channel , when transmitting over an entanglement - assisted lossy bosonic channel , and when discriminating channels with different loss parameters . moreover , we consider a setting in which stochastic resonance occurs in the transmission of a qubit over a lossy bosonic channel with a particular encoding and decoding . in all cases , we assume the addition of gaussian noise to the signal and show that it does not matter who , between sender and receiver , introduces such noise . remarkably , different results are obtained when considering a setting for private communication . in this case , the symmetry between sender and receiver is broken and the forbidden interval may vanish , leading to the occurrence of stochastic resonance effects for any value of the detection threshold .
in contrast to existing cellular networks that focus primarily on increasing the mobile data rates , fifth - generation ( 5 g ) network is expected to efficiently support the so - called vertical industries ( e.g. , industrial , ehealth , and automotive vertical , among other ) . supporting automotive vertical , in the form of vehicle - to - anything ( v2x ) communication ,is seen as one of the most challenging tasks for 5 g networks , particularly in terms of end - to - end latency and reliability .one of the key distinguishing features of v2x communication is the high mobility , possibly on both sides of the link ( as in the case of v2v communication ) .another salient aspect of v2x communication is that it is often related to safety , either directly ( e.g. , emergency braking , intersection collision avoidance application , etc . ) or indirectly ( e.g. , platooning , lane - change maneuvers , etc . ) . to ensure that v2x communication systems can support the application requirements efficiently , a key initial step is realistically defining the channel characteristics for different environments ( e.g. , urban , highway , rural ) and v2x communication types ( e.g. , v2v , v2i , v2p ) . given the v2x application requirements , one of the most relevant aspects of channel modeling is the time evolution of v2v links and the related concept of spatial consistency .time and space evolution of los blockage refers to time - consistent realization of los blockage for v2v channels , based on the location of the transmitter ( tx ) , the receiver ( rx ) , and the composition of their surroundings .consistent los blockage realization is important in order to assign the appropriate path loss , shadowing , small - scale , and large - scale parameters over time and space .this implies that there should be a continuity in terms of vehicle locations over time ( i.e. , requiring a continuous vehicle movement ) and in terms of scatterer distribution around the vehicles .in the realm of the 3gpp channel modeling ( e.g. , scme model ) , which resort to independent `` drops '' of devices in space and time , time evolution and spatial consistency have not been taken into consideration .on the other hand , time evolution and spatial consistency are inherently supported by geometry - based deterministic models ( e.g. , , ) . however , for performing efficient simulations it is also beneficial to have a model that can generate consistent channel realizations for a chosen environment without the need for complex geometric modeling using location - specific map information . while v2x channel measurements and modeling have attracted considerable attention in recent years , a comprehensive geometry - based stochastic model for time and space evolution of los blockageis currently not available for v2v channels . in this paper, we attempt to fill this gap by performing a comprehensive study of time evolution of v2v links in urban and highway environments .we employ a markov chain comprised of three states : i ) los ; ii ) nlosb non - los due to static objects ( e.g. , buildings , trees , etc . ) ; and iii ) nlosv non - los due to mobile objects ( vehicles ) , since these three states were shown to have distinct path loss and shadowing parameters . to calculate los and transition probability statistics ,we perform geometry - based deterministic simulations of los blockage and extract the parameters from real cities and highways for los blockage and transition probabilities . 
based on the empirical results from five large cities ( downtown rome , new york , munich , tokyo , and london ) and a highway section ( 10 km , including two- and three - lane per direction as well as on - ramp traffic ), we perform curve fitting and obtain a set of distance - dependent polynomial equations for both los and transition probabilities . to test whether the parameters extracted from a set of cities can generalize to other cities , we compare the model parameterized on the five cities to downtown paris .the results show a high correlation for both los probabilities and transition probabilities .our results can be used to generate time- and space - consistent los blocking for v2v channels by assigning appropriate parameters for each of the three states in representative urban and highway environments .the most important contributions of this work are as follows .* we perform a systematic large - scale analysis of los blockage and transition probability using real locations of vehicles and other objects to arrive at a practical model for time evolution of v2v links in real urban and highway environments ; * we incorporate the moving environment ( vehicles ) into the calculations of los and transition probability and show that its impact is significant in both urban and highway environment ; we also show that , due to the low height of both tx and rx antennas , vehicle density has a strong impact on los and transition probabilities , thus requiring separate los blockage analysis for low , medium , and high density of vehicular traffic . *we extract parameters that can be used to perform efficient simulations of time - evolved v2v channels , without the need for complex geometry - based deterministic modeling .the rest of the paper is organized as follows .section [ sec : relwork ] discusses existing work on spatially consistent v2v channels .section [ sec : setup ] describes the model for time evolution of v2v links and the tools we use for estimating the los and transition probabilities .section [ sec : results ] shows the los and transition probability parameterization results , whereas section [ sec : discussion ] discusses model validation , usage , and comparison with a state of the art model .section [ sec : conclusions ] concludes the paper .while geometry - based deterministic models intrinsically support time evolution and spatial consistency , stochastic models , ( either geometry - based or not ) need additional mechanisms to support these features . while los probability has been extensively explored in the literature for cellular systems ( for example , see ) , the first steps to achieve time- and space - consistent models have been discussed only recently in the community . for v2x and other dual - mobility communication systems such as device - to - device ( d2d ) communication ,this is an even more challenging problem , since mobility on both sides of the link makes the modeling more complicated .markov chains are often used to efficiently model the time - dependent evolution of different systems .they have been used to characterize different aspects of wireless channels .gilbert - elliot burst noise channel , a two state hidden markov model , has been used to model the bit error probabilities for a wireless channel , .similarly , modeling time evolution of v2v links using markov chains has been previously explored in literature .dhoutaut et al . 
propose a shadowing - pattern model for v2v links that uses two - state markov chains to determine the level of shadowing caused by other vehicles on an urban street .similarly , abbas et al . use the same model to quantify the shadowing effect of obstructing vehicles on v2v links in car following scenario in highway and urban environments .both studies employ a small set of measurements to validate the model .wang et al . extend the markov model to include v2v link cross - correlation ( correlation of two geographically close v2v links ) .however , while the above papers employ markov chains to model time evolution of v2v , they are based on limited measurement or simulation data applicable to a single scenario . to the best of our knowledge, our work is the first to perform a large - scale study to realistically parameterize probability of being in each of the three states ( los , nlosv , nlosb ) as well as the transition probabilities in both urban and highway environments .the whitepaper by key 3gpp stakeholders noted that the missing components in 3gpp standardized channel models , both above and below 6 ghz , include : i ) spatially consistent los probability / existence ; ii ) blockage modeling ; and iii ) moving environment ( e.g. , cars ) .this paper provides a model that contributes to the solution of these shortcomings for v2v communication .there exist measurement studies that analyze the los blockage of v2v links using video recordings collected during the measurements ( e.g. , ,, ) . however , extracting a general model for los blockage evolution using the limited amount of data collected therein is not possible , since the data does not encompass the blockage behavior across an entire environment ( be it city or highway ) or for different traffic conditions .additionally , such studies may be imprecise , because they rely on estimation of los blockage inferred from videos . in this work ,we resort to a large - scale simulation study , which enables us to analyze v2v links between large number of vehicles in various environments and with different vehicle densities . to model the time evolution of v2v links, we apply a three - state discrete - time markov chain , as shown in fig .[ fig : markovchain ] .this is in contrast to state of art where two states ( los and nlos ) are usually assumed for v2v channels .the probability of the states and the transition probabilities are parameterized through extensive simulation . in order to obtain realistic v2vlos blockage behavior from simulations , vehicles need to move in a realistic way over real roads . for this reason , we used sumo ( simulation of urban mobility ) to generate vehicular mobility in cities and on highway . by using real roadways and traffic rules and employing vehicular traffic dynamics models such as car - following , sumo is capable of generating accurate vehicle positions , speeds , inter - vehicle distance , acceleration , overtaking and lane - changing maneuvers , etc .previous work ( e.g. , ) has shown that v2v channel characteristics are affected by the density of vehicular traffic . to that end, for each environment , we generated three traffic densities , qualitatively characterized as low , medium and high .the densities in urban areas were generated according to values proposed by ferreira et al . , whereas for highways we use traffic flow measurement values reported by wisitpongphan et al . . 
to allow the vehicular traffic to move into steady state, we run the mobility model for 300 seconds in each environment before performing the los blockage analysis .we used one second as time step for mobility simulation . to determine los and transition probabilities in urban environments, we extracted the roadway ( used for mobility simulation ) and object outlines ( buildings , trees , walls , etc . ) from openstreetmap for downtown areas of five cities : downtown rome , new york , munich , tokyo , and london .the reason for using multiple cities is to obtain a set of `` generic '' urban parameters , i.e. , those that can readily describe a typical urban environment in terms of los blockage , mobility patterns , road configurations , etc .the criteria for selecting the cities were practical : we selected those cities where the object outlines available in openstreetmap are as complete as possible .additionally , we analyzed whether the vehicular mobility simulation generated by sumo contained any anomalies ( e.g. , complete gridlocks due to incorrect intersection configurations or disconnected road segments ) .from the initially considered 10 large cities , we selected downtown rome , new york , munich , tokyo , and london ( along with paris , used for testing the model ) based on the described criteria . in cities , all simulated vehicles were personal cars .more details on the locations used can be found in table [ tab : environments ] .note that , while the areas used for modeling are of limited size , for high vehicle density there were up to a hundred thousand v2v communication pairs per time step in the urban area since , for a given vehicle , all other vehicles within 500 m radius were considered as possible v2v communication pairs . for medium and high vehicular density ,we perform the simulations for 300 seconds . for low density ,the number of vehicles and therefore v2v links was smaller , which required 1000-second simulation runs to obtain a representative sample size .we performed highway simulations with sumo on a 10 km stretch of a6 highway between braunsbach and wolpertshausen in bavaria , germany .the road alternates between two and three lanes per direction and has several entry / exit ramps .we simulated the traffic so that 80% of the vehicles enter and exit at the each end of the highway , whereas remaining 20% enter and exit over two ramps near the beginning and the end of the stretch .since vehicle speeds and dimensions affect the los blockage results , 10% of the simulated vehicles were trucks and remaining were personal cars . some buildings and forestexist on each side of the highway ; they can block los for larger tx - rx distances , particularly for the vehicles moving on the entry / exit ramps . for medium and high density ,we perform the simulations for 1000 seconds . 
for low density ,we perform simulations for 2000 seconds .note that different highways will have different configurations in terms of lane numbers , surrounding objects , etc , which will affect los blockage .it is difficult to encompass all different variations , therefore we focus on the most common highway configuration .l c c c + * location * & & * size * + & * lower left * & * upper right * & + * rome * & 41.8896,12.4751 & 41.9011,12.5016 & 2.8 km + * new york * & 40.7274,-74.0068 & 40.7397,-73.9764 & 3.5 km + * munich * & 48.1301,11.5553 & 48.1431,11.5928 & 4 km + * tokyo * & 35.6540,139.7256 & 35.6673,139.7561 & 4 km + * london * & 51.5105,-0.1166 & 51.5230,-0.0794 & 3.6 km + * a6 highway * & 49.1715,9.7441 & 49.1846,9.8630 & 10 km + + * paris * & 48.8489,2.3161 & 48.8575,2.3439 & 2 km + [ + 5pt ] & + * environment * & * low * & * medium * & * high * + * urban * & 24 veh / km & 120 veh / km & 333 veh / km + * highway * & 500 veh / hr / dir . & 1500 veh / hr / dir . &3000 veh / hr / dir .+ we start our analysis by acknowledging that v2v links can have their los blocked by two distinct object types , static and mobile , which have distinct impact on v2v links .furthermore , static objects such as buildings , trees , etc . , typically block the los for v2v links between vehicles that are on different roads ( e.g. , perpendicular roads joined by intersections ) . on the other hand , mobile objects ( predominantly other vehicles )block the los over the surface of the road .we use the los blockage classification provided by gemv , a freely available , geometry - based v2x propagation modeling tool .gemv uses the outlines of vehicles , buildings , and foliage to distinguish between los , nlosv , and nlosb links . in order to do so, gemv performs geometry - based deterministic los blockage analysis using the outlines of buildings and foliage from openstreetmap and vehicular mobility traces from sumo .provided that realistic mobility model is used and the openstreetmap database contains majority of buildings and foliage , gemv produces highly realistic los blockage results . in simulations ,we place the antenna in the middle of the roof on vehicles and set its height to 10 cm . for a vehicle to block the v2v link ( i.e. , for nlosv blockage to occur ), it needs to be within 60% of first fresnel zone between the antennas on the communicating vehicles .when the los is blocked by both static and mobile objects , we classify this as nlosb state , because in vast majority of cases static objects such as buildings are the dominant blocking factor . 
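the 60% fresnel - zone criterion used for the nlosv classification can be sketched with a textbook geometric test ( a simplified illustration , not the gemv implementation ; antenna and obstacle heights below are made - up example values ) :

import math

C = 3.0e8  # speed of light [m/s]

def first_fresnel_radius(d1, d2, freq_hz):
    """Radius of the first Fresnel zone [m] at distances d1 and d2 from TX and RX."""
    lam = C / freq_hz
    return math.sqrt(lam * d1 * d2 / (d1 + d2))

def is_nlosv(h_tx, h_rx, d_txrx, h_obs, d_obs, freq_hz=2.0e9, fraction=0.6):
    """True if an obstacle of height h_obs at distance d_obs from TX intrudes
    into the given fraction of the first Fresnel zone of a link of length d_txrx."""
    d2 = d_txrx - d_obs
    # height of the direct TX-RX ray above ground at the obstacle location
    h_los = h_tx + (h_rx - h_tx) * d_obs / d_txrx
    clearance = fraction * first_fresnel_radius(d_obs, d2, freq_hz)
    return h_obs > h_los - clearance

# Example: antennas 1.5 m above ground, 100 m apart, a 1.5 m-high car halfway.
print(is_nlosv(h_tx=1.5, h_rx=1.5, d_txrx=100.0, h_obs=1.5, d_obs=50.0))  # True

at 2 ghz this test flags the mid - way vehicle as a blocker , and , as the next paragraph notes , repeating it between 2 ghz and 6 ghz changes the 60% clearance by at most about a meter at 500 m separation .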
bin .when determining los , gemv takes into account the electromagnetic interpretation of los , wherein it calculates whether 60% of first fresnel ellipsoid is free of any obstructions , in order to determine whether los is blocked .that said , we evaluated the los blockage results for frequencies between 2 ghz and 6 ghz and the results do not differ significantly .the reason is that the tx - rx distances are up to 500 meters , and the resulting difference between frequencies is small ; for example , the largest difference between 60% of first fresnel zone between 2 ghz and 6 ghz at 500 m tx - rx distance is 1 meter .the results showed in the paper refer to 2 ghz carrier frequency .[ fig : urbanlos ] [ fig : highwaylos ] [ fig : urbantransprob ] [ fig : highwaytransprob ] in order to obtain the transition probability in the three - state markov chain shown in fig .[ fig : markovchain ] , we separate the data based on tx - rx distance into 10-meter distance bins and generate the transition probabilities for each of them . for any single step transition between states and , we have the following transition probability from measurement time step to and for a given distance bin : in effect , this creates a set of distance - dependent transition probabilities with one transition probability matrix for each tx - rx distance bin .in addition to los blockage , gemv provides information about transitions between the los states .we use the results from gemv to calculate the transition probability using a frequency - based approach : for each los state ( where ) , we count the transition from to state within a distance bin ( note that is possible , since state can transition in itself ) .then , we divide this number by the total number of transitions from to all possible states ( i.e. , total number of occurrences of ) to obtain the empirical transition probability : where is the indicator of event .we use the value of time interval equal to one second as a good trade - off between the ability to train the model ( shorter time interval would require proportionally more data to obtain representative results ) and precision ( within one second , in vast majority of cases , there will be at most one transition between los states , whereas in a longer period this might not be the case ) .[ fig : urbanlos ] shows the los probabilities for each of the three states in urban , whereas fig .[ fig : highwaylos ] shows the results for highway .results for different vehicle densities show an intuitively expected result : the higher the vehicle density , the more probable the nlosv state , since in low density scenarios there simply are not that many vehicles around to block the los . for high density urban scenario ,nlosv probability reaches 50% when tx and rx are between 30 and 70 meters apart . in terms of the relationship between the probabilities of all three states , note that nlosb remains unaffected by the increased vehicle density .the increase in nlosv probability is at the expense of los probability .while the nlosb in urban is most often caused by buildings lining the roads , in highway scenarios , nlosb is caused mainly by the forest surrounding the highway , with occasional building creating nlosb for on- and off - ramp traffic . figs .[ fig : urbantransprob ] and [ fig : highwaytransprob ] shows the state transition probabilities for urban and highway environments , respectively , along with the curve fits according to equations presented in table [ tab : eqnstrans ] . 
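the frequency - based estimator just described can be sketched as follows ( a schematic re - implementation , not the authors' code ; the data layout , a list of ( tx - rx distance , state at time t , state at time t+1 ) samples taken at one - second steps , is an assumption ) :

from collections import defaultdict

STATES = ("los", "nlosv", "nlosb")
BIN_M = 10.0  # 10-meter distance bins, as in the text

def transition_matrices(samples):
    """samples: iterable of (distance_m, state_t, state_t_plus_1) tuples."""
    counts = defaultdict(lambda: {i: {j: 0 for j in STATES} for i in STATES})
    for dist, s_from, s_to in samples:
        b = int(dist // BIN_M)                 # index of the distance bin
        counts[b][s_from][s_to] += 1
    probs = {}
    for b, mat in counts.items():
        probs[b] = {}
        for i in STATES:
            total = sum(mat[i].values())       # all transitions leaving state i
            probs[b][i] = {j: (mat[i][j] / total if total else 0.0)
                           for j in STATES}
    return probs

# Toy example with a handful of made-up samples (illustration only).
demo = [(52.3, "los", "los"), (55.1, "los", "nlosv"), (57.8, "los", "los"),
        (120.4, "nlosv", "nlosb"), (123.0, "nlosv", "nlosv")]
print(transition_matrices(demo)[5]["los"])     # transitions out of los, 50-60 m bin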
across densities in urban environment ( fig .[ fig : urbantransprob ] ) , los transition probability to itself reduces with increasing distance and corresponding transition from los to nlosv and nlosb increases .the remaining transition probabilities are comparatively independent of distance , with the exception of the expected increase from nlosb to itself with increasing distance .transition probabilities in highway are more dynamic .the most interesting is the relationship between nlosv - to - nlosv and nlosv - to - los transitions : with distance increasing from zero to 100 meters , the nlosv - to - nlosv rapidly increases , with symmetric decrease in nlosv - to - los transitions .this is a result of car - following behavior of vehicles , where vehicles are following each other at the equivalent of 1-second gap , resulting in 30 to 50-meter distance , thus making nlosv - to - nlosv transition increasingly more likely as distance increases . above 100 meters ,nlosv - to - nlosb transitions also come into play , limiting the nlosv - to - nlosv increase .additionally , between 60 and 100 meters , there is a visible spike in nlosv - to - nlosv transitions , accompanied by a dip in nlosv - to - los for all vehicle densities .this is again a result of the car - following model . in cases when there are three or more vehicles following each other in the same lane, the middle vehicle is often blocking the los between the vehicle in front and behind it .since the car - following is bound to continue for some time , it results is increased nlosv probability between front and rear vehicle , which are between 60 and 100 meters apart .note that the resulting transition probability for low density is somewhat variable because of the smaller number of data points available . to provide a tractable model for generation of time - evolved v2v links , we perform curve fitting for los and transition probabilities .table [ tab : eqnslos ] shows the equations of resulting los probability curves , whereas table [ tab : eqnstrans ] shows the equations for transition probabilities . for los probabilities in highway , the equations are in the form of second degree polynomials : , where is the tx - rx distance . in urban environment ,exponential and log - normal distributions are a better fit for los probabilities .similarly , curve fits for transition probabilities predominantly take the form of second degree polynomials , with the exception of transitions involving nlosb state in highway environment , which are better approximated by a log - normal distribution . for more complex transition probability curves in highway ,we perform piece - wise curve fitting , since there are notable discontinuities , particularly in case of transitions involving nlosv ( e.g. , nlosv - to - nlosv transition : fig .[ fig : highwaytransprob ] ) , due to the effect of in - lane car - following on los blocking by vehicles .we list los probability equations for two out of three curves ; the third one can be obtained by subtracting from one the remaining two terms . the same applies for transition probabilities : we show two outgoing probabilities from each state .note that , due to the imperfections of the curve fitting process , in certain scenarios at low and high distances , the probability equations will result in a probability above one .for this reason , we introduce a ceiling of one for each probability .similarly , at very low distances ( e.g. 
, below 10 meters ) , the summation of the los probability equations ( and similarly , summation of outgoing transition probability equations ) can amount to more than one . in these situations ,we advise to use equation for one of the three probabilities , forcing the lowest of the three probabilities to zero , and subtracting the first probability from one to obtain the third probability .l c c c c c c c c c + * density * & & & + & * a * & * b * & * c * & * a * & * b * & * c * & * a * & * b * & * c * + + * los * & & -0.0015 & 1 & & -0.0025 & 1 & & -0.003 & 1 + * nlosb * & & 0.00059 & 0.0017 & & 0.00061 & 0.015 & & 0.00067 & 0 + + + * density * & & & + + * los * & & & + * nlosv * & & & + l c c c c c c c c c + * density * & & & + + * los los * & & & + * los nlosb * & & & + + * nlosb los * & & & + * nlosb nlosb * & & & + + * nlosv los * & + & & & + & & & + * nlosv nlosb * & + & & & + & & & + + + * density * & & & + & * a * & * b * & * c * & * a * & * b * & * c * & * a * & * b * & * c * + * los los * & & & & & & & & & + * los nlosb * & & & -0.012 & & & & & & 0.025 + + * nlosb los * & & & & & & & & & + * nlosb nlosb * & & & 0.83 & & & 0.86 & & & 0.89 + + * nlosv los * & & & & & & & & & + * nlosv nlosb * & & & -0.0059 & & & -0.0046 & & & 0.0058 +to test how well the proposed model generalizes to other cities ( i.e. , those on which it was not trained ) , we compared the model trained on the data from five cities shown in table [ tab : environments ] and a data set obtained in downtown paris ( also described in table [ tab : environments ] ) .table [ tab : corrcoeff ] shows the results of comparison in terms of pearson correlation coefficient for both the los probabilities and transition probabilities .the correlation is above 0.95 for all three los probabilities .similarly , the correlation coefficients for transition probabilities are high ( average correlation equals 0.84 ) , with only the nlosv below 0.8 .the most likely reason for nlosv discrepancy is that the streets in paris are significantly narrower and with fewer lanes per direction than in the five cities that the model was trained on . the narrower street configuration results in higher probability of continued los blockage by vehicles .overall , however , the correlation results show that the model fits well to urban environments outside those that it was trained on and can thus be used with confidence as a representative model for `` typical '' urban environments .algorithm [ alg : transprob ] describes how the developed model can be used to generate los states for a v2v link . for time steps , the algorithm generates a set of -tuple states by using transition equations ( table [ tab : eqnstrans ] ) .the states represent the time evolution of the link in the given environment and for a given density .initial state is selected randomly based on the probability of each state given the distance ( figs .[ fig : urbanlos ] , [ fig : highwaylos ] ) .since the model takes as input distance between tx and rx for each time step , there needs to exist consistency between subsequent distances between tx and rx in order to obtain credible results for los blockage and transition probabilities .therefore , we analyze the distribution of speed in each of the six scenarios ( highway / urban , low / medium / high density ) , which combined with the one - second interval results in the tx - rx distance variation . 
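a minimal sketch of the link - state generation procedure of algorithm [ alg : transprob ] is given below ; the functions los_probs(d) and trans_probs(d , s) stand in for the fitted distance - dependent equations of tables [ tab : eqnslos ] and [ tab : eqnstrans ] and must be supplied by the user , so this is a template rather than a parameterized model :

import random

STATES = ("los", "nlosv", "nlosb")

def pick(prob_by_state, rng=random):
    """Draw one state according to a dict of probabilities that sum to one."""
    u, acc = rng.random(), 0.0
    for s in STATES:
        acc += prob_by_state[s]
        if u <= acc:
            return s
    return STATES[-1]  # guard against rounding error

def generate_states(distances, los_probs, trans_probs):
    """distances: TX-RX distance [m] at each one-second step; the sequence must
    be spatially consistent, i.e., follow a continuous vehicle trajectory."""
    states = [pick(los_probs(distances[0]))]             # initial state from LOS probabilities
    for d in distances[1:]:
        states.append(pick(trans_probs(d, states[-1])))  # distance-dependent Markov step
    return states

# Toy placeholders (uniform probabilities, demonstration only; not the fitted model):
uniform = {s: 1.0 / 3.0 for s in STATES}
print(generate_states([50.0 + t for t in range(10)],
                      lambda d: uniform, lambda d, s: uniform))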
fig .[ fig : speed ] shows the distribution of relative speeds between tx and rx ; the figure helps in quantifying reasonable distance that the vehicles can travel in a given environment . in urban environments, the relative bearing of two vehicles can be between 0 and 360 degrees , whereas on highway the vehicles travel either in the same or opposite directions , with the slight deviation to this rule caused by on- and off - ramp traffic .therefore , as shown in fig .[ fig : speed ] , the relative speed in urban areas is mostly limited to 0 - 20 m / s , whereas the relative speed on highway will be distinct for same and opposite traffic , with same direction traffic ranging from 0 - 25 m / s and opposite ranging from 50 - 100 m / s .note that the speeds can readily be translated in distance traveled , since the time step under consideration is fixed to one second . for considerably different vehicle speeds ( e.g. , in case of heavy traffic jams ) , the los blockage and transition probabilities would need to be adjusted accordingly .l c c c + + * paris / model * & & & + corr .0.9565 & 0.9823 & 0.9528 + + + + * paris / model * & & & + los & 0.8037 & 0.9857 & 0.8319 + nlosb & 0.9587 & 0.8316 & 0.8102 + nlosv & 0.8488 & 0.9080 & 0.5851 + * input : * + * output : * since there are no comprehensive v2v link los blockage and transition probability models available in the literature , we compare the proposed model with the well - established 3gpp / itu urban micro ( umi ) los probability model , which is currently also used for d2d communication : where is distance between tx and rx , is a parameter set to 18 meters , and to 36 meters .note that umi los probability model distinguished between los and a generic non - los state , where los blockage is assumed to occur predominantly due to static objects ( i.e. , akin to nlosb ) . for illustration purposes, we use the states generated by the two models to calculate the path loss for a tx - rx pair that moves apart at 1 m / s starting from 1 to 500 meters .we use the parameters for urban medium density ( figs .[ fig : medurban ] and [ fig : medurbantrans ] ) . for los and nlosb path loss parameters ,we use the values based on measurements reported in .for nlosv , we use the multiple knife - edge attenuation model described in .however , to make the difference between states more clearly visible , we simplify the nlosv model so that it adds a constant 8 db attenuation compared to free - space path loss . also for clarity reasons ,we show path loss only ( i.e. , without shadow fading ) .[ fig : comparisonpl ] shows the path loss results for the proposed model and 3gpp umi los probability model .since umi model does not model dependency on the previous los state , the number of transitions between the states is considerably higher than in the proposed model , particularly when the probability of los is close to 50% ( i.e. , between 50 and 100 meters ) .this is an unrealistic behavior , since two vehicles can not move between los and non - los states so rapidly .the proposed model , on the other hand , takes into account the previous state and smoothly evolves the link between los , nlosb , and nlosv states .while fig .[ fig : comparisonpl ] shows a single realization of state changes and resulting path loss , we ran simulations for a large number ( 10 ) of v2v pairs with distances between 0 and 500 m. 
umi model resulted in an average state change every 5 seconds , while the proposed model averaged one state change every 17 seconds .the repercussion of a more realistic los evolution are manifold .specifically , performing simulations without the proposed model and using simple probability models such as umi results in : i ) inaccurate estimate of interference , since the calculated interference contributions from specific vehicles will be more sustained than what a simple model estimates ; ii ) overestimating the benefits of retransmissions ; a link in nlos state is likely to stay in that state for a longer period of time than what is estimated by a simple model , thus making the retransmissions less effective ; iii ) erroneous estimate of performance for applications requiring continuous transmission between two vehicles , since the link duration will be impacted by the unrealistically high number of transitions between the states .the result in fig .[ fig : comparisonpl ] also indicates why shadowing correlation models are not sufficient for modeling time- and space- evolved v2v links .shadowing decorrelation distance in urban and highway environment for v2v links is on the order of tens of meters ( e.g. , abbas et al . report decorrelation distances of 5 m in urban and 30 m in highway environment ) and is assumed do be independent of the tx - rx distance . however , fig . [fig : comparisonpl ] shows that a single decorrelation distance value can not capture the changing behavior as tx - rx distance changes : at low distances , the los decorrelation distance is high and decreases with increasing tx - rx distance , whereas nlosb decorrelation distance increases with increasing tx - rx distance . by using distance - dependent transition probabilities ,the employed model is capable of capturing this behavior .we performed a comprehensive analysis of los blockage evolution for v2v links in real cities and highways . to efficiently model the time evolution of los blockage for v2v links ,we employed a three - state markov chain , which we trained using a large set of realistically simulated v2v links ( more than 10 v2v links in high density scenarios ) in urban and highway environments . to enable simple incorporation of los blockage evolution in simulations , we performed curve - fitting of the model parameters .the resulting los probability and transition probability parameters provide a detailed and realistic evolution of los blockage that is a function of tx - rx distance , environment ( highway / urban ) , and vehicle density ( low / medium / high ) .while we focused on v2v communication , the methodology we presented can be used for other types of communication ( e.g. , v2i , d2d with pedestrians carrying the devices , etc . ) , provided that realistic mobility and map information is available .future work will include analysis and efficient modeling of cross - correlation of los blockage evolution for spatial consistency of multiple v2v links in geographic proximity .a. osseiran , f. boccardi , v. braun , k. kusume , p. marsch , m. maternia , o. queseth , m. schellmann , h. schotten , h. taoka _ et al . _ , `` scenarios for 5 g mobile and wireless communications : the vision of the metis project , '' _ ieee communications magazine _ , vol .52 , no . 5 , pp . 2635 , 2014 .d. s. baum , j. hansen , and j. salo , `` an interim channel model for beyond-3 g systems : extending the 3gpp spatial channel model ( scm ) , '' in _ ieee vehicular technology conference , vtc - spring _ , vol . 
5 , 2005 , pp . 3132 - 3136 .
w. viriyasitavat , m. boban , h .- tsai , and a. vasilakos , `` vehicular communications : survey and challenges of channel and propagation models , '' _ ieee vehicular technology magazine _ , vol . 10 , no . 2 , pp . 55 - 66 , 2015 .
d. dhoutaut , a. regis , and f. spies , `` impact of radio propagation models in vehicular ad hoc networks simulations , '' in _ vanet 06 : proceedings of the 3rd international workshop on vehicular ad hoc networks _ , pp . 69 - 78 , 2006 .
x. wang , e. anderson , p. steenkiste , and f. bai , `` simulating spatial cross - correlation in vehicular networks , '' in _ ieee vehicular networking conference ( vnc ) _ , ieee , 2014 , pp . 207 - 214 .
r. meireles , m. boban , p. steenkiste , o. tonguz , and j. barros , `` experimental study on the impact of vehicular obstructions in vanets , '' in _ ieee vehicular networking conference ( vnc ) _ , dec . 2010 , pp . 338 - 345 .
m. boban , r. meireles , j. barros , p. steenkiste , and o. k. tonguz , `` tvr tall vehicle relaying in vehicular networks , '' _ ieee transactions on mobile computing _ , vol . 13 , no . 5 , pp . 1118 - 1131 , may 2014 .
t. abbas , k. sjöberg , j. karedal , and f. tufvesson , `` a measurement based shadow fading model for vehicle - to - vehicle network simulations , '' _ international journal of antennas and propagation _ , vol . 2015 , 2015 .
m. ferreira , h. conceição , r. fernandes , and o. k. tonguz , `` stereoscopic aerial photography : an alternative to model - based urban mobility approaches , '' in _ proceedings of the sixth acm international workshop on vehicular inter - networking ( vanet ) _ , acm , new york , ny , usa , 2009 .
n. wisitpongphan , f. bai , p. mudalige , v. sadekar , and o. k. tonguz , `` routing in sparse vehicular ad hoc wireless networks , '' _ ieee journal on selected areas in communications _ , vol . 25 , no . 8 , pp . 1538 - 1556 , oct .
we investigate the evolution of line of sight ( los ) blockage over both time and space for vehicle - to - vehicle ( v2v ) channels . using realistic vehicular mobility and building and foliage locations from maps , we first perform los blockage analysis to extract los probabilities in real cities and on highways for varying vehicular densities . next , to model the time evolution of los blockage for v2v links , we employ a three - state discrete - time markov chain comprised of the following states : i ) los ; ii ) non - los due to static objects ( e.g. , buildings , trees , etc . ) ; and iii ) non - los due to mobile objects ( vehicles ) . we obtain state transition probabilities based on the evolution of los blockage . finally , we perform curve fitting and obtain a set of distance - dependent equations for both los and transition probabilities . these equations can be used to generate time - evolved v2v channel realizations for representative urban and highway environments . our results can be used to perform highly efficient and accurate simulations without the need to employ complex geometry - based models for link evolution .
the topological structure of electric power grids can be modeled as a graph . a graph is a pair , where the vertex set represents unique entities and the edge set represents binary relationships on . the study of the topological and electrical structure of power transmission networks is an important area of research with several applications such as vulnerability analysis , locational marginal pricing , controlled islanding , and location of sensors . in this paper , we focus on two interrelated problems : graph - based modeling and characterization of power transmission networks , and random graph models for synthetic generation of transmission networks . these two problems have a significant impact on several aspects of our understanding and functioning of the power grid . graph - theoretic approaches have been studied extensively in the context of power grids ( section [ sec : related ] ) . however , there are several limitations that need to be addressed . for example , graph - theoretic characterizations of power grids have produced conflicting features , and vulnerability studies based on such metrics have often led to misleading conclusions . graph - based models and algorithms are effective when vertices represent homogeneous entities in a consistent manner . however , _ a transmission network is inherently heterogeneous _ . it consists of entities such as generators , loads , substations , transformers , and transmission lines that operate at different voltage ratings . further , power transformers are not always treated consistently . they are generally represented as edges , but not always . consequently , a purely graph - based model will misrepresent a transmission network when it is not constructed carefully . as an example , an illustration of a model of the polish power grid is provided in figure [ fig : polishall ] , where nodes operating at different voltages are shown in different colors and sizes : the larger the node , the higher its nominal voltage . thus , edges connecting vertices at two different voltage levels are transformers . in this paper , we address the problem of accurately representing power transmission networks as a graph , and discuss the impact of this representation on graph - theoretic characterization and on modeling with random graph models .
by accurately modeling transformers , and consequently decomposing a network into regions of different voltage ratings ,we propose a novel network - of - networks model for power transmission networks .this simple idea has profound implications to the study of topological and electrical structure of power grids .the contributions we make in this paper are : a new decomposition method for power transmission networks using nominal voltage ratings of entities ( section [ sec : characterization ] ) ; empirical evidence of the hierarchical nature of transmission networks based on the analysis of real - world data representing the north american and european grids ( section [ sec : characterization ] and [ sec : interconnection ] ) ; characterization of the interconnection structure between networks of different voltages ( section [ sec : interconnection ] ) ; and presentation of random graph models two methods for modeling the network at a specific voltage level , and one method for modeling the interconnection structure of networks of different voltage ( section [ sec : randomgraphmodel ] ) .an electric power transmission network consists of power generators , loads , transformers , substations , and branches connecting different components .while branches and transformers are modeled as edges of a graph , all the other components ( buses ) are modeled as vertices .while this model is fairly consistent , the inclusion of transformers as edges makes the graph representation inconsistent .an illustration of a symbolic power system is shown in figure [ blockdiagram ] .generation is shown in black , transmission is shown in blue , and distribution in green .an illustration of a power system .power generation , shown in black , is connected to distribution , shown in green , via a transmission system , shown in blue .the system operates at different voltage ratings as shown in the figure.,scaledwidth=65.0% ] a power transformer is used to step up or step down voltage between its two end - points , and therefore , connects two different regions of voltage magnitudes .power transmitted at a certain voltage remains at that level unless stepped up or stepped down by another transformer connecting the network to a region of different voltage rating .therefore , the component of the network transmitting power at a certain voltage rating can be treated as one network where all the vertices consistently represent entities operating at the same voltage rating , and edges represent transmission lines .the overall transmission network can thus be decomposed into a _ network - of - networks _ ( defined further below ) , where each network operates at a certain voltage rating , and a pair of networks are connected via one or more transformers regulating the voltage levels . 
when expressed in this manner ,a different graph structure emerges , which is the focus of our study .a graph is a pair , where the vertex set represents unique entities and the edge set represents binary relations on .we define a network - of - networks model for power transmission as an undirected heterogenous graph with a weight function .the vertex set represents unique entities such as substations , generators and loads ; the edge set represents binary relationships on such that an edge represents either a power transmission line or a voltage transformer ; a set of values associated with representing the nominal voltage rating at a particular vertex as defined in the mapping function where each vertex operates at a given voltage level , and is a positive real number .a set of values associated with represent the type of an edge a transmission line or a transformer as defined in the mapping function where each edge is of a given type , and is an integer .traditionally , transmission networks are represented as graphs , making no distinction between entities operating at different voltage ratings and edges of different types . in our representation ,the vertex and edge types ( ) , and the mappings ( ) , allow us to make clear distinctions between different vertex and edge types .given the voltage levels for each substation ( ) , we can automatically detect the voltage transformers , , where such that . by pruning the transformer - edges from , the remaining graph represents a collection of subgraphs where each subgraph has vertices operating at the same voltage level and edges that are transmission lines .the structure of this collection of subgraphs is of great interest in studying the transmission network and is the focus of this paper .* graph - theoretic measures : * we use the following graph - theoretic measures in this paper .a _ path _ in a graph is a finite sequence of edges such that every edge in a path is traversed only once and any two consecutive edges in a path are adjacent to each other ( share a vertex in common ) .the _ average shortest - path length _ is defined as the average of shortest paths between all possible pairs of vertices in a graph .the longest shortest - path is called the _diameter _ of a graph. the _ local clustering coefficient _ of a vertex is the ratio between the actual number of edges among the neighbors of to the total possible number of edges among these neighbors .the _ average clustering coefficient _ of a graph is the sum of all local clustering coefficients of its vertices divided by the number of vertices .we now present details of the network - of - networks model by decomposing different datasets using the method described in algorithm [ alg : decompose ] . in some order in some order let be the input graph , and be the decomposed graph that is computed as the output of algorithm [ alg : decompose ] .the algorithm starts by adding the vertices in to a queue ( line in algorithm [ alg : decompose ] ) in an arbitrary order .the * while * loop on line iterates until all the vertices in the queue have been processed .let us consider the process for an arbitrary vertex chosen from ( line ) .vertex , which is added to a new queue , acts as the source of a new network ( a connected component of ) consisting all the vertices connected to through edges of the same voltage .the search for the network corresponding to is managed by adding and removing vertices from , and persists until becomes empty ( the * while * loop on line ) . 
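the line - by - line walk - through of the algorithm continues below ; as a compact companion , here is an illustrative sketch ( not the authors' code ) of the same decomposition , which drops transformer edges , i.e. , edges whose endpoints carry different nominal voltages , and collects the remaining same - voltage connected components :

from collections import deque

def decompose(vertices, edges, voltage):
    """vertices: iterable of vertex ids; edges: iterable of (u, v) pairs;
    voltage: dict mapping vertex id -> nominal voltage rating."""
    same_v_adj = {v: [] for v in vertices}
    transformers = []
    for u, v in edges:
        if voltage[u] == voltage[v]:
            same_v_adj[u].append(v)
            same_v_adj[v].append(u)
        else:
            transformers.append((u, v))        # transformer edge between voltage levels
    networks, seen = [], set()
    for s in vertices:                         # one BFS per unvisited vertex
        if s in seen:
            continue
        seen.add(s)
        comp, queue = [], deque([s])
        while queue:
            x = queue.popleft()
            comp.append(x)
            for y in same_v_adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        networks.append((voltage[s], comp))    # one same-voltage network
    return networks, transformers

# Tiny example: two 110 kV buses tied to two 220 kV buses through one transformer.
nets, xfmrs = decompose([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)],
                        {1: 110, 2: 110, 3: 220, 4: 220})
print(nets)    # [(110, [1, 2]), (220, [3, 4])]
print(xfmrs)   # [(2, 3)]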
for each vertex in , we process all the neighbors of ( line ) to check if they are of the same voltage rating .the neighbors are added ( lines and ) to the network corresponding to vertex if true ( same voltage ) .once all the connected vertices are added to , the network is added to ( line ) .the algorithm resumes from a new vertex that is chosen arbitrarily until all the vertices in have been processed and added to corresponding networks in . [cols="<,>,>,>,>",options="header " , ] we now present the details of the interconnection structure of western and eastern interconnects computed using algorithm [ algo.interconnect ] .key details are summrized in table [ interconnect ] .we note that the interconnection graphs of the polish and texas interconnect are small and unsuitable for analysis .the topological properties of the interconnection networks is substantially different from the properties of the decomposed networks presented in section [ sec : characterization ] . while the average clustering coefficient of these networks are higher , the average shortest path and diameter are significantly smaller ( relative to decomposed networks ) .the fundamental differences in the topological structure is also evident from the visualization of the two systems presented in figure [ fig : interconnect ] . from the interconnection graphs we observe a hierarchical nature of the manner in which different networks at different voltages interconnect with each other .the lowest voltage levels are used for distribution purposes and generally form degree - one vertices in the interconnection networks .the remaining vertices ( networks of higher voltage ) form a hierarchical structure spanning the network . in order to highlight this feature, we present a -core of the interconnection network of the eastern interconnect in figure [ fig : interconnect2core ] .the -core of a graph is a subgraph of the graph such that each vertex has degree or more .we will conclude this section by observing that the topological structure of the interconnection network is significantly different from the topological structure of the decomposed networks operating at the same voltage level .further , there exists a hierarchical structure based on the voltage levels to the manner in which different vertices are linked to one another .an important research topic in the study of electric power grids is the ability to synthetically generate graphs that match the characteristics of real - world power grids .we refer the reader to for a detailed analysis of different random graph models and their comparison to real - world datasets .we summarize key findings from previous work in section [ sec : related ] . in this section ,we present two random graph models to generate graphs that match the characteristics of graphs at a given voltage level that were presented in section [ sec : characterization ] .we then present graph models that match the characteristics of the interconnection structure that we presented in section [ sec : interconnection ] . 
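to make the voltage - based decomposition concrete , the following is a minimal sketch of the idea described in this section ( it is not the authors' algorithm [ alg : decompose ] verbatim ; the function name , the adjacency - list input format and the toy data are our own illustrative assumptions ) . it performs a breadth - first search that only follows edges whose end - points share the same nominal voltage rating , so each resulting component corresponds to one network of the network - of - networks .

```python
from collections import deque

def decompose_by_voltage(adjacency, voltage):
    """Split a transmission graph into same-voltage components.

    adjacency : dict mapping vertex -> iterable of neighbouring vertices
    voltage   : dict mapping vertex -> nominal voltage rating
    Edges joining vertices of different voltage are treated as
    transformers and are not traversed.
    """
    unvisited = set(adjacency)          # queue of unprocessed vertices
    networks = []                       # the decomposed collection of subgraphs
    while unvisited:
        source = unvisited.pop()        # arbitrary starting vertex
        component, frontier = {source}, deque([source])
        while frontier:                 # BFS restricted to one voltage level
            v = frontier.popleft()
            for u in adjacency[v]:
                # follow the edge only if both end-points share a rating
                if voltage[u] == voltage[v] and u in unvisited:
                    unvisited.discard(u)
                    component.add(u)
                    frontier.append(u)
        networks.append((voltage[source], component))
    return networks

# toy example: vertices 1-3 at 230 kV, vertices 4-5 at 115 kV,
# so edge (3, 4) acts as a transformer and is not traversed
adj = {1: [2, 3], 2: [1], 3: [1, 4], 4: [3, 5], 5: [4]}
kv = {1: 230, 2: 230, 3: 230, 4: 115, 5: 115}
print(decompose_by_voltage(adj, kv))
```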
by combining the two separate models, we propose that synthetic graphs can be generated to match the overall characteristics of real - world power grids .we first present two random graph models to generate graphs at a given voltage .these two models can be seen as a variant of the random geometric graphs .a -dimensional random geometric graph , , is a graph formed by randomly placing vertices in a -dimensional space and adding an edge between pairs of vertices whose euclidean distance is less than or equal to .sparsity and structure of the generated graph can be controlled by varying the distance parameter .we note here that we experimented with several random graph models and found that the random geometric graph models to be the most promising . here, we present two such models . the first model is given in algorithm [ algo.simplemindist ] .the algorithm takes the size of the graph ( number of vertices and edges ) as the input , and generates a random graph as the output .the algorithm starts generating two - dimensional points uniformly at random ( line in algorithm [ algo.simplemindist ] ) . for each vertex ,a set of neighbors are chosen by minimizing the euclidean distance as follows : where represents the adjacency set of vertex .an additional edge is generated with with probability ( line in algorithm [ algo.simplemindist ] ) .randomly generate planar coordinates for ( , ) from a uniform distribution generate edges ( ) by iteratively selecting to minimize the euclidean distance between and using equation [ eq : mindist ] generate an additional edge with probability using equation [ eq : mindist ] graphs generated by algorithm [ algo.simplemindist ] do not match all the desired properties of power grids .therefore , we consider an adapted version of this algorithm that modifies how vertices get connected using the distance function given by equation [ eq : mindist ] .the modification is driven by a bisection cost .unlike the previous algorithm , new nodes can either be connected to an existing node through a new edge , or it can be placed near an existing edge and become a bisecting node for that edge .this change is motivated by economical considerations in the construction of power transmission networks .for example , consider the construction of a new town that needs to connect to the power grid .the town can either be connected via a transmission line or feeder to an existing substation , or a new substation could be built , bisecting an existing transmission line . with this modification , the min - dist selection criteria ( eq . [ eq : mindist ] )is modified as follows : where is the distance between point and the nearest point along line segment , and is an exogenously selected bisection cost .if is smaller than , then the algorithm creates a new edge .otherwise , the new node bisects the existing edge .the second step in generating synthetic transmission networks is to generate random graphs to match the characteristics of the interconnection graphs .we propose the use of preferential attachment ( pa ) model for this purpose .we generate the preferential attachment ( pa ) graphs using the algorithm proposed in .since we aim to generate graphs of a given size ( number of vertices , , and edges ) , we modify the algorithm in to allow for a fractional average degree . 
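before continuing with the preferential - attachment construction , here is a minimal sketch in the spirit of the basic min - dist model described above ( the bisection - cost variant is not implemented , and the extra - edge probability and incremental nearest - neighbour rule are simplifying assumptions of ours rather than the exact procedure of algorithm [ algo.simplemindist ] ) .

```python
import math
import random

def min_dist_graph(n, extra_edge_prob=0.3, seed=0):
    """Random geometric-style graph: each new vertex attaches to its
    nearest already-placed vertex, plus an occasional second edge."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]  # planar coordinates
    edges = set()

    def dist(i, j):
        return math.hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1])

    for v in range(1, n):
        # rank existing vertices by Euclidean distance to the new vertex v
        ranked = sorted(range(v), key=lambda u: dist(v, u))
        edges.add((ranked[0], v))                      # min-dist edge
        if len(ranked) > 1 and rng.random() < extra_edge_prob:
            edges.add((ranked[1], v))                  # occasional extra edge
    return pts, edges

pts, edges = min_dist_graph(50)
print(len(edges), "edges for 50 vertices")
```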
for each new vertex , edges are added by linking to an existing vertex , where is chosen randomly from the probability distribution , where is the degree of vertex .further , an additional edge is added with probability to generate a preferential attachment graph with vertices and approximately edges .we generate erds - rnyi ( er ) random graphs by randomly adding edges between a pair of vertices until there are exactly edges in the graph .further details are provided in . in figure[ fig : degdistint ] , we provide the cumulative degree distribution of the interconnection graphs of the western and eastern interconnects ( described in section [ sec : interconnection ] ) along with degree distributions from random graphs preferential attachment ( pa ) and erds - rnyi ( er ) of the same sizes .we observe that preferential attachment provides a better fit to the degree distribution of interconnection graphs .we conclude this section by noting that accurate models for synthetic recreation of transmission networks should include several aspects such as geography , cost of adding vertices and edges , size and distribution of power transformers , and the inherent hierarchy of networks of different voltage ratings .while we present a multi - step process involving different random graph models , careful construction and validation is part of our work in the near future .many researchers have explored the use of random graphs to model power grid , but in each case the resulting random graph does not fully model the structure of interest . in most cases , authors attempt to match graph characteristics from power grid networks , such as degree distribution , average path length , and clustering coefficient , in their random models . in author examines how vulnerability tests on the western interconnect ( wecc ) and the nordic grid , and then compare with the same tests on an erds - rnyi ( er ) random graph and the scale - free model of barabsi and albert .they first note that the clustering coefficient and average path length of the real graphs are significantly larger than that of both types of random graphs .when comparing vulnerability to failure of generators ( i.e. , removal of vertices ) they see that the real networks show more susceptibility than the random networks .the authors of also show that the topological properties of the eastern united states power grid and the ieee 300 system differ significantly from random networks created using preferential - attachment , small - world , and er models .they provide a new model , the minimum distance graph , which more closely matches measured topological characteristics like degree distribution , clustering coefficient , diameter , and assortativity of the real networks .we build on these ideas in our work .extensions of the minimum - distance model are discussed in section [ sec : randomgraphmodel ] . 
in a more recent paper by some of the same authors they look more specifically at both the electrical and topological connectivity of three north american power infrastructures .they again compare these with random , preferential - attachment , and small - world networks and see that these random networks differ greatly from the real power networks .they propose to represent electrical connectivity of power systems using electrical distances rather than physical connectivity and geographic connections .in particular , they propose a distance based on sensitivity between active power transfers and nodal phase angle differences .electrical distance is calculated as a positive value for all pairs of vertices which then yields a complete weighted graph .therefore , to make it more comparable to a geographic network the authors use a threshold value and keep those edges which are below the threshold .pagani and aiello , in , present a survey of the most relevant work using complex network analysis to study the power grid . in this surveythey remark that most of the networks studied were high voltage from either north america , europe , or china .they remark that most of the surveyed work show that degree distribution is exponential .the geography of the country seems to be important as the results differ somewhat between countries with differing geographies .one point of agreement between all studies is that the power grid is resilient to random failures but extremely vulnerable to attacks targeting nodes with high degree or high betweenness scores .finally , we cite the work in in which the authors characterize many graph measures in the context of power grid graphs . as an example , they show that power grids are sparsely connected , the degree distribution has an exponential tail , and the line impedance has a heavy - tailed distribution . based on their findingsthey propose an algorithm to generate random power grids which features the same topology and electrical characteristics discovered from real data .they , like us , take a hierarchical approach to generating synthetic power networks. however , their approach differs by looking at geographic zones , whereas our approach is to break up the network by voltage level .our approach requires no geographic knowledge of the system and leads to a systematic approach for annotating the nodes and edges with different electrical properties such as voltage ratings .graph - theoretic analysis of power grids has often resulted in misleading conclusions . in this paper , we hypothesized that one of the reasons for such conclusions is the inability to account for heterogeneity in a power grid .we therefore developed a new method for decomposition based on nominal voltage rating , and using power transformers to generate a network - of - networks model of power transmission networks . 
while the individual networks operating at the same voltage level are characterized by exponential degree distribution , large values for average shortest - path length and diameter , and low clustering coefficient ; the interconnection structure representing the interconnection of these networks is characterized by non - exponential degree distribution , small values for average shortest - path length and diameter , and relatively higher clustering coefficient .consequently , we proposed two random graph methods for synthetic generation of power transmission networks .our approach not only models the topology , but also provides a method for annotating the network with voltage ratings so that the electrical properties of a grid can be modeled correctly . to the best of our knowledge ,this approach is novel .the resulting characterization of power transmission networks and the ability to synthetically recreate networks has important implications for studying the vulnerability of power systems , evolution of power networks , and transmission expansion planning .thus , the ideas proposed in this paper hold the potential to significantly improve our understanding of several aspects of electric infrastructure networks , as well as other critical infrastructure networks .this work was supported in part by the applied mathematics program of the office of advance scientific computing research within the office of science of the u.s .department of energy ( doe ) .pacific northwest national laboratory ( pnnl ) is operated by battelle for the doe under contract de - ac05 - 76rl01830 .p. hines , e. cotilla - sanchez , and s. blumsack , `` do topological models provide good information about electricity infrastructure vulnerability ?, '' _ chaos : an interdisciplinary journal of nonlinear science _ ,20 , no . 3 , p. 033122, 2010 .j. anderson and a. chakrabortty , `` graph - theoretic algorithms for pmu placement in power systems under measurement observability constraints , '' in _ smart grid communications ( smartgridcomm ) , 2012 ieee third international conference on _ , pp . 617622 , nov 2012 .e. cotilla - sanchez , p. hines , c. barrows , and s. blumsack , `` comparing the topological and electrical structure of the north american electric power infrastructure , '' _ systems journal , ieee _ , vol .6 , pp . 616626 , dec 2012 .r. d. zimmerman , c. e. murillo - snchez , and r. j. thomas , `` matpower : steady - state operations , planning and analysis tools for power systems research and education , '' _ ieee transactions on power systems _ , vol .26 , pp .1219 , feb .2011 .p. hines , s. blumsack , e. c. sanchez , and c. barrows , `` the topological and electrical structure of power grids , '' in _ proceedings of the 43rd hawaii international conference on system sciences _ , 2010 .z. wang , a. scaglione , and r. j. thomas , `` on modeling random topology power grids for testing decentralized network control strategies , '' in _ proceedings of the 1st ifac workshop on estimation and control of networked systems _ , pp .114119 , 2009 .
modeling power transmission networks is an important area of research with applications such as vulnerability analysis , study of cascading failures , and location of measurement devices . graph - theoretic approaches have been widely used to solve these problems , but are subject to several limitations . one of these limitations is the inability to model a heterogeneous system in a consistent manner using the standard graph - theoretic formulation . in this paper , we propose a _ network - of - networks _ approach for modeling power transmission networks in order to explicitly incorporate heterogeneity in the model . this model distinguishes between components of the network that operate at different voltage ratings , and also captures the intra- and inter - network connectivity patterns . by building the graph in this fashion we present a novel , and fundamentally different , perspective of power transmission networks . we believe this approach will have a significant impact on the graph - theoretic modeling of power grids and will lead to a better understanding of transmission networks .
prior to making novel or useful contributions to scholarly work , an individual must acquire pertinent knowledge . since acquisition of such knowledge often involves time - consuming study or extensive experience ,most individuals tend to build their expertise in a small range of fields ; and experiencing success in one subject area may , through positive reinforcement and the ability to obtain more resources , result in additional focus on that subject . on the other hand ,there are myriad successful individuals who dabble in many sciences , inventors who invent diverse gadgets , and wikipedia editors who edit pages on seemingly disparate topics .one might argue that their versatility is a reflection of superior abilities that can yield greater opportunities for varied collaborations and cross - polination of ideas . in this paper , we present the first comprehensive , large - scale analysis of this relationship between individual focus and performance across a broad range of both traditional and modern knowledge collections .much previous research in this area has aimed to quantify the benefit of interdisciplinarity among researchers at the _ group _ level .a study of scholarly articles in the uk , for example , found that research articles whose coauthors are in different departments at the same university receive more citations than those authored in a single department , and those authored by individuals across different universities yield even more citations on average .multi - university collaborations that include a top tier - university were found to produce the highest - impact research articles .it has also been demonstrated that scholarly work covering a range of fields and patents generated by larger teams of coauthors tend to have greater impact over time .collaborations between experienced researchers who have not previously collaborated fare better than repeat collaborations . in the area of nanotechnology authors who have a diverse set of collaborators tend to write articles that have higher impact .finally , diverse groups can , depending on the type of task , outperform individual experts or even groups of experts .all of this work is evidence of a benefit in bringing together diverse individuals .it does not demonstrate , however , whether diversity in research focus is beneficial at the _ individual _ level .one exception is a study of political forecasting that established that `` foxes '' , individuals who know many little things , tend to make better predictions about future outcomes than `` hedgehogs '' who focus on one big thing .our work addresses knowledge contribution in a much broader context than forecasting and , more importantly , quantifies the relationship between individuals narrowness of focus and the corresponding quality of their contributions .in order to cover a broad range of knowledge - generating activities , we study several collections of traditional scholarly publications in addition to recent , web - based media collections . 
the traditional media we consider include patents and research articles .our patent collection consists of 5.5 million patents filed with the uspto between 1976 and 2006 .we consider two sources of research articles : jstor and the american physical society ( aps ) .the jstor corpus consists of 2 million articles from 1108 journals in the natural sciences , social sciences , and humanities , and the aps dataset we consider covers over 200,000 research articles in the single discipline of physics .we complement data from these traditional publication venues with data from two recent , online types of knowledge - sharing activity : wikipedia and a collection of question - answering forums .wikipedia is a collaborative online effort to document all of human knowledge in a systematic way into a popular internet - based encyclopedia .the question and answer forums we study are yahoo answers ( english ) ; baidu knows ( chinese ) ; and naver knowledge in ( korean ) . on each of the sites, millions of questions are answered each month by individuals with a wide range of expertise .online knowledge - sharing activity includes not just those who specialize in knowledge generation and dissemination , i.e. professional researchers and scholars , but also others who gained their expertise through study and experience .in addition to providing data on different types of individuals , these datasets represent knowledge generation at different scales .authoring a research article or patent in most cases involves weeks to years of research , culminating in a significant new result worthy of publication .in contrast , contributing a fact to wikipedia or answering a question posed in an online forum may involve little more than a simple recall of previously attained knowledge and a few minutes of the contributor s time . in evaluating focus across such a broad range of activities, we aimed to use a metric that captures three qualities : _ variety _ , or how many different areas an individual contributes to ; _ balance _ , or how evenly their efforts are distributed among these areas ; and _ similarity _ , or how related those areas are .we use the stirling measure , which captures all three aspects : where is the proportion of the individual s contributions in category and is a measure of similarity between categories and , inferred from the number of joint contributors between two categories and .this metric assigns a narrower ( higher ) focus value to an individual who contributes to fewer , related areas than to someone who contributes in many unrelated areas .the categories across which focus is measured differ by the type of knowledge - sharing medium .an inventor s proportion of contributions in subject is proportional to the number of times the class is assigned by the inventor or patent examiner to the inventor s patents .articles in the aps dataset are classified according to the physics and astronomy classification scheme . for jstor articles , in the absence of a pre - defined category structure , we used unsupervised topic models on the full text of authors research articles .wikipedia articles are situated within wikipedia s category hierarchy , while answers provided in q&a forums are sorted according to the hierarchy of categories of the corresponding questions . for each dataset , we sought a relevant , objective measure of quality of a contribution and evaluated it in the context of its peers . 
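before turning to the quality measures , a minimal sketch of the focus measure just described may be useful . the exact equation was not reproduced above , so the form used here , a stirling - type measure f = sum_ij p_i p_j s_ij in which higher values mean narrower focus , is our assumption based on the description ; the function name and toy similarity matrix are illustrative .

```python
import numpy as np

def stirling_focus(counts, similarity):
    """Focus of one contributor.

    counts     : length-k vector of contributions per category
    similarity : k x k matrix, similarity[i, j] between categories i and j
    Returns sum_ij p_i * p_j * s_ij (higher = narrower focus), assuming
    this form of the Stirling-type measure.
    """
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    s = np.asarray(similarity, dtype=float)
    return float(p @ s @ p)

# two categories that are fairly similar, plus one unrelated category
sim = np.array([[1.0, 0.8, 0.1],
                [0.8, 1.0, 0.1],
                [0.1, 0.1, 1.0]])
print(stirling_focus([9, 1, 0], sim))   # concentrated in similar areas -> higher
print(stirling_focus([4, 3, 3], sim))   # spread over unrelated areas  -> lower
```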
for research articles , we measured each article s citation count relative to those of other articles in the same discipline and year .likewise , patents citation counts were compared with those of other patents in the same patent classes and years . in doing this ,we control for discipline - specific factors that can impact a publication s citation count such as publication cycle length and number of publications in the discipline . for wikipedia contributions , we consider the percentage of words an author newly introduces to an article that survive subsequent revisions .finally , for q&a forums , we rely on the asker s rating of answers : a good contributor should have their answer selected as best more often than expected by chance . using these measures of focus and quality , we find that focus is weakly yet consistently positively correlated with quality across all types of knowledge contribution systems , as summarized in table [ tab : focusqq ] .the relationship between focus and quality is detailed further in figure [ fig : qvsf ] , which shows the variation in average quality for individuals grouped by their levels of focus . as focus increases , so does average quality ; but the trend levels off or even reverses for extremely focused individuals .this is clarified by plotting average focus at a given level of quality ( see figure [ fig : fvsq ] ) . while high - quality contributors are more narrowly focused than others on average, very poor contributors sometimes also dwell in a single area . in q&a forums ,we find further that narrowly focused users with poor track records of giving best answers tend to give answers that are significantly shorter than those of other users . [ cols="<,<,<,<",options="header " , ]* research articles*. a snapshot of * jstor * data includes 2.0 million research articles with 6.6 million citations between them .jstor spans over a century of publications in the natural and social sciences , the arts , and humanities . for this dataset , we needed to address name ambiguity .for example there were 26,000 instances where a person with the last name of smith authored an article and 728 unique combinations of initials appearing alongside `` smith '' .identifying two different individuals as being one and the same would tend to introduce data points with low focus and an inflated number of articles .since both variables are related to quality , we sought to exclude such instances .we excluded authors with , where is the number of first names or initials the inventor s last name occurs with in the data set , and is the number of last names the inventor s first name occurs with .we also collapsed matching names and initials if there was only one matching first name / inital pair and the last name occurred with fewer than 50 first names .this left us with 37,031 authors with 10 or more publications , for whom we were reasonably sure that they were uniquely identified .using latent dirichlet allocation , we generate 100 topics over the entire corpus of research articles .each document was assigned a normalized score for each of the 100 topics , and the pairwise topic similarity matrix was computed from cosines of vector values across documents .an author s distribution across topics was computed by averaging the topic vectors of all of the articles they authored . 
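a minimal sketch of the two steps just described , computing the pairwise topic similarity matrix from cosines across documents and averaging article topic vectors into an author profile , is given below ( the function name , the random toy data and the exact normalization are our own assumptions , not the authors' code ) .

```python
import numpy as np

def topic_similarity_and_author_profiles(doc_topic, authorship):
    """doc_topic  : (n_docs, n_topics) matrix of normalized LDA topic scores
    authorship : dict author -> list of document indices
    Returns (topic_similarity, author_profiles)."""
    x = np.asarray(doc_topic, dtype=float)
    # cosine similarity between topic columns, taken across documents
    norms = np.linalg.norm(x, axis=0, keepdims=True)
    topic_sim = (x.T @ x) / (norms.T @ norms)
    # an author's topic distribution: average of their articles' topic vectors
    profiles = {a: x[idx].mean(axis=0) for a, idx in authorship.items()}
    return topic_sim, profiles

doc_topic = np.random.dirichlet(np.ones(5), size=20)   # 20 toy docs, 5 topics
sim, prof = topic_similarity_and_author_profiles(doc_topic, {"smith": [0, 3, 7]})
print(sim.shape, prof["smith"].round(3))
```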
for robustness, we repeated the analysis with 250 topics instead of 100 , and found quantitatively similar correlation between focus and quality , although focus scores were lower due to the finer granularity .the quality of an article is measured as the number of times the article is cited , divided by the number of times other articles in the same area and year are cited .citations originate within the dataset . by normalizing quality by area, we mitigate the possible biases introduced by some areas being better represented in the dataset than others .our database of * american physical society * publications included physical review letters , and physical review a - e journal articles .we excluded reviews of modern physics as we were considering the impact of original research rather than review articles .the data set contained 396,134 articles published between 1893 and 2006 , with 3,112,706 citations between them . for our purposes ,we were limited to the 261,161 articles with pacs ( physics and astronomy classification scheme ) codes associated with articles published after 1977 .the pacs hierarchy has 5 levels , and we performed our analysis at the level of the 76 main categories , such as 42 ( optics ) and the 859 2nd level categories , e.g. 42.50 ( quantum optics ) . * patents*. the patent data set contains all 5,529,055 patents filed between 1976 and 2006 , in 468 top level categories .we construct a similarity matrix for the 468 categories , reflecting the frequency with which inventors in one category also file patents in another .there are 3,643,520 patents citing 2,382,334 others , for a total of 44,556,087 citations .we excluded inventors with .this makes it unlikely that we would identify two separate individuals as being one .we measure an inventor s impact according to a citation count normalized by the average number of citations for other patents in the same year and categories as those filed by the inventor .* q&a forums * : we obtained snapshots of activity on q&a forums with uniquely identified users posting answers to questions in distinct categories .we perform our analysis at the subcategory level , which gives us enough resolution to differentiate the question topics , while supplying a sufficient number of observations in each subcategory .we use best answers as a proxy for answer quality .the best answer is selected by the user who posed the question .if this user does not select a best answer , it may be selected via a vote by other users .the quality metric we used was the score , which compares the number of answers the user gives that were selected as best among others , relative to the expected number of best answers .the expected number depends on the number of other answers provided to each question . _ ( observed - . the expected number of best answers is simply given by , where is the total number of users answering question . *wikipedia * : our wikipedia dataset is a meta - history dump file of the english wikipedia generated on nov .4th , 2006 .the dump file has the entire revision history of about 1.5 million encyclopedia pages , of which we parsed 100,000 , or about 7% . 
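returning briefly to the q&a quality metric described above : the expected number of best answers under random selection is the sum of 1 / n_q over the questions a user answered , and the score compares the observed count with this expectation . a minimal sketch follows ( the plain observed - minus - expected form is our assumption ; the paper's exact normalization was not reproduced above , and the function name is ours ) .

```python
def best_answer_score(answered_question_sizes, n_best):
    """answered_question_sizes : for each question the user answered, the
        total number of answers that question received
    n_best : how many of the user's answers were chosen as best
    Under random selection each answer wins with probability 1/n_q,
    so the expected number of best answers is sum(1/n_q)."""
    expected = sum(1.0 / n_q for n_q in answered_question_sizes)
    return n_best - expected

# a user answered 4 questions (with 5, 2, 10 and 4 competing answers)
# and was chosen best twice: 2 - 1.05 = 0.95
print(best_answer_score([5, 2, 10, 4], 2))
```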
in order to verify that our sample is unbiased with respect to topic distribution, we compare the category and subcategory distributions of our sample to that of a larger corpus of 1 million pages .the two distributions have a nearly perfect correlation ( ) .articles are a product of varying number of revisions , from several to 10,000 for single article .revisions are contributed by either registered or anonymous users . since anonymous users revision histories are non - traceable , we only consider registered users whose unique user names are associated with at least 40 revisions .we excluded wikipedia administrators from our study because they may perform a primarily editorial role . in like manner , to better filter the noise of measuring the quality of words by the final version of articles , we only choose pages in which ewer than 5% of the revisions occurred in the 30 days prior to the data dump . a wikipedia contributor s focus and entropy were calculated from the second - level categories of the pages they edit .each wikipedia article belongs to one or more categories .we truncated each hierarchical category to one of the roughly 500 second - level categories .the quality of a contribution is measured in terms of , the number of new words added by a user to wikipedia articles , such that the words were not present in any previous revisions of those articles .we found a high correlation between the number of new words that survive 5 revisions , and the number that survive to the last revision of the article ( ) , consistent with previous analyses of edit persistence .we therefore constructed a simple metric by taking the proportion of new words introduced by the user that are retained in the last version of a sufficiently frequently edited article : .we thank ibm for providing the patent data , and jstor , aps , and katy borner for providing the article citation data .we would also like to thank michael mcquaid , jure leskovec , scott page and eytan adar for helpful comments .this research was supported by muri award fa9550 - 08 - 1 - 0265 from the air force office of scientific research and nsf award iis 0855352 .
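as a small illustration of the word - retention measure used above for wikipedia contributions , here is a minimal sketch ( a deliberate simplification : it checks whether a user's newly introduced words appear in the final revision's vocabulary rather than tracking each word through every intermediate revision ; the function name and toy example are ours ) .

```python
def word_retention(new_words_by_user, final_revision_text):
    """Proportion of the words a user newly introduced to an article that
    still appear in the article's final revision.

    new_words_by_user   : iterable of words first added by this user
    final_revision_text : full text of the article's last revision
    """
    new_words = list(new_words_by_user)
    if not new_words:
        return 0.0
    final_vocab = set(final_revision_text.lower().split())
    kept = sum(1 for w in new_words if w.lower() in final_vocab)
    return kept / len(new_words)

print(word_retention(["photosynthesis", "chlorophyll", "speling"],
                     "Photosynthesis requires chlorophyll in plants."))
```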
before contributing new knowledge , individuals must attain requisite background knowledge or skills through schooling , training , practice , and experience . given limited time , individuals often choose either to focus on a few areas , where they build deep expertise , or to delve less deeply and distribute their attention and efforts across several areas . in this paper we measure the relationship between the narrowness of focus and the quality of contribution across a range of both traditional and recent knowledge - sharing media , including scholarly articles , patents , wikipedia , and online question and answer forums . across all systems , we observe a small but significant positive correlation between focus and quality .
the binary erasure channel ( bec ) was introduced by elias in 1955 .it counts lost information bits as being `` erased '' with probabilities equal to .currently , the bec is widely used to model the internet transmission systems , in particular multicasting and broadcasting . as a milestone , luby _ et . proposed the first realization of a class of erasure codes lt codes , which are rateless and are generated on the fly as needed .however , lt - codes can not be encoded with constant cost if the number of collected output symbols is close to the number of input symbols . in , shokrollahi introduced the idea of raptor codes which adds an outer code to lt codes .raptor codes have been established in order to solve the error floors exhibited by the lt codes . on the other hand ,low - density parity - check ( ldpc ) codes have been studied to for application to the bec .the iterative decoding algorithm , which is the same as gallager s soft - decoding algorithm , was implemented .capacity - achieving degree distributions for the binary erasure channel have been introduced in , and .finite - length analysis of ldpc codes over the bec was accomplished in . in that paper , the authors have proposed to use finite - length analysis to find good finite - length codes for the bec . in this paper, we show the derivation of a new decoding algorithm to improve the performance of binary linear block codes on the bec .the algorithm can be applied to any linear block code and is not limited to ldpc codes . starting with superposition of the erased bits on the parity - check matrix , we review the performance of the iterative decoding algorithms , described in the literature , for the bec , principally the recovery algorithm and the guess algorithm . in section [ sec:03 ], we propose an improvement to the guess algorithm based on multiple guesses : the multi - guess algorithm and give a method to calculate the minimum number of guesses required in the decoding procedure . in this section, we also describe a new , non iterative decoding algorithm based on a gaussian - reduction method by processing the parity - check matrix . in section [ sec:04 ] ,we compare the performance of these algorithms for different codes using computer simulation . in section [ sec:05 ], we discuss the application of these decoding algorithms for the internet .section [ sec:06 ] concludes the paper .let denote the parity - check matrix . considering an binary linear block code , we assume that the encoded sequence is . after being transmitted over the erasure channel with erasure probability , the encoded sequence can be divided into the transmitted sub - sequence and the erased sub - sequence , denoted as and respectively , where .corresponding to the parity check matrix of the code , we can generate an erasure matrix ( ) which contains the positions of the erased bits in .then we denote the set of erased bits that participate in each parity check row by with standing for `` horizontal '' and the number of erased bits in is denoted by .similarly we define the set of checks in which bit participates , with standing for `` vertical '' , and the number of erased bits in is denoted by .let and . the matrix representation is shown in fig .[ fig:1 ] , where an `` x '' represents an erasure . 
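to make the notation concrete , a minimal sketch of how the erased - bit sets can be built from a parity - check matrix and the erasure positions is given below ( the function name , the dictionary representation and the toy ( 7,4 ) hamming matrix are illustrative assumptions , not part of the original description ) .

```python
import numpy as np

def erasure_sets(H, erased_positions):
    """H                : (m, n) binary parity-check matrix (0/1 entries)
    erased_positions : indices of erased code bits
    Returns (rows_to_erased, erased_to_rows): for every check row the
    erased bits it involves, and for every erased bit the checks it joins."""
    H = np.asarray(H)
    erased = set(int(j) for j in erased_positions)
    rows_to_erased = {i: {int(j) for j in np.flatnonzero(H[i]) if int(j) in erased}
                      for i in range(H.shape[0])}
    erased_to_rows = {j: {i for i in range(H.shape[0]) if H[i, j]}
                      for j in sorted(erased)}
    return rows_to_erased, erased_to_rows

# toy (7,4) Hamming parity-check matrix with bits 2 and 5 erased
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
rows, cols = erasure_sets(H, [2, 5])
print(rows)   # erased bits appearing in each check row
print(cols)   # check rows touched by each erased bit
```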
a matrix representation of the erased bits , width=144 ] in , the message - passing algorithm was used for reliable communication over the bec at transmission rates arbitrarily close to channel capacity .the decoding algorithm succeeds if and only if the set of erasures do not cause stopping sets . for completeness, this algorithm is briefly outlined below : + * recovery algorithm * * _ step 1 _ generate the and obtain the . * _ step 2 _ for , if , we replace the value in the bit position with the xor of the unerased bits in that check equation .then we remove the erasure from the erasure matrix . * _ step 3 _ continue from step 2 until all the erased bits are solved or the decoding can not continue further .the decoder will fail if stopping sets exist .we can break the stopping sets by performing several `` guesses '' of the unsolved erased bits .this algorithm is called the guess algorithm . +* guess algorithm * * _ step 1 _ run the decoder with recovery algorithm until it fails due to stopping set(s ) . *_ step 2 _ in order to break the stopping set , when , we guess one of the erased symbols and update the erasure matrix and . * _ step 3 _ continue from step 1 until all the erased symbols are solved or the decoding can not continue further . if the decoder can not continue , declare a decoder failure and exit . * _ step 4 _ creat a list of solutions , where is the number of guesses made . from the list , ,pick the one that satisfies .obviously , compared to the recovery algorithm , the complexity of this algorithm increases with .usually , we limit the number of guesses to a small number . if after guesses , the decoding still can not be finished , a decoding failure is declared . for sparse codes with low - density , e. g. ldpc codes , the guess algorithm can improve the performance with guesses as shown in section [ sec:04 ] .the decoding algorithm is more efficient when the bits to be guessed are carefully chosen .these are termed `` crucial''bits .the crucial bits are chosen on the basis of the highest value of with the value of .for non - sparse linear codes , it is common to encounter more than 2 unsolved symbols in each row of after running the guess algorithm , due to the high - density of their parity check matrix . in these cases, we can not break the stopping set by guessing one erased symbol in a row only .more than 1 erased symbols at one time need to be guessed .we can calculate the minimum number of guesses before the decoding .consider the chosen erased symbols in each row as an erased group .let denote the set of rows with erasures , that is , .and is the set of rows which satisfies : + then where 1 accounts for the need for at least one `` crucial '' row . when the guessing process stops , there are more than 2 erased symbols in each erased row .the rows that have more than two bits which do not participate in any other row ( i. e. ) can not be solved by other rows , and so at least one of these bits has to be guessed .so the minimum number of guesses equals to the number of all the independent guesses plus one more `` crucial '' guess to solve the other rows . for the multi - guess algorithm ,a whole row is guessed .a crucial row is defined as follows : 1 . 2 . the multi - guess algorithm is given below : + * multi - guess algorithm * * _ step 1 _ run the decoder with guess algorithm until for . * _ step 2 _ evaluate the value of . if , the decoding declares a failure and exits .* _ step 3 _ group the rows with as , where . 
*_ step 4 _ find the `` crucial '' row and guess all erased bits in that row .( there will be at most guesses . )* _ step 5 _ guess one bit with in each of the independent rows , i.e. the rows in . * _ step 6 _ update , and .continue the decoding from step 3 to step 5 until all the erased bits are solved or the decoding can not continue further .the disadvantages of guess and multi - guess algorithms include the decoding complexity and the correctness of the results .the decoding complexity grows exponentially with the number of guesses .it is possible that the group guess declares a wrong value as the result of the decoder .although this kind of situation happens only when the value of is very small , it is still undesirable .let denote the received vector , where .we now devise a reduced complexity algorithm to decode the erased bits by solving the equation [ eq:1 ] using the gaussian reduction method . according to , the optimal decoding is equivalent to solving the linear system , shown in the equation [ eq:1 ] .if the equation [ eq:1 ] has a unique solution , the optimal algorithm is possible .guassian reduction algorithm is considered as the optimal algorithm over the bec .we propose a reduced complexity guassian reduction algorithm in - place algorithm by elimilating the column - permutations required. this algorithm is stated as follows : + * in - place algorithm * * _ step 1 _ the codeword is received and are substituted in positions of erased bits in . starting with one of the erased symbols , , the first equation containing this symbol is flagged that it will be used for the solution of .this equation is subtracted from all other equations containing and not yet flagged to produce a new set of equations .the procedure repeats until either non flagged equations remain containing ( in which case a decoder failure is declared ) or no erased symbols remain that are not in flagged equations . *_ step 2 _ let be the erased symbols at the last flagged equations. in the latter case , starting with this equation is solved to find and this equation is unflagged .this coefficient is substituted back into the remaining flagged equations containing .the procedure now repeats with the second from last flagged eqaution now being solved for .this equation is unflagged and followed by back substitution of for in the remaining flagged equations .erasure correction using in - place algorithm , width=240 ] a block schematic of the decoder is shown in fig.[fig:03 ] .the received bits are stored in the shift register with the erased bits being replaced by the unknown .the gaussian reduced equations are computed and used to define the connection of bit adders from the respective shift register stage to compute the outputs to .the non erased symbols contained in the shift register are switched directly through to the respective output so that the decoded codeword with no erased bits is present at the outputs through to .we evaluated the performance of the recovery algorithm with the lt codes with soliton distribution as described in and irregular ldpc codes .as shown in fig . [ fig:04 ] , the performance of irregular ldpc codes is significantly better than that of the lt codes for the same block length . 
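as a concrete illustration of the in - place idea described above , here is a minimal sketch that recovers erasures by gaussian reduction over gf(2 ) ( it is a simplified dense implementation with our own function name and a toy ( 7,4 ) hamming code , not the authors' flagging and back - substitution procedure ) .

```python
import numpy as np

def solve_erasures(H, received):
    """Recover erased bits by Gaussian reduction over GF(2).

    H        : (m, n) binary parity-check matrix
    received : length-n list with bits 0/1 and None for erasures
    Returns the completed codeword, or None on decoder failure
    (the erasure pattern is not uniquely solvable)."""
    H = np.asarray(H) % 2
    erased = [j for j, b in enumerate(received) if b is None]
    known = [j for j, b in enumerate(received) if b is not None]
    # syndrome contributed by the known bits: H_known * x_known (mod 2)
    x_known = np.array([received[j] for j in known], dtype=int)
    s = (H[:, known] @ x_known) % 2
    A = H[:, erased]                           # equations in the unknowns
    aug = np.concatenate([A, s[:, None]], axis=1)   # augmented system [A | s]
    m, k = A.shape
    row, pivots = 0, []
    for col in range(k):
        pivot = next((r for r in range(row, m) if aug[r, col]), None)
        if pivot is None:
            return None                        # an unknown is left unresolved
        aug[[row, pivot]] = aug[[pivot, row]]  # swap the pivot row into place
        for r in range(m):
            if r != row and aug[r, col]:
                aug[r] ^= aug[row]             # eliminate via XOR of rows
        pivots.append(col)
        row += 1
    solution = list(received)
    for r, col in enumerate(pivots):
        solution[erased[col]] = int(aug[r, k])
    return solution

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
# codeword 1 1 0 0 0 0 1 of this toy code, with bits 2 and 5 erased
print(solve_erasures(H, [1, 1, None, 0, 0, None, 1]))
```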
as a consequence , we use ldpc codes to benchmark the remaining algorithms . ( figure caption : performance of the lt codes and irregular ldpc codes with erasure probability = 0.2 . ) a particularly strong binary code which has a sparse parity - check matrix is the cyclic ldpc code ( 255,175 ) , which has a length of 255 bits after encoding of 175 information bits . since the parity - check polynomial of the ( 255,175 ) code is orthogonal on every bit position , the minimum hamming distance is 17 , one more than the number of ones per row in the parity - check matrix . the applicability of the decoding methods above depends on the error correcting code being used , and specifically on its parity check matrix . the performance of this code for the recovery , the guess and the in - place algorithms is shown in fig . [ fig:06 ] in terms of the probability of decoder error ( fer ) as a function of the erasure probability for every transmitted bit . ( figure captions : performance of the cyclic ldpc ( 255,175 ) with the guess , the multi - guess and the in - place algorithms ; performance of the cyclic ldpc ( 341,205 ) with the recovery , the guess , the multi - guess and the in - place algorithms . ) due to its sparse parity check matrix , the guess algorithm with fewer than 3 guesses can achieve more than 1 order of magnitude improvement compared to the recovery algorithm . in addition , from fig . [ fig:06 ] , we can also see that the curve of the guess algorithm is very close to the curve of the in - place algorithm , which means the guess algorithm is a `` near optimal decoding '' algorithm when the parity check matrix is sparse . fig . [ fig:07 ] shows the performance of the ( 341,205 ) ldpc code with the recovery , the guess , the multi - guess and the in - place algorithms . compared with the results of the recovery and guess algorithms , the multi - guess algorithm can obtain results that are better by several orders of magnitude . for example , when the erasure probability equals 0.3 , the multi - guess algorithm with is one order of magnitude better than the recovery and guess algorithms , and when , the multi - guess algorithm is 2 orders of magnitude better than the recovery and the guess algorithms . as an optimal decoding algorithm , the in - place algorithm can achieve 4 orders of magnitude better performance than the recovery and the guess algorithms . the ultimate performance of the in - place algorithm as a function of the error correcting code is shown in fig . [ fig:6 ] for the example ( 255,175 ) code , which can correct a maximum of 80 erased bits . fig . [ fig:6 ] shows the probability density function of the number of erased bits short of the maximum correctable , which is 80 . the results were obtained by computer simulations . the probability of being able to correct only 68 bits , a shortfall of 12 bits , is . simulations indicate that on average 77.6 erased bits may be corrected for this code .
in comparisonthe bch ( 255,178 ) code having similar rate is also shown in fig .[ fig:6 ] .the bch code has a similar rate but a higher minimum hamming distance of 22 ( compared to 17 ) .it can be seen that it has better performance than the ( 255,175 ) code but it has a less sparse parity check matrix and consequently it is less suitable for recovery algorithm and guess algorithm .moreover the average shortfall in erasures not corrected is virtually identical for the two codes .comparison of probability distribution of number of erased bits not corrected from maximum correctible (n - l ) for ( 255,175 ) code and bch ( 255,178 ) code , width=240 ] the simulation results of using in - place algorithm for the ( 103,52 ) quadratic residue binary code are shown in fig . [ fig:7 ] .the minimum hamming distance for this code is 19 and the results are similar to that of the ( 255,178 ) bch code above .it is found from the simulations that on average 49.1 erasure bits are corrected ( out of a maximum of 51 ) and the average shortfall from the maximum is 1.59 bits .probability distribution of number of erased bits not corrected from maximum correctible (n - l ) for ( 103,52 ) code quadratic redisue code , width=240 ] similarly the results for the extended bch ( 128,64 ) code is shown in fig .[ fig:8 ] .this code has a minimum hamming distance of 22 and has a similar probability density function to the other bch codes above . on average 62.39 erasure bitsare corrected ( out of a maximum of 64 ) and the average shortfall is 1.61 bits from the maximum .probability distribution of number of erased bits not corrected from maximum correctible (n - l ) for ( 128,64 ) extended bch code , width=240 ]in multicast and broadcast information is transmitted in data packets with typical lengths from 30 bits to 1000 bits .these packets could define a symbol from a galois field , viz but with equal to 30 or more up to and beyond 1000 bits this is impracticable and it is more convenient to use a matrix approach with the packets forming the rows of the matrix and columns of bits encoded using an error correcting code . usually , but not essentially the same code would be used to encode each column of symbols .the matrix of symbols may be defined as : [ cols= " < , < , < " , ] there are a total of information symbols which encoded using the parity check equations of a selected code into a total number of transmitted symbols equal to .the symbols are transmitted in a series of packets with each packet corresponding to a row as indicated above .for example the row : is transmitted as a single packet .self contained codewords are encoded from each column of symbols . for example the information symbols of one codeword and the remaining symbols , are the parity symbols of that codeword . 
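a minimal sketch of the packet - matrix layout just described , in which each received packet is a row and each column is reassembled into a codeword with erasures wherever a packet is missing , is given below ( the function name , the dictionary input format and the toy sizes are our own illustrative choices ) .

```python
def columns_with_erasures(received_packets, n):
    """Re-assemble column codewords from received packets.

    received_packets : dict row_index -> list of column bits (the packet);
                       rows lost in transit are simply absent
    n                : code length, i.e. the total number of packet rows
    Each returned column is a length-n word with None marking the bits
    that belong to missing packets; it can then be handed to any of the
    erasure decoders discussed above."""
    n_cols = len(next(iter(received_packets.values())))
    columns = []
    for c in range(n_cols):
        col = [received_packets[r][c] if r in received_packets else None
               for r in range(n)]
        columns.append(col)
    return columns

# 4 packets of 3 bits each were sent; packet 1 was lost in transit
packets = {0: [1, 0, 1], 2: [0, 0, 1], 3: [1, 1, 0]}
for col in columns_with_erasures(packets, n=4):
    print(col)    # every column has an erasure in row 1
```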
as a result of network congestion , drop outs , loss of radio links or other multifarious reasons not all of the transmitted packets are received .the effect is that some rows above are erased .the decoding procedure is that codewords are assemble from the received packets with missing symbols corresponding to missing packets marked as .for example , if the second packet only is missing above : * the first received codeword corresponds to the first column above and is * the second codeword corresponding to the first column above and is and so on .all the algorithms stated in section 2 may be used to solve for the erased symbols in the first received codeword , and for the erased symbol in the second received codeword and so on up to the codeword ( column ) solving for . as an example, the binary , extended bch code could be used to encode the information data .the packet length is chosen to be 100 bits , and the total transmission could consist of 128 transmitted packets ( 12,800 bits total ) containing 6,400 bits of information . on average as soon as any 66 packets from the original 128 packets have been received , the remaining 62 packets are treated as if they are erased .100 codewords are assembled , decoded with the erasures solved and the 6,400 bits of information retrieved .one advantage is that a user does not have to wait until the entire transmission has been received .in this paper , we presented different decoding algorithms of ldpc codes over the bec : recovery , guess , multi - guess and in - place algorithms .the multi - guess algorithm is an extension to guess algorithm , which can push the limit to break the stopping sets .we show that guess and multi - guess algorithms are parity - check matrix dependent . for the codes with sparse parity - check matrix , guess and multi - guess algorithms can be considered as `` near - optimal decoding methods '' . on the other hand ,in - place algorithm is not .it s an optimal method for the bec and able to correct erasures , where is a small positive integer . c. di , d. proietti , i. e. telatar , t. richardson , and r. urbanke , `` finite - length analysis of low - density parity - check codes on the binary erasure channel , '' _ ieee tran .inform . theory _1570 1579 , june 2002 .a. m. odlyzko , `` discrete logarithms in finite fields and their cyptographic significance , '' in advances in cyptology : proceeding of eurocrypt84 , t. beth , n. cot , and i. ingemarssen , eds .berlin , germany : springer - verlag , 1985 , vol .224 314 .
this paper investigates the decoding of binary linear block codes over the binary erasure channel ( bec ) . among the current iterative decoding algorithms for this channel , we review the recovery algorithm and the guess algorithm . we then present a multi - guess algorithm , extended from the guess algorithm , and a new algorithm , the in - place algorithm . the multi - guess algorithm can push the limit further in breaking stopping sets . however , the performance of the guess and the multi - guess algorithms depends on the parity - check matrix of the code . simulations show that we can decrease the frame error rate by several orders of magnitude using the guess and the multi - guess algorithms when the parity - check matrix of the code is sparse . the in - place algorithm can obtain better performance even if the parity - check matrix is dense . we consider the application of these algorithms in the implementation of multicast and broadcast techniques on the internet . using these algorithms , a user does not have to wait until the entire transmission has been received .
image segmentation is a fundamental problem in computer vision . despite many years of research , general - purpose image segmentation is still a very challenging task because segmentation is inherently ill - posed . to improve the results of image segmentation ,much attention has been paid to constrained image segmentation , where certain constraints are initially provided for image segmentation . in this paper , we focus on constrained image segmentation using pairwise constraints , which can be derived from the initial labels of selected pixels . in general , there exist two types of pairwise constraints : must - link constraint denotes a pair of pixels belonging to the same image region , while cannot - link constraint denotes a pair of pixels belonging to two different image regions . in previous work , such weak supervisory information has been widely used to improve the performance of machine learning and pattern recognition in many challenging tasks .the main challenge in constrained image segmentation is how to effectively exploit a limited number of pairwise constraints for image segmentation .a sound solution is to perform pairwise constraint propagation to generate more pairwise constraints .although many pairwise constraint propagation methods have been developed for constrained clustering , they mostly have a polynomial time complexity and thus are not much suitable for segmentation of images even with a moderate size ( e.g. pixels ) , which is actually equivalent to clustering with a large data size ( i.e. ) . for constrained image segmentation , we need to develop more efficient pairwise constraint propagation methods , instead of directly utilizing the existing methods like . here, it is worth noting that even the simple assignment operation incurs a large time cost of if we perform pairwise constraint propagation over all the pixels , since the number of all possible pairwise constraints is .the unique choice is to propagate the initial pairwise constraints _ only to a selected subset of pixels _ , but not across the whole image .fortunately , the local homogeneousness of a natural image provides kind of supports for this choice , i.e. , a selected subset of pixels may approximate the whole image ( such downsampling is widely used in image processing ) .more importantly , the selectively propagated constraints over a selected subset of pixels are enough for achieving a good quality of image segmentation , as verified by our later experimental results . hence , in this paper , we develop a selective constraint propagation ( scp ) method for constrained image segmentation , whichpropagates the initial pairwise constraints only to a selected subset of pixels ( _ the first meaning _ of our selective constraint propagation ) .although there exist different sampling methods in statistics , we only consider random sampling for efficiency purposes , i.e. , the subset of pixels used for selective constraint propagation are selected randomly from the whole image . in this paper , we formulate our selective constraint propagation as a graph - based learning problem which can be efficiently solved based on the label propagation technique . to further speed up our algorithm, we also discard those less important propagated constraints during iteration of graph - based learning , which is _ the second meaning _ of our selective constraint propagation . 
to the best of our knowledge ,we have made the first attempt to develop a selective constraint propagation method for constrained image segmentation .finally , the selectively propagated constraints obtained by our selective constraint propagation are exploited to adjust the original weight matrix based on optimization techniques , in order to ensure that the new weight matrix is as consistent as possible with the selectively propagated constraints . in this paper, we formulate such weight adjustment as an _problem , which can be solved efficiently due to the limited number of selectively propagated constraints .the obtained new weight matrix is then applied to normalized cuts for image segmentation .the flowchart of our selective constraint propagation for constrained image segmentation is illustrated in figure [ fig.1 ] .although our selective constraint propagation method is originally designed for constrained image segmentation , it can be readily extended to other challenging tasks ( e.g. , semantic image segmentation and multi - face tracking ) where only a limited number of pairwise constraints are provided initially .it should be noted that the present work is distinctly different from previous work on constrained image segmentation . in ,only linear equality constraints ( analogous to must - link constraints ) are exploited for image segmentation based on normalized cuts . in , although more types of constraints are exploited for image segmentation , the linear inequality constraints analogous to cannot - link constraints are completely ignored just as .in contrast , our selective constraint propagation method exploits both must - link and cannot - link constraints for normalized cuts .more notably , as shown in our later experiments , our method _ obviously outperforms _ due to extra consideration of cannot - link constraints for image segmentation .the remainder of this paper is organized as follows . in section 2, we develop a selective constraint propagation method which propagates the initial pairwise constraints only to a selected subset of pixels . in section 3 ,the selectively propagated constraints are further exploited to adjust the original weight matrix based on -minimization for normalized cuts .finally , sections 4 and 5 give the experimental results and conclusions , respectively .this section presents our selective constraint propagation ( scp ) in detail .we first give our problem formulation for propagating the initial pairwise constraints only to a selected subset of pixels from a graph - based learning viewpoint , and then develop an efficient scp algorithm based on the label propagation technique . in this paper ,our goal is to propagate the initial pairwise constraints to a selected subset of pixels for constrained image segmentation . to this end, we need to first select a subset of pixels for our selective constraint propagation .although there exist different sampling methods in statistics , we only consider random sampling for efficiency purposes , i.e. , the subset of pixels are selected randomly from the whole image . 
in the following , the problem formulation for selective constraint propagation over the selected subset of pixels is elaborated from a graph - based learning viewpoint .let denote the set of initial must - link constraints and denote the set of initial cannot - link constraints , where is the region label assigned to pixel and is the total number of pixels within an image .the set of constrained pixels is thus denoted as with .moreover , we randomly select a subset of pixels with , and then form the final selected subset of pixels used for our selective constraint propagation as with . in this paper, we construct a -nearest neighbors ( -nn ) graph over all the pixels so that the normalized cuts for image segmentation can be performed efficiently over this -nn graph .let its weight matrix be .we define the weight matrix over the selected subset of pixels as : where denotes the -th member of .the normalized laplacian matrix is then given by where is an identity matrix and is a diagonal matrix with its -th diagonal entry being the sum of the -th row of .moreover , for convenience , we represent the two sets of initial pairwise constraints and using a single matrix as follows : based on the above notations , the problem of selective constraint propagation over the selected subset of pixels can be formulated from a graph - based learning viewpoint : where and denote the positive regularization parameters , denotes the frobenius norm of a matrix , and denotes the trace of a matrix . here , it is worth noting that the above problem formulation actually imposes both _ vertical and horizontal _ constraint propagation upon the initial matrix .that is , each column ( or row ) of can be regraded as the initial configuration of a _ two - class label propagation _problem , which is formulated just the same as .moreover , in this paper , we assume that the vertical and horizontal constraint propagation have the same importance for constrained image segmentation .the objective function given by eq .( [ eq : intracp ] ) is further discussed as follows .the first and second terms are related to the _ vertical _ constraint propagation , while the third and fourth terms are related to the _ horizontal _ constraint propagation .the fifth term then ensures that the solutions of these types of constraint propagation are as approximate as possible .let and be the best solutions of vertical and horizontal constraint propagation , respectively .the best solution of our selective constraint propagation is defined as : as for the second and fourth terms , they are known as laplacian regularization , which means that and should not change too much between similar pixels . such laplacian regularization has been widely used for different graph - based learning problems in the literature . to apply our selective constraint propagation ( scp ) to constrained image segmentation, we have to concern the following key problem : _ how to solve eq .( [ eq : intracp ] ) efficiently_. fortunately , due to the problem formulation from a graph - based learning viewpoint , we can develop an efficient scp algorithm using the label propagation technique based on -nn graph over .the proposed scp algorithm will be elaborated in the next subsection .let denote the objective function in eq .( [ eq : intracp ] ) . the alternate optimization technique can be adopted to solve as follows : 1 ) fix , and perform the vertical propagation by ; 2 ) fix , and perform the horizontal propagation by . 
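the ingredients above ( the k - nn weight matrix over the selected pixels , its normalized laplacian , and the initial constraint matrix ) can be assembled as in the following sketch . this is only an illustrative implementation under our own assumptions : pixels are represented by a flattened feature array , the heat - kernel weighting and all variable names are ours , and scikit - learn / scipy are used for the k - nn graph and the sparse algebra .

import numpy as np
from scipy.sparse import identity, diags
from sklearn.neighbors import kneighbors_graph

def build_graph_and_constraints(features, constrained_idx, labels, n_random, k=10, rng=None):
    # features: (n_pixels, d) array of per-pixel descriptors (e.g. smoothed l*a*b* values)
    # constrained_idx: indices of the initially labelled pixels
    # labels: region labels of those pixels (must-link if equal, cannot-link otherwise)
    rng = np.random.default_rng(rng)
    n = features.shape[0]

    # randomly sample extra pixels and merge them with the constrained ones
    random_idx = rng.choice(n, size=n_random, replace=False)
    selected = np.unique(np.concatenate([constrained_idx, random_idx]))

    # symmetric k-nn weight matrix over the selected subset (heat-kernel weights, our choice)
    W = kneighbors_graph(features[selected], n_neighbors=k, mode='distance', include_self=False)
    W.data = np.exp(-W.data**2 / (2.0 * np.median(W.data)**2))
    W = 0.5 * (W + W.T)

    # normalized affinity and normalized laplacian  L = I - D^{-1/2} W D^{-1/2}
    d = np.asarray(W.sum(axis=1)).ravel()
    D_inv_sqrt = diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    W_norm = D_inv_sqrt @ W @ D_inv_sqrt
    L = identity(len(selected)) - W_norm

    # initial constraint matrix Z over the selected subset:
    # +1 for must-link pairs, -1 for cannot-link pairs, 0 otherwise
    pos = {p: i for i, p in enumerate(selected)}
    Z = np.zeros((len(selected), len(selected)))
    for a_i, a_l in zip(constrained_idx, labels):
        for b_i, b_l in zip(constrained_idx, labels):
            if a_i != b_i:
                Z[pos[a_i], pos[b_i]] = 1.0 if a_l == b_l else -1.0
    return selected, W, W_norm, L, Z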
*vertical propagation : * when is fixed at , the solution of can be found by solving , which is actually equivalent to , where and . since is positive definite , the above linear equation has a solution : . however , this analytical solution is not efficient at all for constrained image segmentation , since the matrix inverse incurs a large time cost of . in fact , this solution can also be _ efficiently found using the label propagation technique _ based on -nn graph over ( see the scp algorithm outlined below ) .

*horizontal propagation : * when is fixed at , the solution of can be found by solving , which is actually equivalent to . this linear equation can also be efficiently solved using the label propagation technique based on -nn graph , similar to what we do for eq . ( [ eq : lev ] ) . since the weight matrix over is derived from the original weight matrix of the -nn graph constructed over all the pixels according to eq . ( [ eq : knnwt ] ) , it can be regarded as the weight matrix of a -nn graph constructed over . hence , we can adopt the label propagation technique to efficiently solve both eq . ( [ eq : lev ] ) and eq . ( [ eq : leh ] ) . moreover , to speed up our selective constraint propagation , we also discard those less important propagated constraints during both vertical and horizontal propagation . that is , the two matrices and are forced to become sparser and thus less computational load is needed during each iteration .

the complete scp algorithm is outlined as follows :

( 1 ) compute , where is given by eq . ( [ eq : lap ] ) ;
( 2 ) initialize , , and ;
( 3 ) discard those less important propagated constraints with during the vertical propagation , where we set in this paper ;
( 4 ) , where and ;
( 5 ) iterate steps ( 3)-(4 ) for the vertical propagation until convergence at ;
( 6 ) discard those less important propagated constraints with during the horizontal propagation ;
( 7 ) ;
( 8 ) iterate steps ( 6)-(7 ) for the horizontal propagation until convergence at ;
( 9 ) iterate steps ( 3)-(8 ) until the stopping condition is satisfied , and output the solution .

similar to , the iteration in step ( 4 ) converges to , which is equal to the solution ( [ eq : lp ] ) given that and . moreover , in our later experiments , we find that the iterations in steps ( 5 ) , ( 8 ) and ( 9 ) generally converge in very limited steps ( ) .
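the alternating scheme just outlined can be condensed into a few lines . the sketch below assumes the standard label propagation update f <- alpha * s * f + ( 1 - alpha ) * y , with s the normalized affinity from the previous sketch ; the threshold used to drop weak propagated constraints and the way the two propagation directions are coupled through y are our own illustrative choices , not the values used in the paper .

import numpy as np

def selective_constraint_propagation(W_norm, Z, alpha=0.5, eps=1e-2, tol=1e-6,
                                     max_outer=10, max_inner=200):
    # W_norm: normalized affinity over the selected pixels (a dense array here for simplicity)
    # Z:      initial constraint matrix (+1 must-link, -1 cannot-link, 0 unknown)
    Fv = np.array(Z, dtype=float)   # vertical propagation result
    Fh = np.array(Z, dtype=float)   # horizontal propagation result
    for _ in range(max_outer):
        # vertical propagation: propagate the columns along the graph
        Yv = 0.5 * (Z + Fh)
        for _ in range(max_inner):
            Fv_new = alpha * (W_norm @ Fv) + (1.0 - alpha) * Yv
            Fv_new[np.abs(Fv_new) < eps] = 0.0        # drop weak propagated constraints
            done = np.max(np.abs(Fv_new - Fv)) < tol
            Fv = Fv_new
            if done:
                break
        # horizontal propagation: the same update applied to the rows
        Yh = 0.5 * (Z + Fv)
        for _ in range(max_inner):
            Fh_new = alpha * (Fh @ W_norm) + (1.0 - alpha) * Yh
            Fh_new[np.abs(Fh_new) < eps] = 0.0
            done = np.max(np.abs(Fh_new - Fh)) < tol
            Fh = Fh_new
            if done:
                break
        if np.max(np.abs(Fv - Fh)) < tol:             # stopping condition: both directions agree
            break
    return 0.5 * (Fv + Fh)                            # final selectively propagated constraints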
finally , based on -nn graph , our scp algorithm has a time cost of ( ) .hence , it can be considered to provide an efficient solution ( note that even a simple assignment operation on incurs a time cost of ) .in this section , we discuss how to exploit the selectively propagated constraints stored in the output of our scp algorithm for image segmentation based on normalized cuts .the basic idea is to adjust the original weight matrix over the selected subset of pixels using these selectively propagated constraints .the problem of such weight adjustment for normalized cuts can be formulated as : where is the new weight matrix over , and is the regularization parameter .it is worth noting that the new weight matrix is actually derived as a _ nonnegative fusion _ of and by solving the above -minimization problem .more notably , the _ -norm regularization _term can force the new weight matrix not only to approach but also to become as sparse as , given that can be regarded as the weight matrix ( thus sparse ) of a -nn graph constructed over .the problem given by eq .( [ eq : wtadjust ] ) can be solved based on some basic -minimization techniques .in fact , it has an explicit solution : where is a soft - thresholding function . here, we directly define as : where and . since the -minimization problem given by eq .( [ eq : wtadjust ] ) is limited to , finding the best new weight matrix incurs a time cost of ( ) .once we have found the best new weight matrix over the selected subset of pixels , we can derive the new weight matrix over all the pixels from the original weight matrix of the -nn graph as : where denotes the -th member of .this new weight matrix over all the pixels is then applied to normalized cuts for image segmentation .the full algorithm for constrained image segmentation can be summarized as follows : ( 1 ) : : generate the selectively propagated constraints using our scp algorithm proposed in section 2 ; ( 2 ) : : adjust the original weight matrix by exploiting the selectively propagated constraints according to eq .( [ eq : wtbest ] ) ; ( 3 ) : : perform normalized cuts with the adjusted new weight matrix for image segmentation . as we have mentioned , steps ( 1 ) and ( 2 ) incur a time cost of and ( ) , respectively .moreover , since the adjusted new weight matrix is very sparse , step ( 3 ) can be performed efficiently . here, it is worth noting that the most time - consuming component of normalized cuts ( i.e. , eigenvalue decomposition ) has a linear time cost when the weight matrix is very sparse . in summary , our algorithm runs very efficiently for constrained image segmentation .in this section , our algorithm is evaluated in the task of constrained image segmentation .we first describe the experimental setup , including information of the feature extraction and the implementation details .moreover , we compare our algorithm with other closely related methods . 
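before describing the experimental setup , we give a compact sketch of the weight adjustment and lifting steps above . the closed - form solution quoted in the text involves quantities whose definitions are omitted here , so the sketch instead shows one common soft - thresholding construction for a nonnegative , sparse fusion of the original weights and the propagated constraints ; how the two are combined and the parameter names are our own choices , and it should be read as a plausible stand - in rather than the paper 's exact update .

import numpy as np

def soft_threshold(x, t):
    # elementwise shrinkage operator, the proximal map of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def adjust_weights(W_sel, F, gamma=0.5, lam=0.1):
    # W_sel: original affinities over the selected pixels (dense array here for simplicity)
    # F:     propagated constraints in [-1, 1]; positive values favour merging, negative ones separation
    # gamma: strength of the constraint adjustment (our parameter)
    # lam:   l1 shrinkage level controlling the sparsity of the new weight matrix (our parameter)
    W_new = soft_threshold(W_sel + gamma * F, lam)
    return np.maximum(W_new, 0.0)        # keep the fused weights nonnegative

def lift_to_full_image(W_full, selected, W_new_sel):
    # copy the adjusted weights back into the k-nn weight matrix over all pixels
    W_out = W_full.tolil(copy=True)
    for a, i in enumerate(selected):
        for b, j in enumerate(selected):
            if W_new_sel[a, b] != 0.0:
                W_out[i, j] = W_new_sel[a, b]
    return W_out.tocsr()                 # pass this sparse matrix to normalized cuts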
for segmentation evaluation , we select 50 images from the berkeley segmentation database ( along with ground truth segmentations ) , and some sample images are shown in figures [ fig.2 ] and [ fig.4 ] . it can be observed that these selected images generally have _ confusing backgrounds _ , such as the penguin and kangaroo images . furthermore , we consider a 6-dimensional vector of color and texture features for each pixel of an image just as . the three color features are the coordinates in the l*a*b* color space , which are smoothed to avoid over - segmentation arising from local color variations due to texture . the three texture features are contrast , anisotropy , and polarity , which are extracted at an automatically selected scale . the segmentation results are measured by the adjusted rand ( ar ) index , which takes values between -1 and 1 ; a higher ar score indicates that a higher percentage of pixel pairs in a test segmentation have the same relationship ( joined or separated ) as in the ground truth segmentation . we do not consider the original rand index for segmentation evaluation , since there exists a problem with this measure . in the following , our normalized cuts with selective constraint propagation ( ncuts_scp ) is compared with three closely related methods : normalized cuts with linear equality constraints ( ncuts_lec ) , normalized cuts with spectral learning ( ncuts_sl ) , and standard normalized cuts ( ncuts ) . here , we do not make a comparison to other constraint propagation methods , since they have a polynomial time complexity and are not suitable for image segmentation .

[ table : the average segmentation results achieved by our ncuts_scp algorithm with a varied number of pixels being selected randomly . ]

we further illustrate the selectively propagated constraints obtained by our ncuts_scp algorithm with in figure [ fig.3 ] . here , to explicitly represent the selectively propagated constraints , we need to infer the labels of the randomly selected pixels from them . in fact , this inference can be done by simple voting according to the output of our scp , with the initial set of labeled pixels being regarded as the voters . once we have inferred the labels of the randomly selected pixels , we can display them by marking the pixels within the object and the background by a blue ` o ' and a yellow ` x ' , respectively . from figure [ fig.3 ] , we find that the selectively propagated constraints obtained by our ncuts_scp algorithm are mostly consistent with the ground truth segmentation . the comparison between different image segmentation methods is listed in table [ table.2 ] . meanwhile , this comparison is also illustrated in figures [ fig.2 ] and [ fig.4 ] . here , the segmentation results are evaluated by both the ar index and the running time averaged over all the images . in particular , we collect the running time by running all the algorithms ( matlab code ) on a computer with a 3ghz cpu and 32 gb ram . the immediate observation is that our ncuts_scp algorithm significantly outperforms the other three methods in terms of the ar index . since our ncuts_scp algorithm incurs a time cost comparable to the closely related methods , it is preferred for constrained image segmentation by an _ overall consideration _ .
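for completeness , the ar scores reported in this comparison can be computed off the shelf ; a minimal sketch , assuming two integer label maps of the same shape ( the function and variable names are ours ) :

import numpy as np
from sklearn.metrics import adjusted_rand_score

def evaluate_segmentation(pred_labels, gt_labels):
    # pred_labels, gt_labels: integer region labels per pixel, same shape
    # the adjusted rand index is chance-corrected: near 0 for random partitions, 1 for identical ones
    return adjusted_rand_score(np.ravel(gt_labels), np.ravel(pred_labels))

# average over a set of test images:
# scores = [evaluate_segmentation(p, g) for p, g in zip(predictions, ground_truths)]
# mean_ar = np.mean(scores)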
in addition , it can be clearly observed that ncuts_scp , ncuts_lec , and ncuts_sl lead to better results than the standard ncuts due to the use of constraints for image segmentation .in this paper , we have investigated the challenging problem of pairwise constraint propagation in constrained image segmentation . considering the local homogeneousness of a natural image , we choose to perform pairwise constraint propagation only over a selected subset of pixels .moreover , we solve such selective constraint propagation problem by developing an efficient graph - based learning algorithm .finally , the selectively propagated constraints are used to adjust the weight matrix based on -minimization for image segmentation .the experimental results have shown the promising performance of the proposed algorithm for constrained image segmentation . for future work, we will extend the proposed algorithm to other challenging tasks such as semantic segmentation and multi - face tracking .s. hoi , w. liu , m. lyu , and w .- y . ma . learning distance metrics with contextual constraints for image retrieval . in _ proc .ieee conference on computer vision and pattern recognition _ ,pages 20722078 , 2006 .d. klein , s. kamvar , and c. manning . from instance - level constraints to space - level constraints : making the most of prior knowledge in data clustering . in _ proc .international conference on machine learning _ , pages 307314 , 2002 .z. li , j. liu , and x. tang . pairwise constraint propagation by semidefinite programming for semi - supervised classification . in _ proc .international conference on machine learning _ , pages 576583 , 2008 .d. martin , c. fowlkes , d. tal , and j. malik .a database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics . in _ proc .cvpr _ , volume 2 , pages 416423 , 2001 .e. xing , a. ng , m. jordan , and s. russell .distance metric learning with application to clustering with side - information . in _ advances in neural information processing systems 15_ , pages 505512 , 2003 .
this paper presents a novel selective constraint propagation method for constrained image segmentation . in the literature , many pairwise constraint propagation methods have been developed to exploit pairwise constraints for cluster analysis . however , since most of these methods have a polynomial time complexity , they are not well suited to the segmentation of images even of moderate size , which is actually equivalent to cluster analysis with a large data size . considering the local homogeneity of a natural image , we choose to perform pairwise constraint propagation only over a selected subset of pixels , but not over the whole image . such a selective constraint propagation problem is then solved by an efficient graph - based learning algorithm . to further speed up our selective constraint propagation , we also discard those less important propagated constraints during graph - based learning . finally , the selectively propagated constraints are exploited , via l1 - minimization , to adjust the weight matrix used by normalized cuts over the whole image . the experimental results demonstrate the promising performance of the proposed method for segmentation with selectively propagated constraints .
the fundamental laws of physics can ( without exceptions ) be related to certain continuous symmetries . in other words , by requiring that a model should be invariant with respect to a certain symmetry , the model is more or less completely determined .the standard model of particle physics and gravitation are examples of such theories . as an example , a model with a complex scalar field , i.e. , a model of charged bosons , and a requirement of local -invariance , or gauge invariance ,will immediately yield the maxwell - klein - gordon ( mkg ) theory , which in the non - relativistic limit reduces to maxwell - schrdinger theory .in addition to defining the theory , the continuous symmetries give rise to conserved quantities through noether s theorem(s ) , and the local -symmetry of the mkg - model ensures the conservation of local electric charge . in particle physics , and especially in the qcd - part of the standard model , numerical calculationsare often done using lattice gauge theory ( lgt ) .this is a numerical procedure , actually motivated from the continuous theory , designed to preserve the underlying continuous gauge symmetry . in a previous articlethis discretization scheme was applied to the mkg - equation , with emphasis on the continuous -symmetry and conservation laws deduced from discrete versions of noether s theorem(s ) . by preserving the -symmetry of the mkg - model on the discrete level , a discrete equivalent of the conservation of local electric charge is immediate , which not only makes the scheme consistent , but is also a good indicator of stability . by a more standard discretization of the model , the local -symmetryis broken , which again implies that the scheme is not consistent with the continuous formulation .this will also reveal itself through the fact that the physical observables calculated are dependent on the gauge chosen , obviously in conflict with the continuous model .a similar breaking of the continuous local -symmetry has been a known issue with discretizations of the schrdinger equation coupled to an external electromagnetic field .for example in atomic physics , results are known to depend on the gauge in which the calculation is done a most unfortunate situation indicating that the calculations are not correct .gauge dependence also leads to interpretation problems of the results .it is the goal of the present paper to show how gauge invariant discretizations may be built from existing ones with little or no extra effort in the implementations .simple gauge invariant grid discretizations of schrdinger operators have been studied in previous articles in the lgt formalism with promising results .the key to success in lgt is that it does not approximate the covariant derivative as a linear combination of the gradient and the gauge potential , an element of the lie - algebra under consideration , since such an approximation leads to non - local terms when discretizing the gradient , and a question of gauge invariance is meaningless since one compare fields at different spacetime points .instead lgt uses forward - euler / central difference approximation of the gradient in the various directions , which as argued is not gauge invariant , and then defines the covariant derivative through the way non - local terms are made gauge invariant in the continuous theory .this is done via the wilson line , to be discussed in the next section , which effectively localize non - local terms by parallel transport with the gauge potential as a key ingredient . 
by defining the covariant derivative in this way , the discrete theory is immediately manifestly gauge invariant .the aim of this article is to expand the lgt formulation to allow for completely general grid discretizations in arbitrary local coordinates of the spatial manifold .grid discretizations are widely used , and include most numerical discretizations of schrdinger operators in use today , such as pseudospectral methods based on the discrete fourier transform or chebyshev polynomials .we also generalize the discussion to arbitrary coordinate systems , and some care is needed in case some of the coordinates are periodic when using global approximations ( e.g. , chebyshev or fourier expansions ) .the paper is organized as follows : in section [ sec : schroedinger ] we introduce the time dependent schrdinger equation in general coordinates . in section [ sec : discretization ] and [ sec : covariant ] we discuss gauge invariant spatial grid discretizations .we proceed in section [ sec : time ] to discuss gauge invariant time integration .finally , in section [ sec : results ] we present some numerical results shedding light on the difference between gauge invariant and gauge dependent schemes , before we close with some concluding remarks in section [ sec : conclusion ] .we consider a particle with charge and mass coupled to an external electromagnetic ( em ) field .this is a semiclassical approach because the em - field is obviously affected by the particle , but if we assume that the coupling is weak the approximation can be justified .we will work in the non - relativistic regime , but our considerations could easily be transmitted to a relativistic model .moreover , the generalization to more than one particle is straightforward , since the em fields only enter a many - body hamiltonian at the one - body level , i.e. , the interparticle interactions are independent of the em fields .we are considering a spacetime domain , with coordinates , where is the time coordinate , and is a point in the spatial domain , usually taken to be euclidean space , but can in general be a riemannian manifold . in any case , we may work in local coordinates , viz , , with the induced metric tensor assumed to be time - independent .the wavefunction at some time is then a complex valued scalar function .in addition , the em - field is described by a gauge potential , where is a real valued function and is a real valued one - form . in coordinate basis one usually identifies one - forms with vectors .thus , if are basis one - forms and are basis vectors there is a one - to - one correspondence between and .note , we use the einstein summation convention except where noted. the components of and are related by the metric , i.e. , and the physical em fields are given by where we use the shorthand . in the following we work in units such that .the dynamics of the system is governed by the time dependent schrdinger equation reading \psi(t , x ) , \ ] ] where is the covariant derivative in the temporal direction , and where the `` covariant laplace - beltrami '' operator is defined by with being the covariant derivative in the direction .moreover , is the ( generalized ) canonical momentum operator .the term is simply the kinetic energy operator , and is an external potential .a fundamental property of the time dependent schrdinger equation is that it is invariant under local gauge transformations , i.e. 
equation is invariant under the following set of transformations where is a real valued function meaning that , the lie - algebra of ( consult e.g. for theory on lie - groups and lie - algebras ) .one says that the theory is invariant under local -transformations , meaning that the physical observables are not affected by the transformations .in particular , we note that the electric and magnetic fields are not affected by the transformations . moreover ,if ] still depends continuously on time. we shall here consider grids which in the local coordinates are cartesian products of one - dimensional grids , i.e. , where figure [ fig : grid ] illustrates this in the case of polar coordinates in the plane .we may list the elements of using multi - indices , i.e. , and the multi - indices may again be mapped one - to - one with , where is the total number of grid points .thus , we obtain a discrete hilbert space ] is the standard forward difference operator .we could just as well consider the limit for any local discrete differentiation operator . introducing a comparator function with the transformation law we see that for any , the function transforms in the same way as .explicitly , the comparator is given by for any finite , consider again the discrete difference operator applied to , but acting on the variable , i.e. , (x_k ) .\label{eq : local - discrete - cov}\ ] ] the notation implies that the discrete derivative is evaluated at .this operator is obviously gauge invariant , so that the path from to is in general ambiguous if is a circle : we may move either clockwise or anti - clockwise , and also through several revolutions before ending up at .however , as , we desire our discrete to converge to . the truncation error does not vanish unless we choose the _ shortest _ path , with vanishing length .. implies that for any smooth , (x ) = d_a\psi(x ) + o(h^p).\ ] ] the fact that was a _local _ approximation to the derivative was crucial above , as it allowed us to resolve path ambiguity .approximation this is not the case .a global approximation to in general has _ exponential _ order of approximation as it utilizes _ all _ grid points to estimate the derivative .that is , for any smooth , the truncation error is , i.e. so that the order of approximation in fact increases as .as may depend on path , and as and have arbitrary separation in a global method , the limit does not resolve the path ambiguity .the only way to overcome this , is to ensure that the comparator itself is path - independent .this is the case if and only if for some smooth , since then the fundamental theorem of analysis yields independently of the path taken . for being a circle , this means that must be the derivative of a _ periodic _ function .we therefore decompose the one - form as where is a constant such that for some smooth .formally , is the projection of onto the orthogonal complement of the range , i.e. , .the decomposition is unique and it always exists .we may say that is the `` largest part of that may be transformed away . ''we then obtain for the covariant derivative .it is clear that if is nonzero , it may never be transformed away using a gauge transformation . in the case of being an interval , , implying , and we get where any anti - derivative of may be used . for being a circle , however , since is not periodic unless .it is straightforward to show that we define a modified path independent comparator given by .\ ] ] since it is independent of path , we may write where is any reference point . 
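to make the comparator construction concrete , consider a uniform one - dimensional grid with spacing h and the convention d = \partial_x - i q a for the covariant derivative ( sign conventions vary ) . the discrete covariant forward difference then reads ( d\psi)_k \approx ( u_k \psi_{k+1} - \psi_k ) / h with the comparator u_k \approx \exp ( - i q h a_k ) , so that under a gauge transformation the whole expression only picks up the local phase \exp ( i q \chi ( x_k ) ) . a minimal python sketch of this , with periodic wrap - around at the boundary and function names of our own choosing :

import numpy as np

def covariant_forward_difference(psi, A, h, q=1.0):
    # psi: complex wave function sampled on a uniform grid x_k
    # A:   gauge potential sampled on the same grid
    # the comparator U_k ~ exp(-i q h A_k) parallel-transports psi(x_{k+1}) back to x_k,
    # so the difference compares quantities with the same gauge transformation law
    U = np.exp(-1j * q * h * A)
    psi_shift = np.roll(psi, -1)          # psi(x_{k+1}); periodic wrap at the boundary
    return (U * psi_shift - psi) / h

# gauge covariance check: transforming psi -> exp(i q chi) psi and A -> A + dchi/dx
# multiplies the covariant difference by the same local phase, up to discretization error.

for the global ( e.g. pseudospectral ) derivatives discussed above , the same comparator idea applies after splitting off the non - removable part of the gauge potential , as described in the text .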
combining eqns .( [ eq : with - constant ] ) and ( [ eq : local - discrete - cov ] ) we get (x_k ) - { \mathrm{i}}q a_0 \psi(x_k ) , \label{eq : global - discrete - cov}\ ] ] and using eqn .we may rewrite this as which may be a more practical expression to implement . this covariant derivative is valid for any one - dimensional manifold topology and any discretization of the derivative , and we note that in particular for , it is equivalent to the original expression for local discrete derivatives . for global methods ,the one - dimensional case necessitated the computation of given by eqn . .as the covariant derivative in this case was gauge equivalent to using the nave discretization with a constant , one may wonder what we have to gain from the approach in this case : why not use the standard nave discretization using this particular , and physically equivalent , gauge ?most manifolds are , however , not one - dimensional . in this section , the case of being a general -dimensional manifold is treated by simply defining for each spatial direction . in this caseit is in general _ not _ true that the method becomes gauge equivalent to a nave discretization : we may not find an such that the problem may be solved with a nave discretization .said in another way , on , the splitting becomes where is the part of which may not be transformed away , and this is of course not a constant function in general . in the direction , at the point , the continuous covariant derivativeis given by being an operator that constructs the component of a one - form field , i.e. , \psi(x) ] , and use a finite difference discretization with equally spaced grid points , .the grid spacing is given by .thus , at a time , is our discrete wave function .a common situation in atomic physics arise when one considers the so - called dipole approximation , in which the fields and take the form corresponding to a time - dependent electric field and .these fields are of course not solutions of maxwell s equations . the particular gauge in eqn .is referred to as the length - gauge .the so - called velocity gauge is obtained by transforming away using the gauge parameter , i.e. , it is the temporal gauge .we obtain the wave functions in the two gauges are of course related by .these two gauges are commonly studied , and may give different results in actual calculations ; a sure sign of a significant error . a common choice for is describing an oscillating electric field with frequency .using finite differences , a typical nave semi - discretization of eqn .is \psi(t , x_j ) \notag \\ & = & \frac{1}{2}\left[-\delta^+\delta^- + { \mathrm{i}}(a\delta^- + \delta^+ a ) + a^2 + 2\phi\right]\psi(t , x_j ) .\label{eq : se1d2}\end{aligned}\ ] ] where is a forward difference , and is a backward difference . as earlier , and be interpreted as diagonal multiplication operators . as , it is easy to see that the operator on the right hand side of eqn .is actually hermitian .the corresponding gauge invariant semi - discretization is , in the temporal gauge , \psi(t , x_j ) , \label{eq : se1d3}\ ] ] with $ ] being the comparator function . to integrate eqns . 
andin time , we select a somewhat non - standard approach .it is well - known that an approximation to for a given hamiltonian is where the error is .propagating using gives an error increasing roughly linearly as function of the number of time steps .the integral is evaluated using gauss - legendre quadrature using two evaluation points , giving practically no error in the integral as long as does not oscillate too rapidly .we choose and for the electrical field , and integrate for , so that the electric field oscillate exactly three times before we terminate the calculation .the spatial domain is of length , and we use points . we choose as initial condition ( which is normalized numerically in the calculations ) .the analytic solution using this particular problem ( on whole of ) can be computed in closed form , but we choose instead to perform a reference calculation using a pseudospectral discretization using points and a much smaller time step , giving in this case practically no error .figure [ fig : numerical1 ] shows the error as function of in the three cases . clearly , the velocity gauge has somewhat smaller error , and also the length gauge and gauge invariant calculations have almost indistinguishable errors .the latter fact can easily be understood by inserting into the semi - discrete formulations , and noting that and are unitarily equivalent , i.e. , having the same eigenvalues .the semidiscrete equations are thus actually equivalent , and any discrepancy showing in the graphs for the gauge invariant scheme and the length - gauge calculation comes from errors in the time integration .the spectrum of is _ not _ equivalent to that of when , however .in fact , it is readily established that the latter operator has eigenvalues depending strongly on , and therefore on the particular gauge used .hence , it is expected that the velocity gauge , or any other gauge in which , should perform _ worse _ than either of the other gauges in the generic case ..[fig : numerical1 ] the velocity gauge ( dashed / dotted green ) has somewhat larger error than the gauge invariant ( solid black ) and the length gauge ( dotted red ) calculations .the low error in the velocity gauge is a `` stroke of luck '' when compared with fig .[ fig : numerical2 ] , where the velocity gauge has the largest error .notice that the oscillations of the em - fields clearly affect the errors . ] to test this statement , we perform a calculation using a different field in the length gauge , namely which may describe incoming electromagnetic waves ( e.g. , a laser ) along the axis .figure [ fig : numerical2 ] shows the errors as function of in this case , using , clearly showing that the velocity gauge indeed has the larger error .moreover , a `` mixed gauge '' calculation is shown , where the length gauge fields are transformed using a gauge parameter , chosen somewhat arbitrarily .now , both and are non - vanishing , and the error is seen to behave accordingly .we notice that the gauge invariant calculations in both cases are well - behaved , and no choice of gauge will of course affect the calculations . 
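as an illustration of the kind of propagation used in these tests , the sketch below averages the hamiltonian over a step with the two - point gauss - legendre rule and exponentiates the result ( the first magnus term ) . the hamiltonian constructed here is the naive 1d finite - difference one and serves only to show the time stepping ; the coupling convention ( p - q a )^2 / 2 + q \phi , the dirichlet - like boundary treatment and all parameter names are our own choices , not those of the reference calculations .

import numpy as np
from scipy.linalg import expm

def hamiltonian(x, t, h, q, a_func, phi_func):
    # a_func(x, t), phi_func(x, t): callables returning the potentials sampled on the grid
    # naive 1d finite-difference hamiltonian h(t) = (p - q a)^2 / 2 + q phi (illustrative only)
    n = len(x)
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2
    p = -1j * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2.0 * h)  # central difference
    a = np.diag(a_func(x, t))
    phi = np.diag(phi_func(x, t))
    return -0.5 * lap - 0.5 * q * (p @ a + a @ p) + 0.5 * q**2 * (a @ a) + q * phi

def step(psi, x, t, dt, h, q, a_func, phi_func):
    # two-point gauss-legendre quadrature of the time integral of h over [t, t + dt]
    s1 = t + 0.5 * dt * (1.0 - 1.0 / np.sqrt(3.0))
    s2 = t + 0.5 * dt * (1.0 + 1.0 / np.sqrt(3.0))
    h_int = 0.5 * dt * (hamiltonian(x, s1, h, q, a_func, phi_func) +
                        hamiltonian(x, s2, h, q, a_func, phi_func))
    return expm(-1j * h_int) @ psi

# placeholder fields for a length-gauge-like setup (values and sign convention as assumed above):
# a_func = lambda x, t: np.zeros_like(x)
# phi_func = lambda x, t: x * field(t)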
moreover , the length gauge is equivalent to the gauge invariant calculations _ only _ when ; this holds in general in one dimensional systems , but of course not in arbitrary dimensions , where the magnetic field usually can not be transformed away in this way .

[ figure caption , fig : numerical2 : the gauge invariant ( solid black ) and the length gauge ( dotted red ) calculations have the smallest errors , while the velocity gauge ( dot - dash , green ) clearly has the largest error . this should be contrasted with the results in fig . [ fig : numerical1 ] , where the velocity gauge has the smallest error . ]

we have discussed a method based on lgt to convert virtually any grid - based scheme for the time - dependent schrödinger equation ( [ eq : schroedinger2 ] ) to a gauge invariant scheme , in both space and time . we have considered discretization in arbitrary coordinates on arbitrary spatial manifolds . the theory is directly generalizable to many - particle systems , as the em - fields only enter at the one - body level in the many - body hamiltonian . moreover , the computational overhead of the gauge invariant schemes compared to the original ones is negligible . our numerical simulations of time - dependent problems , albeit simplistic , indicate that the gauge invariant schemes perform on average better than standard schemes , even though the original `` naive '' scheme may be better in specific cases .
grid - based discretizations of the time dependent schrödinger equation coupled to an external magnetic field are converted to manifestly gauge invariant discretizations . this is done using generalizations of ideas used in classical lattice gauge theory , and the process defined is applicable to a large class of discretized differential operators . in particular , popular discretizations such as pseudospectral discretizations using the fast fourier transform can be transformed to gauge invariant schemes . gauge invariant versions of generic time integration methods are also considered , enabling completely gauge invariant calculations of the time dependent schrödinger equation . numerical examples illuminating the differences between a gauge invariant discretization and conventional discretization procedures are also presented .
here we would like present the catalogue of models ( theories , hypotheses ) in modern physics which are at this time not verified experimentally .it is not possible to construct the full catalogue of such hypotheses but one can to aspire to make it asymptotically .the present time is characterized by that there is a considerable quantity of hypotheses which are not checked up experimentally nevertheless are in favor for the last years among physicists .one can say that it is a strange time for physics : since newton the physics is considered as an experimental science unlike , for example , theology .but now we have many ( may be too many ) hypotheses which are considered as completed theories and not having any experimental verification . in our worldthere are not too much absolute statements but one of them is : no statements about mathematical ( or any another ) beauty of the theory can be replaced by the experimental support . in this noticewe do not want to criticize or be lawyers any of these hypotheses .we wish to list only ( whenever it is possible ) these hypotheses to understand about in what strange ( for physics ) time we live .this notice is in no case the review of such hypotheses .therefore our references to corresponding papers will be minimum and certainly are incomplete . in our listing will be resulted both widely known , and little - known hypotheses .in this listing the order does not play any role : it could be any other : * string theory ; * supersymmetry ; * multidimensionality ; * loop quantum gravity ; * noncommutative geometry ; * brane models ; * double special relativity ; * topological nontriviality ; * hawking radiation ; * -gravities ; * nonassociativity .here we would like to give some very short description of every issues in listing [ list ] .the string theory probably is most known hypothesis from the list [ list ] above ( one can see ref . as a comprehensive textbook ) .this theory is known even out of a narrow circle of experts .this hypothesis pretends on the theory of everything .the main idea is very simple and consequently very promising : elementary particles are nothing more than vibrating strings .the difference between particles is the difference between oscillations excited on the string and nothing more .unfortunately the beauty of this hypotheses does not guarantee the correctness of it . in ref . one can found full and sufficient analysis of this hypothesis even as a social phenomenon .this hypothesis is beautiful no less than the string hypothesis but it is less known because the definition of the supersymmetry demands more mathematics . before supersymmetry bosons and fermionswere described by different mathematics .the reason is they have : ( 1 ) different statistics , ( 2 ) different quantization technique . 
for fermionswe have so called pauli principle : not any two fermions may have identical quantum numbers but this principle does not work for bosons .the commutation relationships for fermions and bosons are different : for fermions we have anticommutation relationships but for bosons - commutation relationships .it gives rise to the fact that in lagrangian we can not mix fermion and boson fields .let us note that these fields are different in the sense that : fermions are the matter and bosons transfer the interaction between fermions .for example ,electrons ( as electric charges ) are the sources of electromagnetic field .the textbook for the supersymmetry is practically any textbook for quantum field theory , for example , one can see ref . .the hypothesis that our world is multidimensional is very unexpected . after the birth of einstein sgeneral relativity , the natural question about the dimensionality of the world which we live in appeared . within the framework of einstein s gravitational theory , space and time are unified and it allows us to realize that the surrounding world is a 4-dimensional one .the present stage of the development of multidimensional theories of gravity began with kaluza s paper where the foundations of modern multidimensional gravitational theories have been laid .the essence of kaluza s idea consists in the proposal that the 5-dimensional kaluza - klein gravitation is equivalent to 4-dimensional einstein gravitation coupled to maxwell s electromagnetism .one of the most important questions in the multidimensional hypothesis is the question about the unobservability and independence of 4-dimensional quantities on extra dimensions .now there is two approaches for the resolution of this problem : * it has been considered that in such theories the observable 4-dimensional spacetime appears as a result of compactification of extra dimensions , where the characteristic size of extra dimensions becomes much less than that of 4-dimensional spacetime . * we live on a thin leaf ( brane ) embedded into some multidimensional space ( bulk ) .why the multidimensionality is so important ?the answer is : many realistic candidate for a grand unified theory , such as superstring / m - theory , should be multidimensional by necessity , otherwise , it will contain undesirable physical consequences . loop quantum gravity has matured to a mathematically rigorous candidate quantum field theory of the gravitational field ( as a textbook one can see ref .the features that distinguish from other quantum gravity theories are : ( 1 ) background independence and ( 2 ) minimality of structures ._ background independence _ means that this is a non - perturbative approach in which one does not perturb around a given , distinguished , classical metric , rather arbitrary fluctuations are allowed ._ minimally _ means that one explores the logical consequences of bringing together the two fundamental principles of modern physics : general covariance and quantum theory , without adding any experimentally unverified additional structures such as extra dimensions , extra symmetries or extra particles beyond the standard model . in a noncommutative geometrythe spacetime coordinates become noncommuting operators .the noncommutativity of spacetime is encoded in the commutator = i \theta^{\mu\nu } \label{3e-10}\ ] ] where is an antisymmetric matrix which determines the fundamental discretization of spacetime ( for review one can see ) . 
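written out , the commutator referred to above is the standard one of canonically noncommutative spacetime ,

[ x^\mu , x^\nu ] = i \, \theta^{\mu\nu} ,

with \theta^{\mu\nu} a constant antisymmetric matrix .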
according to a brane models approach , particles corresponding to electromagnetic, weak and strong interactions are confined on some hypersurface ( called a brane ) which , in turn , is embedded in some multidimensional space ( called a bulk ) . only gravitation and some exotic matter ( e.g. , the dilaton field )could propagate in the bulk .it is supposed that our universe is such a brane - like object ( for review one can see ) . according to : `` doubly special relativity is a modified theory of special relativity in which there is not only an observer - independent maximum velocity ( the speed of light ) , but an observer - independent maximum energy scale ( the planck energy ) . ''the idea about topological nontriviality means that in the nature these exist objects having topological nontrivial structure . roughly speaking by topological nontrivial mapping whole object # 1 maps onto whole object # 2 . at the same time by topological trivial mapping whole object # 1 maps into one point object # 2 ( see fig . [ mapping ] ) .now we know following topologically nontrivial objects : instanton , monopole and kink .the instanton and monopole are solutions of yang - mills equations with very interesting properties ( for details one can see practically any quantum field theory textbook , for example : ) .the kink is the solution for a nonlinear scalar field .there are the attempts to apply instantons and monopoles for the resolution confinement problem in quantum chromodynamics .the kink solution is applied for the construction of brane models .there is many other applications for instantons , monopoles and kinks but it is beyond the scope of survey of given paper .now we know only one manifestation of topological nontriviality : abrikosov vortexes in a superconductor .this phenomena is the consequence of the meissner effect in superconductivity . by this effect a magnetic fieldis escaped from a superconductor .abrikosov vortex is a vortex of supercurrent in a type - ii superconductor .hawking radiation is a black body radiation emitted by black holes due to quantum effects .vacuum fluctuations causes the generation of particle - antiparticle pairs near the event horizon of the black hole .one of the particles falls into the black hole while the other escapes ._ comments : these calculations are made using perturbative technique .the problem appears if the gravity becomes so strong that perturbative technique can not be applied even for quantum electrodynamics . in this caseit becomes unclear what happened with quantum electromagnetic field in a strong gravitational field ._ one of fundamental problems in modern physics is : what is dark energy ?there are many models to understand what is the dark energy : scalar , spinor , ( non-)abelian vector theory , cosmological constant , fluid with a complicated equation of state , higher dimensions and so on . 
butworking in framework of one of the above approaches , it is necessary to introduce the some extra cosmological components , for example , inflaton , a dark component and dark matter .unfortunately such a scenarios introduce a new set of problems , for example , coupling with usual matter , ( anti-)screening of dark components during the evolution of the universe , compatibility with standard elementary particle theories and so on .another approach is the gravitational alternative for dark energy : -gravities ( for details one can see , for example , ) .the action for modified gravity is .\ ] ] one can show that some versions of -modified gravities may be consistent with local tests and may provide a qualitatively reasonable unified description of inflation with the dark energy epoch . a nonassociativity is unknown widely hypothesis that in our world there exist a nonassociative structures ( for the textbook about the nonassociativity in physics one can see ) .the people think about : ( 1 ) a nonassociative geometry ; ( 2 ) octonionic version of dirac equations ; ( 3 ) octonionic quark structure ; ( 4 ) a nonassociative decomposition of quantum field operators and nonassociativity in supersymmetric quantum mechanics .in some sense some branches of modern theoretical physics have lost the connection with the experiment . despite any assurances about beauty of these hypotheses the situation is bad: we can not name these hypotheses as real physics .the goal of this notice is to remind of this fact and nothing more .the problem consists that nobody can guarantee that these hypotheses are true !certainly not all of these hypotheses are very important for the theoretical physics .for example , if the idea about nonassociativity appears to be false it will not lead to any dramatic consequences .but if it will appear that the theory of strings is wrong or in the nature there is no supersymmetry it will lead to the very regrettable consequences . for physics it will be very bad if in the future the experimental physics can not give us the answer about the validity of these hypotheses .it may occur , if for example , for this purposes will be necessary unattainable energy or something like that .it will mean the death of physics as the science : there are questions on which we ( because of unattainable energy ) can not answer .who knows : maybe we live at the natural end of physics .certainly it is too pessimistic view on physics but we will hope for the best . from another side we have several fundamental unresolved problems in modern physics . some of them are : nonperturbative quantization , confinement in quantum chromodynamics , dark energy , dark matter , quantum gravity and so on . of course this list can vary depending on the point of view of the physicist . for the hypotheses listed in [ list ]it will be very important to solve even one of these problems .but from another side if one of these problems will be resolved with any another approach it will be very bad for these hypotheses : it means that these hypotheses are not be able to solve these problems .after that the following question appears : for what of problems these hypotheses are necessary ?i am grateful to the research group linkage programme of the alexander von humboldt foundation for the support of this research .e. j. beggs , s. majid , j. math .phys . * 51 * , 053522 ( 2010 ) .[ math/0506450 [ math - qa ] ] ; + a. i. nesterov and l. v. 
sabinin , `` nonassociative geometry : towards discrete structure of spacetime , '' _ phys ._ , * d62 * , 081501 ( 2000 ) .v. dzhunushaliev , `` non - associativity , supersymmetry and hidden variables , '' j. math . phys .49 , 042108 ( 2008 ) ; doi:10.1063/1.2907868 ; arxiv:0712.1647 [ quant - ph ] ; + v. dzhunushaliev , `` nonperturbative quantum corrections , '' arxiv:1002.0180 [ math - ph ] .
some hypotheses in modern theoretical physics that do not have any experimental verification are listed . the goal of the paper is not to criticize or advocate for any of these hypotheses . the purpose is to focus physicists ' attention on the fact that there are now too many hypotheses which are not confirmed experimentally .
humans have used paintings as a way to communicate , record events , and express ideas and emotions since long before the invention of writing. painting has been at the center of artistic and cultural evolution of humanity reflecting their lifestyle and beliefs .for example , ernst gombrich , art historian in britain , remarked on egyptian art in his book that `` to us these reliefs and wall - paintings provide an extraordinarily vivid picture of life as it was lived in egypt thousand years ago '' . a deep investigation into how painting has evolved and the motivations behind it , therefore , can be expected to yield valuable insights into the history of creative developments in culture . given the ubiquity of art and culture , and the value our society puts on them as symbols of the quality of life , we believe that approaching art and culture as subjects of serious science should be a worthy endeavor . in order to proceed , we take the viewpoint that a piece of art can be considered as a complex system composed of diverse elements whose collective effect , when presented to an audience , is to stimulate their senses be they cerebral , emotional , or physiological .to understand a painting , for instance , one may analyze its colors , geometry , brushing techniques , subjects , or impact on the audience , each of which would allow us to grasp the multifacted , correlated aspects of the art form .the same could be said of many other art forms , obviously with some unique variations depending on the form .a positive development that could have far - reaching benefits for such work is the recent proliferation of high - quality , large - scale digital data of cultural artifacts that provides an unprecedented opportunity to devise science - inspired analytical methods for identifying interesting and complex patterns residing in art and culture _ en masse _ .the quantitative study of style in a cultural artifact is also called stylometry , a term coined by polish linguist wincenty lutosawski who attempted to extract statistical features of word usage from plato s dialog .stylometric analyses have been performed on various subjects since then , including literature , music and art . a landmark scientific study of paintings is taylor _ et al ._ s characterization of the fractal patterns in jackson pollock s ( 19121956 ) drip paintings .it was subsequently found that the characteristics of drip paintings of unknown origin significantly deviated from those of pollock s , showing that such measurements can reflect an artist s unique style .other notable studies of paintings include lyu _ et al ._ s which decomposed images using wavelets ; hughes _ et al ._ s which used sparse - coding models for authenticating artworks ; and kim _ et al . _s which studied the `` roughness exponent '' to characterize brightness contrasts . venturing beyond quantification of artistic styles , recent studies investigated perceived similarities between different paintings , the influence relationships between artworks for quantifying creativity in an artwork , and the changes in the perception of beauty using face - recognition techniques on images from different era . 
despite these attempts , individual stylistic characteristics of painters have not yet been sufficiently and collectively explored , which will also reveal a remarkable diversity in the modern times .this stem from a number of shortcomings of previous works : they lack robust statistical frameworks for understanding the underlying principles based on the image data ; they make only limited use of the full color information , even though it is readily available in data ; or they concern themselves with specific artworks or painters . in this work ,we propose a scientific framework for characterizing artistic styles that makes use of the complete color profile information in a painting and simultaneously takes into account the geometrical relationships between the colored pixels in the image , two essential building blocks of an image .applying this framework to a large number of historical paintings , we characterize artistic styles of various paintings over time .color boasts a long history as a subject of intensive scientific investigation from many points of view including physical , physiological , sensory , and so forth . starting with two classical groundbreaking investigations by newton and goethe ,modern research on color continues in full force in art , biology , medicine , and vision science as well as physics .here we introduce the concept of ` color contrast ' as the signature property of color use in a painting .as its name suggests , color contrast refers to the contrast effect originating from the color differences between clusters of colors in a painting .examples of paintings in which color contrast is highly pronounced include vincent van gogh s ( 18531890 ) _ starry night _( 1899 ) where a bright yellow moon floats against the dark blue sky and piet mondrian s ( 18721944 ) _ composition a _ ( 1923 ) where well - defined geometric shapes of distinct colors are juxtaposed to form a ` hard edge ' painting , a popular style of the twentieth century .we propose a quantity we call _ seamlessness _ for color contrast that incorporates the color profile and geometric composition of a painting .we show that this quantity is a useful indicator for characterizing distinct painting styles , which also allows us to track the stylistic evolution of western painting both on the aggregate and the individual levels ( particularly for modern painters ) by applying it to a total of 179 853 digital scans of paintings the largest yet in our type of study collected from multiple major online archives .we collected digital scans of paintings ( mostly western ) from the following three major online art databases : web gallery of art ( wga ) , wiki art ( wa ) , and bbc - your paintings ( byp ) .the wga dataset contains paintings dated before 1900 , while the wa and byp datasets contain those dated up to 2014 ( all datasets are up - to - date as of oct 2015 ) .wga provides two useful metadata on the paintings allowing a deeper analysis : painting technique ( e.g. , tempera , fresco , or oil ) and painting genre ( e.g. , portrait , still life , genre painting itself a specific genre depicting scenes from ordinary life ) .byp is mainly a collection of oil paintings preserved in the united kingdom where most paintings originate from .( later we show that byp data still exhibit a comparable trend in color contrast with other datasets . 
)we excluded paintings dated before the 1300s , as they were too few .we manually removed those deemed improper for , or outside the scope of , our analysis ; they include partial images of a larger original , non - rectangular frames , seriously damaged images , photographs , etc .the final numbers of painting used for analysis are 18321 ( wga ) , 70235 ( wa ) , 91297 ( byp ) images for a total of 179853 .as its name suggests , color contrast represents the effect brought on by color differences between pixels in a painting .therefore , characterizing how an artist places various colors on a canvas is a key element in determining the color contrast in a painting .the human sense of color contrast between two points in a painting ( pixels in a digital image ) is affected most strongly by two factors , the relative color difference between the pixels and their geometric separation the closer they are in real space the more pronounced their color difference will be . in order to quantify this phenomenon, we must first define the color difference between two pixels that agrees with the human - perceived difference .then we must incorporate the spatial separation information to produce a combined measure .a color is represented by three values that define the three coordinates in a ` color space ' .a color space is named according to what the coordinates mean .familiar examples include the rgb space for red , green , and blue , the hsv space for hue ( the color wheel ) , saturation , and value ( brightness ) , and the cielab space ( the full nomenclature is 1976 cie ) for ( lightness ranging from 0 for black to 100 for white ) , ( running the gamut between cyan and magenta ) , and ( between blue and yellow ) .the and axes have no specified numerical limits . for our workwe use the cielab space as it was designed specifically to emulate the human perception of difference between two colors which is proportional to the euclidean distance , between the coordinates of two colors .that a color difference between pixels would be the more pronounced the closer they are prompts us to consider , for simplicity , that between adjacent pixel pairs .this results in a total of approximately data points to consider in an image of pixels . to illustrate what the can teach us about the use of color in a painting , we compare the distribution for piet mondrian s ( 18721944 ) _ composition a _ ( 1923 ) ( fig .[ seamlessness](a ) ) and claude monet s ( 18401926 ) _ water lilies and japanese bridge _ ( 1899 ) ( fig . [ seamlessness](c ) ) , which show significant differences that indeed well reflect their visible differences . in order to provide a baseline for proper comparison, we also measure the color distance distribution of two different types of null models obtained from randomizing these paintings .the first null model is produced by randomly relocating the pixels of the original image while preserving the number of each color , and the second is produced by replacing all pixels of the original image with randomly selected colors in the rgb color space .therefore , the first randomization retains the intrinsic color of a painting with only its geometric structure destroyed , where the second randomization produces a completely random image . the fact that the second null model shows a significantly broader tail than other distributions in indicates that artists are likely to use similar colors , avoiding extremely distant colors in the color space ( fig .[ seamlessness](b ) , ( d ) ) . 
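a small sketch of this measurement : scikit - image converts an rgb scan to cielab , the euclidean color distances between horizontally and vertically adjacent pixels are collected , and the first null model is obtained by shuffling pixel positions while keeping the color histogram fixed . the input is assumed to be a float rgb array in [ 0 , 1 ] , and the function names are ours .

import numpy as np
from skimage.color import rgb2lab

def adjacent_color_distances(rgb_image):
    # rgb_image: (h, w, 3) float array in [0, 1]
    lab = rgb2lab(rgb_image)
    d_right = np.linalg.norm(lab[:, 1:] - lab[:, :-1], axis=-1)  # horizontal neighbour pairs
    d_down = np.linalg.norm(lab[1:, :] - lab[:-1, :], axis=-1)   # vertical neighbour pairs
    return np.concatenate([d_right.ravel(), d_down.ravel()])

def shuffled_null_model(rgb_image, rng=None):
    # relocate pixels at random while preserving the number of each color (first null model)
    rng = np.random.default_rng(rng)
    h, w, c = rgb_image.shape
    flat = rgb_image.reshape(-1, c)
    return flat[rng.permutation(h * w)].reshape(h, w, c)

the mean and the standard deviation of the returned distances are the two summary statistics contrasted in the following paragraphs .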
furthermore ,when geometry is considered , the similar colors already in a same painting tend to stay close in real space also .therefore , the final distribution of of a painting can be considered as the signature of a painting that represents both an artist s own color selection and geometric style .an image characterized by a high color contrast shows regions with large inter - pixel color distance relative to the overall image , i.e. inhomogeneity in . in the mondrian , is small on average but a significant number of large exist , namely the tail of decays more slowly than an exponential ( fig .[ seamlessness](b ) ) .this is a consequence of large patches of nearly uniform colors being separated by well - defined borders .the monet , on the other hand , shows a high average owing to the many intertwining brush strokes of different colors , but few extraordinarily large , similar to an exponential distribution ( fig .[ seamlessness](d ) ) . these suggest using the relative magnitude of the mean and the standard deviation to characterize a painting s overall color contrast :specifically , we use . $ ] and when ( for the mondrian ) , and when ( for the monet ) .we call it `` seamlessness '' because a high ( low ) means fewer ( more ) boundaries or ` seams ' between clusters or ` patches ' of like colors .this quantity is also used for quantifying heterogeneity of inter - event time distributions in statistical physics , although the problems are unrelated to each other .unlike digitized or ocred ( optical character recognition ) text , a digitized image of a painting can exist in many versions of differing sizes or colors depending on scanning environments and settings .we therefore need to test for the robustness ( insensitivity ) of against such variations , if we are to be able to rely on it as a characteristic of a painting , and not only of a specific scan of it .while we expect slight differences in color or resolution not to result in significant changes in in principle since it is defined in terms of the color differences between pixels , it would still be reassuring to confirm the robustness of against certain realizable variations .when one digitally scans a painting , the lighting condition is a key element that affects the final color of the image .since the original lighting conditions are not given in the datasets , we simulate different lighting conditions by varying the color temperatures of the light sources , i.e. the color profile of a black body of the same temperature . 
assuming that the original scan represents a well white-balanced image, multiplying each pixel by the rgb values of the color of the black body, normalized by 255 (the maximum value of each axis), gives the simulated pixel. for instance, at 1500 k the rgb value of the color that a black body radiates is (255, 109, 0), so the pixels of an image are multiplied by the factor (255/255, 109/255, 0/255). analysis of the six test images under lighting conditions ranging from 1500 k (similar to a common candle) to 10000 k (similar to a very clear blue sky) indicates that s is fairly consistent, as expected (see fig. [temperature] and fig. [robustness](a) for the simulated images of paintings at different color temperatures and their s values). we also test for the effect of image size on s by rescaling the six test images to between 100 and 1500 pixels in width (the longer side of an image) using bicubic interpolation (fig. [robustness](b)). after showing some fluctuation when the image is very small (under roughly 300 pixels in width), s becomes fairly stable for larger widths (500 pixels and above). since 99.8% (wga), 76.0% (wa), and 100.0% (byp) of the painting images in this research are wider than 500 pixels, this is unlikely to be an issue in practice (fig. [size]). [figure caption fragment: (c) number of paintings in various genres in the wga dataset. (d) evolution of s for different genres, with the standard error of the mean indicated.] the measurement of s on all images allows us to map the historical trend of color contrast, shown in fig. [evolution](b). most notably, the average s consistently increases until it shows a temporary dip in the nineteenth century (fig. [evolution](c)). the increase in s around the fifteenth century is often attributed to the wide adoption of oil as a binder medium for pigments. the availability of new pigments, media, and colors has historically been linked to the emergence of new techniques and styles in painting. prior to 1500 ce, in medieval times, most paintings were tempera or fresco. around the fifteenth century, oil gained popularity, superseding the previous two as the dominant medium of choice and enabling new techniques for high contrast (fig. [technique](a)). fig. [technique](b) shows that oil paintings have a significantly higher average s than other techniques; the kolmogorov-smirnov test tells us that the distributions of s for the various techniques are significantly different (for all pairs). additionally, two well-known historical developments in painting, _chiaroscuro_ (the treatment of light and dark to express gradations of light that create the effect of modeling) during the renaissance period, and _tenebrism_ (painting in a shadowy manner with dramatic contrasts of light and dark, made popular by caravaggio (1571–1610) during the baroque period), contributed to the increase in s. the development of these new painting techniques is also closely related to the rise of novel painting genres. the rise in popularity of portraits after the fifteenth century led to significant developments in painting technique such as the chiaroscuro and tenebrism mentioned above (fig. [technique](c)). still life shows notable changes in s during the sixteenth century, reaching a peak in the seventeenth century (fig. [technique](d)). the increase of s in still life in the sixteenth century is attributed to a change of themes and subjects.
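the seamlessness statistic and the color-temperature robustness check described above can be sketched in a few lines. the (mu - sigma)/(mu + sigma) form below matches the verbal description of seamlessness as a burstiness-style ratio of the mean and standard deviation of the adjacent-pixel distances, but the exact normalization used in the paper is an assumption; the black-body rgb triple is taken from the 1500 k example quoted in the text.

```python
import numpy as np


def seamlessness(d):
    """Burstiness-style summary of the adjacent-pixel distance distribution.

    The paper defines seamlessness from the mean and standard deviation of d;
    the (mu - sigma) / (mu + sigma) form used here matches the verbal
    description, but the exact normalization is an assumption.
    """
    mu, sigma = d.mean(), d.std()
    return (mu - sigma) / (mu + sigma)


def simulate_color_temperature(rgb, blackbody_rgb):
    """Re-light a well white-balanced scan by a source of a given color temperature.

    blackbody_rgb: RGB color radiated by a black body at that temperature,
    e.g. (255, 109, 0) at 1500 K as quoted in the text.
    """
    factor = np.asarray(blackbody_rgb, dtype=float) / 255.0
    out = rgb.astype(float) * factor          # per-channel scaling
    return np.clip(out, 0, 255).astype(np.uint8)


# usage sketch: s should stay roughly stable across simulated lighting conditions
# (img and adjacent_color_distances are as in the previous sketch)
# s_original = seamlessness(adjacent_color_distances(img))
# s_candle = seamlessness(adjacent_color_distances(simulate_color_temperature(img, (255, 109, 0))))
```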
in the earlier half of that century, dutch painters like pieter aertsen (1508–1575) and joachim beuckelaer (1533–1573) intentionally combined still life with depictions of biblical scenes in the background, while in the latter half artists began to highlight still objects by incorporating the chiaroscuro previously found in portraits, resulting in high s. in the nineteenth century, artists began to perceive paintings as a means of expressing their individuality and originality more strongly than ever before. challenging tradition led to a thriving of different interpretations of the world, and various new techniques for expressing them emerged. at the beginning of the nineteenth century, for instance, artists started to pursue various impressions of light shining on nature and landscapes, rather than the dramatic and artificial lighting effects of the previous era, leading to the decrease in s. the invention of the railroad and the paint tube enabled impressionists to travel to distant areas, leading to the surge in popularity of landscape paintings in the nineteenth century (fig. [technique](c)). furthermore, towards the end of the nineteenth century modern abstract art began to emerge, noted for a drastic departure from realism. the simple and geometric abstraction of the movement led to an unprecedented growth in s (fig. [evolution](c)). it is important to note, however, that the variance in s also grows rapidly, indicating a remarkable growth in the diversity of color contrast. the most notable growth occurs between the nineteenth and the twentieth centuries (fig. [evolution](d)). fig. [variance] shows that in earlier periods the distribution of s is concentrated around the mean, reflecting a narrow scope of color usage and therefore of color contrast. in modern periods, however, the distribution becomes much broader than in earlier periods. this indicates that painting styles become more diverse in later times, especially in the modern era. the concentration around the mean becomes weak, making it difficult to speak of a ``typical'' style. this stronger diversity in color contrast is observed not only at the aggregate level, but also in individual painters' profiles: regardless of the number of paintings produced, individual painters in this period exhibit a wider range of s than their predecessors (fig. [evolution](e), (f)), signifying a culture of experimentation and a willing adoption of diverse styles. we explore this in more detail in the following section. prompted by these extraordinary historical developments in color contrast in the modern era, we find it essential to explore in finer detail the patterns of individuality in order to understand stylistic developments. for the modern painters who belong to this period (defined as those the midpoint of whose career falls in the nineteenth century or later), we introduce two novel quantities to characterize their individuality, _metamorphosality_ and _singularity_.
metamorphosality measures a painter s transformation in color contrast over their career , while singularity measures how distinct their style is from the norm of the day .we used the wa dataset to measure these quantities .mondrian , founder of _ de stijl _ movement and renowned for abstractionist paintings ( fig .[ individual](d ) ) , actually produced works of a wide range of styles over his career .his progression from traditional style to abstractionism can in fact be summarized using , which increases consistently until the mid-1920s , when his abstractionism fully matures ( fig .[ individual](a ) , ( d ) ) .pierre auguste renoir ( 18411919 ) , an early leader of impressionism , is the opposite : his decreases over time , as he progressively employs free - flowing brush strokes to generate boundaries that fuse softly with the background ( fig .[ individual](b ) , ( e ) ) .claude monet ( 18401926 ) and edgar degas ( 18341917 ) , also prominent impressionists , show similar trends .these observations teach us that the changes in s can indeed represent painters stylistic evolutions .we now define the _ metamorphosality _ of a painter based on the slope of the linear fit to the values with their career lengths normalized to 1 . for instance , for mondrian and for renoir ( fig .[ individual](c ) ) . given the near - gaussian distribution over the 1,326 modern artists who produced paintings in at least five distinct years , we define a painter s metamorphosality as their -score , where is the average , and is the standard deviation of the slopes .this allows us , for example , to rank the artists and find the most notable , prominent ones .[ bargraph](a ) shows 100 modern artists whose is ranked in the top 50 , both positive and negative .it is american painter howard mehring ( 19311978 ) who has the largest .mehring s early works are reminiscent of pollock , mark rothko ( 19031970 ) and helen frankenthaler ( 19282011 ) , employing uniformly scattered colors with vague boundaries .his later works , on the other hand , become more structured with geometric compositions of vivid colors with abrupt transitions , very similar to mondrian s hard - edge paintings . at the other extreme with the smallest ( most negative ) is swiss - french painter flix edouard vallotton ( 18651925 ) , member of the post - impressionist avant - garde group les nabis ; initially famous for wood cuts featuring extremely reductive flat patterns with strong outlines ( high ) , he produced classical - style paintings such as landscapes and still life in later life ( low ) for .another indication of a strong individuality is how one s works differ from their contemporaries. we quantify this using _ singularity _ defined as follows . for each paintingwe compare its with those produced roughly at the same time ( defined as a span of eleven years , five years before and five after its date ) and measure its -score .we call a painting _ singular _( i.e. 
statistically unusual) if its z-score falls outside a fixed cutoff value. we then measure each painter's production rate of singular artworks over their career. fig. [individual](f) shows seven artists and their paintings' z-scores, for example; the artworks in the lightly shaded areas are the singular ones according to our definition. we now define the singularity of an artist as the difference between the fraction of their paintings whose z-scores lie above the positive cutoff and the fraction whose z-scores lie below the negative cutoff. this definition allows us to identify those who often produced singular paintings and who show a specific trend in s. for example, 45% of mondrian's paintings lie above the positive cutoff (singular and high in s) and only 6% below the negative cutoff, resulting in a large positive singularity and showing that his high-s paintings are indeed unique and singular when compared with those of his contemporaries. fig. [individual](g) shows the histogram of singularity values for the 330 artists who painted more than 40 works. in accordance with our definition, we indeed identify those known for a high level of singularity and originality (see fig. [bargraph](b) for the 100 modern artists whose singularity is ranked in the top 50, both positive and negative). qi baishi (1864–1957), chinese-born but widely known in the west for his witty watercolor works of vivid colors, shows the highest positive singularity. max bill (1908–1994) is also highly singular, known for geometric paintings that became a signature of his style as a swiss designer. koloman moser (1868–1918), a founding member of the vienna secession movement known for repetitive complex motifs inspired by classical greek and roman art, has the largest negative singularity. eugène leroy (1910–2000) is ranked second in negative singularity, known for numerous works featuring obsessively thick brush strokes in different colors that render the depicted objects not readily identifiable. these findings show that our characterization of color contrast can indeed describe individual painters and identify those prominently noted for their creativity and uniqueness. art and culture are manifestations of human creativity. for that reason, in addition to being objects of appreciation for purely aesthetic purposes, they may contain valuable information we could utilize to understand the creative process.
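as a recap of the two individuality measures defined above, here is a minimal python sketch (not the authors' code). the z-score cutoff for calling a painting singular is a placeholder (the paper's exact threshold value did not survive extraction), and metamorphosality is obtained by z-scoring the returned career slopes across all artists.

```python
import numpy as np


def metamorphosality_slopes(artists):
    """artists: dict mapping artist name -> list of (year, S) pairs.

    Returns the slope of a linear fit to S over each career, with the career
    rescaled to the interval [0, 1] as described in the text; metamorphosality
    is then the z-score of these slopes across all artists.
    """
    slopes = {}
    for name, works in artists.items():
        years = np.array([y for y, _ in works], dtype=float)
        s_vals = np.array([s for _, s in works], dtype=float)
        t = (years - years.min()) / max(years.max() - years.min(), 1.0)
        slopes[name] = np.polyfit(t, s_vals, 1)[0]
    return slopes


def zscore(values):
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std()


def singularity(artist_works, all_works, window=5, cutoff=1.0):
    """Fraction of an artist's paintings that are singular-high minus singular-low.

    all_works: (year, S) pairs for every painting in the corpus. The cutoff on
    the per-painting z-score is a placeholder value, not the paper's threshold.
    """
    years_all = np.array([y for y, _ in all_works], dtype=float)
    s_all = np.array([s for _, s in all_works], dtype=float)
    z = []
    for year, s in artist_works:
        peers = s_all[np.abs(years_all - year) <= window]   # eleven-year span
        sd = peers.std()
        z.append((s - peers.mean()) / (sd if sd > 0 else 1.0))
    z = np.array(z)
    return float(np.mean(z > cutoff) - np.mean(z < -cutoff))


# usage sketch (hypothetical corpus variables):
# slopes = metamorphosality_slopes(corpus_by_artist)
# metamorphosality = dict(zip(slopes, zscore(list(slopes.values()))))
# d_mondrian = singularity(corpus_by_artist['mondrian'], all_corpus_works)
```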
to this end , we have focused on perhaps the most essential ingredients of a painting color and geometry via color contrast and inhomogeneity , which allowed us to quantitatively characterize and trace artistic styles of various periods and identify those artists who exhibited variability and originality .we inspected whether our measure was sensible by cross - validating our findings with accepted understandings of their artworks .lessons from our investigation suggest many interesting directions for understanding art and culture via the use of massive data sets .for instance , asian , hindu , and islamic painting art have been largely untouched in our work ; large - scale analyses of these subjects would be of immediate , universal interest .also , integrating an analytical study using stylometric measures such as ours with object - recognition and classification techniques from machine learning could lead to a deeper understanding of art that incorporates both the styles and contents of paintings .for example , how the same objects or motifs have been portrayed differently over time would shed light on changes in tastes as well as style .we expect such work to find use in understanding various art forms such as sculpture , architecture , visual design , film , animation , typography , etc .b. l. , d. k. , and h. j. acknowledge the support of national research foundation of korea ( nrf-2011 - 0028908 ) .j. p. acknowledges the support of national research foundation of korea ( nrf-20100004910 and nrf-2013s1a3a2055285 ) , ministry of science ( msip - r0184 - 15 - 1037 ) , and bk21 plus postgraduate organization for content science .hughes jm , foti nj , krakauer dc , rockmore dn .quantitative patterns of stylistic influence in the evolution of literature .2012 ; 109(20):7682 - 7686 .doi : 10.1073/pnas.1115407109 taylor rp , guzman r , martin tp , hall gdr , micolich ap , jonas d , scannel bc , fairbanks ms , marlow ca .authenticating pollock paintings using fractal geometry . 2007 ; 28(6):695 - 702 .doi : 10.1016/j.patrec.2006.08.012 hughes jm , graham dj , rockmore dn .quantification of artistic style through sparse coding analysis in the drawings of pieter bruegel the elder .natl . acad .2010 ; 107 : 1279 - 1283 .doi : 10.1073/pnas.0910530107 graham dj , friedenberg jd , rockmore dn .mapping the similarity space of paintings : image statistics and visual perception . visual cognition , 2010 ; 18(4):559 - 573 .doi : 10.1080/13506280902934454 de la rosa j , suaez a. quantitative approach to beauty . perceived attractiveness of human faces in world painting .international journal for digital art history .doi : 10.11588/dah.2015.1.21640 smith r. review / art ; looking beneath the surfaces of eugne leroy .1992 ; available from : http://www.nytimes.com/1992/09/18/arts/review-art-looking-beneath-the-surfaces-of-eugene-leroy.html . ( date of access: 2016 - 04 - 07 ) .
painting is an art form that has long functioned as a major channel for communication and creative expression . understanding how painting has evolved over the centuries is therefore an essential component of understanding cultural history , intricately linked with developments in aesthetics , science , and technology . the explosive growth in the range of stylistic diversity in painting starting in the nineteenth century , for example , is understood to be the hallmark of a stark departure from traditional norms on multiple fronts . yet there exist few quantitative frameworks that allow us to characterize such developments on an extensive scale , which would require both robust statistical methods for quantifying the complexities of artistic styles and data of sufficient quality and quantity to which we can fruitfully apply them . here we propose an analytical framework that captures the stylistic evolution of paintings based on color contrast relationships that also incorporate the geometric separation between pixels , applied to a large - scale archive of 179,853 images . we first measure how paintings have evolved over time , and then characterize the remarkable explosive growth in diversity and individuality in the modern era . our analysis demonstrates how robust scientific methods , married with large - scale , high - quality data , can reveal interesting patterns that lie behind the complexities of art and culture .
advances in theory are often fore - shadowed by intuition .but the mathematical structures governing multipartite and even bipartite states and unitary transformations are complex , which makes many problems difficult to explore and intuition hard to develop . analyzing problems analytically is often a time - consuming process .validation of hypothesis is laborious and searching for counter - examples is a lengthy endeavor .the qit community will probably benefit from tools to accelerate these processes .the use of computers for theoretical mathematics is a well established , with a specialized journal , textbooks and numerous papers .wolfram research defines _experimental mathematics _ as a type of mathematical investigation in which computation is used to investigate mathematical structures and identify their fundamental properties and patterns .bailey and borwein use the term to mean the methodology of doing mathematics that includes the use of computation for * gaining insight and intuition * discovering new patterns and relationships * using graphical displays to suggest underlying mathematical principles * testing and especially falsifying conjectures * exploring a possible result to see if it is worth formal proof * suggesting approaches for a formal proof * replacing lengthy hand derivations with computer - based derivations * confirming analytically derived results as the benefits of tools such as mathematica , matlab and maple are clear , there is strong indication that field - specific software for experimental theoretical quantum information would be advantageous .qlib is an attempt to provide such a tool .qlib provides the tools to manipulate density matrices , separable states , pure states , classical probability distributions ( cpds ) as well as unitary and hermitian matrices .all of which are supported with any number of particles , and any number of degrees of freedom per particle .the following functions are provided to manipulate these objects : * * entanglement calculations * : pure state entanglement , concurrence , negativity , tangle , logarithmic negativity , entanglement of formation , relative entanglement , robustness , pt - test ( peres horodecki ) , schmidt decomposition and singlet fraction . * * entropy * : shannon , von neumann , linear entropy , relative entropy , participation ratio , purity * * measurements * : orthogonal ( to multiple collapsed states or to a single mixture ) , povm , weak measurements * * object transformation : * * * reorder particles , partial trace , partial transpose * * transform objects to / from the regular representation to a tensoric representation with one index per particle if the original object was a vector , or two indexes per particle if the object was a matrix * * convert to / from computational base to the base of su(n ) generators * * distance measures * : hilbert - schmidt , trace distance , fidelity , kullback - leibler , bures distance , bures angle , fubini - study * * miscellaneous * : majorization , mutual information , spins in 3d , famous states , famous gates qlib provides * parametrizations for all objects of interest * , density matrices , separable states , pure states , cpds , hermitian matrices and unitary matrices .in other words , these object are representable as points in a parameter space .this allows , for example , to generate random separable states or random unitary matrices . for details of each parametrization and its theoretical background, please refer to the on - line help . 
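as an illustration of the kind of primitive listed above (this is plain numpy, not qlib's matlab api), the following sketch draws a random bipartite pure state, traces out one subsystem, and evaluates the von neumann entropy of the reduced state, i.e. the pure-state entanglement.

```python
import numpy as np


def random_pure_state(d_a, d_b, rng):
    """Haar-distributed random pure state on a d_a x d_b bipartite system,
    returned as a d_a x d_b coefficient matrix."""
    psi = rng.normal(size=(d_a, d_b)) + 1j * rng.normal(size=(d_a, d_b))
    return psi / np.linalg.norm(psi)


def reduced_state(psi):
    """Partial trace over the second subsystem of a bipartite pure state."""
    return psi @ psi.conj().T


def von_neumann_entropy(rho, base=2):
    """S(rho) = -Tr[rho log rho], computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]              # drop numerical zeros
    return float(-(evals * np.log(evals)).sum() / np.log(base))


# usage sketch: entanglement entropy of a random two-qubit pure state, in ebits
rng = np.random.default_rng(1)
psi = random_pure_state(2, 2, rng)
entanglement = von_neumann_entropy(reduced_state(psi))
```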
as an example, details of two unitary - matrix parametrizations and of one separable density matrix parametrization are presented in [ sec : sample - applications ] .the * robust optimization capabilities * provided with qlib , allows searching for extrema of functions defined over these spaces .the optimization is performed by alternating stages of hill - climbing and simulated annealing while applying consistency requirements to the output of the stages .current experience with the optimization feature suggests that the search succeeds in locating the global extrema in a surprising majority of the cases .finally qlib provides a wide selection of general purpose utilities which , while are not quantum - information specific , go a long way towards making the use of qlib productive and simple : * linear algebra : gram schmidt , spanning a matrix using base matrices , checking for linear independence , etc . *numerics : approximately compare , heuristically clean - up computation results from tiny non - integer and/or tiny real / imaginary parts , etc .* graphics : quickly plot out functions in 2 and 3d , smoothing and interpolation techniques for noisy or sparse data , etc .qlib , available at ` www.qlib.info ` , has been designed for easy use .an _ installation guide _ and a _ getting started guide _ are available on the website , and over a dozen demos are provided as part of qlib , to help you get started .in addition , on - line help is available : simply type ` help qlib ` at the matlab prompt for an overview of functionality or get function - specific help , e.g. ` help partial_trace . ` finally ,user forums are available to ask questions and discuss qlib issues , and forms are provided to request new features or report bugs .following are a number of qlib usage example which were selected both for their ability to demonstrate qlib capabilities and for their relatively simple structure and simple theoretical background , so that they may be quickly understood by a wide range of readers .recently , some work has been done regarding the entanglement of superpositions an upper limit to the entanglement of has been proposed by linden , popescu and smolin . further work by gour added a tighter upper bound and a lower bound .unfortunately , the analytical form of these bounds make it difficult to get a good intuitive feel as to whether they are relatively tight or whether there is still significant room for improvement .qlib provides us with convenient tools with which to explore the problem .see figure 1 . .generally speaking , gour is indeed tighter than lps , but not always ( note the top - left sub - figure at . also of note is the complex behavior exhibited by the entanglement of superposition and the relative un - tightness of existing bounds . ] to create the graphs above , qlib s basic primitives have been used ( computation of entanglement for a pure state , normalization of a pure state , etc ) , as was the capability to generate random pure states . finally the optimization capabilities are also put to use , as gour s bounds are defined in terms of maximizing a function over a single degree of freedom for given , , and , which requires that every point along the gour limit lines above be computed by an optimization process . 
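a direct numerical scan of the entanglement of a superposition, in the spirit of the experiment described above, can be sketched as follows (reusing the helpers from the previous sketch). only the raw curve is computed here; evaluating the lps and gour bounds themselves would additionally require their analytic expressions and, for the gour bound, the one-parameter optimization noted in the text.

```python
import numpy as np
# reuses random_pure_state, reduced_state and von_neumann_entropy from the previous sketch


def superposition_entanglement(psi, phi, alphas):
    """Entanglement entropy of the normalized superposition a|psi> + sqrt(1 - a^2)|phi>."""
    curve = []
    for a in alphas:
        chi = a * psi + np.sqrt(1.0 - a ** 2) * phi
        chi = chi / np.linalg.norm(chi)        # |psi> and |phi> need not be orthogonal
        curve.append(von_neumann_entropy(reduced_state(chi)))
    return np.array(curve)


rng = np.random.default_rng(2)
psi = random_pure_state(2, 2, rng)
phi = random_pure_state(2, 2, rng)
entanglement_curve = superposition_entanglement(psi, phi, np.linspace(0.0, 1.0, 51))
```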
over the years there has been keen interest in the question of mems , maximally entangled mixed states , which can not be made more entangled ( as measured by some measure ) with any global unitary transformation .qlib can assist in exploration of this problem by searching for the most - entangling unitary transformation .parametrization of unitary transformations is done either by generalized euler angles or with the more naive with being the in this particular example , we have explored the maximal entanglement possible for the separable diagonal density matrix qlib can help discover the dependency of the maximal entanglement on , by locating the mems associated with the initial density matrix and visualizing various options for , dependence . see figure 2 . .the ] space was explored with a resolution of , for a total of points , for each of which an optimization of the concurrence over the space of su(4 ) unitaries has been performed .the maximal concurrence is shown both as a function of and ( in 3d , above ) , and as a contour plot ( below ) showing the dependence of the maximal concurrence on the trace distance between the single - particle initial density matrix , , and the fully mixed state for a single qubit .,title="fig : " ] it is well known that a single qubit may be represented using the generators as with a pure state iff .this suggests a trivial generalization to higher dimensions as follows with being the generators , with the assumption that if then the density matrix represents a pure state . utilizing qlib s parametrization capabilities, we shall generate a large number of random pure states , separable states and general density matrices and plot the projections of the resulting bloch `` hyper - sphere '' , i.e. scatter plots of two components of .it is evident from figure 3 that no such trivial generalization is possible , and that the geometry of the problem is far more complex that can be naively guessed .projections of the su(3 ) and su(4 ) bloch `` hyper - spheres '' .blue dots indicate general density matrices .green are separable states and red dots indicate pure states.,title="fig : " ] projections of the su(3 ) and su(4 ) bloch `` hyper - spheres '' .blue dots indicate general density matrices .green are separable states and red dots indicate pure states.,title="fig : " ] another simple use of qlib is to experimentally test the additivity of entropy and entanglement measures by randomly generating multiple -s and -s and checking the additivity attribute for each , we can form a reliable hypothesis regarding the behavior of the measure in question .moreover , by extremizing over all possible , one may reach an even more well - founded conclusion .of particular interest is the relative entanglement measure which is a generalization of the classical relative entropy it is known that is non - additive . to compute ,one must be able to compute , which in turn requires parametrization of the separable space . in qlibthis is achieved using the observation of p. 
horodecki , that the separable space is convex , and thus each point within is constructable as a linear interpolation of a finite number of extremal points of that space , as per the caratheodory theorem .therefore , to parametrize all separable density matrices of dimension , one may parametrize separable pure states of the same dimensionality and a classic probability distribution to specify the mixing , resulting in the parametrization the numerical study of additivity clearly indicate that the relative entanglement is super - additive , i.e. is distributed as free software . the word " free "does not only refer to price ; primarily it refers to freedom : you may run the program , for any purpose , study how it works , adapt it to your needs , redistribute copies and improve the program .it is our hope is that qlib will evolve into a group effort , maintained , nurtured and grown by the quantum information community , for the benefit of us all . for that purpose ,we have licensed qlib under the gpl , or gnu public license , which sets - up both the freedom to use the software , and the requirement that any enhancements made to qlib be released back to the community .code which uses qlib , but is not part of it , may , of course , remain private . for more information regarding these issues ,see the licensing section of the qlib website .
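the convexity-based parametrization of separable states described above can be illustrated by sampling: a separable density matrix is written as a classical probability distribution over product pure states. the sketch below (numpy, not qlib's matlab parametrization) draws such a mixture at random; the number of mixture terms stands in for the caratheodory bound on the number of extremal points.

```python
import numpy as np


def random_product_pure_state(d_a, d_b, rng):
    """|phi_A> (x) |phi_B> as a length d_a * d_b vector."""
    a = rng.normal(size=d_a) + 1j * rng.normal(size=d_a)
    b = rng.normal(size=d_b) + 1j * rng.normal(size=d_b)
    return np.kron(a / np.linalg.norm(a), b / np.linalg.norm(b))


def random_separable_state(d_a, d_b, n_terms, rng):
    """Separable density matrix as a convex mixture of random product pure states."""
    probs = rng.dirichlet(np.ones(n_terms))    # a random classical probability distribution
    rho = np.zeros((d_a * d_b, d_a * d_b), dtype=complex)
    for p in probs:
        v = random_product_pure_state(d_a, d_b, rng)
        rho += p * np.outer(v, v.conj())
    return rho


rho_sep = random_separable_state(2, 2, n_terms=16, rng=np.random.default_rng(3))
assert np.isclose(np.trace(rho_sep).real, 1.0)
```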
* developing intuition about quantum information theory problems is difficult , as is verifying or ruling out hypotheses . we present a matlab package intended to provide the qit community with a new and powerful tool set for quantum information theory calculations . the package covers most of the `` qi textbook '' and includes novel parametrizations of quantum objects and a robust optimization mechanism . new ways of re - examining well - known results are demonstrated . qlib is designed to be further developed and enhanced by the community and is available for download at * ` www.qlib.info `
many genomic datasets consist of measurements from multiple samples at a common set of genetic markers , with no phenotype " representing clinical state or experimental condition of the sample .datasets of this type include genome - wide measurements of dna copy number or dna methylation , for which the main goal is to identify aberrant regions on the genome that tend to have extreme measurements in comparison to other regions .testing for aberrations requires some thought about appropriate test statistics , and constructing a null distribution that appropriately reflects serial correlation structures inherent to genomic data . a meta - analysis across several genome - wide association studiesmight also be viewed in this framework , in the sense that the testing for association within each study produces a vector of -values that might be viewed as a vector of observations . " the problem of interest to us is the identification of aberrant markers , where multiple samples exhibit a coordinated ( unidirectional ) , departure from the expected state .aberrant markers are of particular interest in cancer studies , where tumor suppressors or oncogenes exhibit dna copy variation or modified methylation levels .similarly , it may be possible to identify pleiotropic single nucleotide polymorphisms ( snps ) in disease association by identifying genetic markers that repeatedly give rise to small -values in multiple association studies .in this paper we provide a rigorous asymptotic analysis of a permutation based testing procedure for identifying aberrant markers in genomic data sets .the procedure , called dinamic , was introduced in walter et al .( 2011 ) , and is described in detail below .in contrast to other procedures which permute all observations , dinamic is based on cyclic shifting of samples .cyclic shifting eliminates concurrent findings across samples , but retains the adjacency of observations in a sample ( with the exception of the first and last entries ) , thereby largely preserving the correlation structure among markers .our principal result is that , for a broad family of null data distributions , the sampling distribution of the dinamic procedure is close to the true conditional distribution of the data restricted to its cyclic shifts . as a corollary, we find that the cyclic shift testing provides asymptotically correct type i error rates .the outline of the paper is as follows .the next section is devoted to a description of the cyclic shift procedure , a discussion of the underlying testing framework within which our analysis is carried out , and a statement of our principal result . in section [ sec3 ]we apply cyclic shift testing to dna copy number analysis , dna methylation analysis , and meta - analysis of gwas data , and show that the results are consistent with the existing biological literature . because of its broad applicability and solid statistical foundation , we believe that cyclic shift testing is a valuable tool for the identification of aberrant markers in many large scale genomic studies .we consider a data set derived from subjects at common genomic locations or markers .the data is arranged in an matrix with values in a set .depending on the application , may be finite or infinite .the entry of contains data from subject at marker .thus the row of contains the data from subject at all markers , and the column of contains the data at marker across subjects . for let be a local summary statistic for the marker . 
in most applicationsthe simple sum statistic is employed . in order to identify locations with coordinated departures from baseline behavior ,we apply a global summary statistic to the local statistics .when looking for extreme , positive departures from baseline it is natural to employ the global statistic to detect negative departures from baseline , the maximum may be replaced by a minimum .the cyclic shift procedure and the supporting theory in theorem [ thm1 ] apply to arbitrary local statistics , as well as a range of global statistics .given a data matrix , we are interested in assessing the significance of the observed value of the global statistic . when is found to be significant , the identity and location of the marker having the maximum ( or minimum ) local statistic is of primary biological importance . while in special cases it is possible to compute -values for under parametric assumptions , permutation based approaches are often an attractive and more flexible alternative .a permutation based -value can be obtained by applying permutations to the entries of , producing the matrices , and then comparing to the resulting values of the global statistic .the maximum global statistic accounts for multiple comparisons across markers , so it is not necessary to apply further multiplicity correction to the permuted values .the performance and suitability of permutation based -values in the marker identification problem depends critically on the family of allowable permutations . if permutes the entries of without preserving row or column membership , then the induced null distribution is equivalent to sampling the entries of at random without replacement . in this casethe induced null distribution does not capture the correlation of measurements within a sample , or systematic differences ( e.g. in scale , location , correlation ) between samples .in real data , correlations within and systematic differences between samples can be present even in the absence of aberrant markers . as such, -values obtained under full permutation of will be sensitive to secondary features of the data and may yields significant -values even when no aberrant markers are present .an obvious improvement of full permutation is to separately permute the values in each row ( sample ) of the data matrix .this approach is used in the gistic procedure of beroukhim et al .while row - by - row permutation preserves some differences between rows , it eliminates correlations within rows ( and correlation differences between rows ) , so that the induced null distribution is again sensitive to secondary , correlation based features of the data that are not related to the presence of aberrant markers . the dinamic cyclic shift testing procedure of walter et al .( 2011 ) addresses the shortcomings of full and row - by - row permutation by further restricting the set of allowable permutations . in the procedure , each row of the data matrix is shifted to the left in a cyclic fashion , as detailed below , so that the first entries of the vector are placed after the last element ; the values of the offsets are chosen independently from row to row .cyclic shifting preserves the serial correlation structure with each sample , except at the single break point where the last and first elements of the unshifted sample are placed next to one another . 
at the same time , the use of different offsets breaks concurrency among the samples , so that the resulting cyclic null distribution is appropriate for testing the significance of . formally , a _cyclic shift _ of index is a map whose action is defined as follows : given with , let be the map from the set of data matrices to itself defined by applying to the row of , namely , the cyclic shift testing procedure of walter et al .( 2011 ) is as follows ..2 in * cyclic shift procedure to assess the statistical significance of * 1 .let be random cyclic shifts of the form , where are independent and each is chosen uniformly from .2 . compute the values of the global statistic at the random cyclic shifts of .3 . define the percentile - based -value here is the indicator function of the event .we wish to assess the performance of the cyclic shift procedure within a formal testing framework . to this end , we regard the observed data matrix as an observation from a probability distribution on , so that for any ( measurable ) set the probability as measurements derived from distinct samples are typically independent , we restrict our attention to the family of measures on under which the rows of are independent .let be the sub - family of corresponding to the null hypothesis that has no atypical markers , _i.e. _ , no markers exhibiting coordinated activity across samples .one may define in a variety of ways , but the simplest is to let be the set of distributions such that the rows of are stationary and ergodic under ; independence of the rows follows from the definition of . under columns of are stationary and ergodic , and the same is true of the local statistics , which are identically distributed and have constant mean and variance .thus under no marker is atypical in a strong distributional sense .our principal result shows that the -value produced by the cyclic shift procedure is approximately consistent for distributions in a subfamily .the family includes or approximates many distributions of practical interest , including finite order markov chains with discrete or continuous state spaces . in order to assess the consistency of the cyclic shift -value we carefully define both the target and the induced distributions of the procedure . as much of what follows concerns probabilities conditional on the observed data matrix, we use to denote both the random matrix and its observed realization .given let be the set of all cyclic shifts of .define the _ true conditional distribution _ to be the conditional distribution of given , namely if is discrete with probability mass function then if has probability density function then may be defined in a similar fashion . in the cyclic shift procedure, matrices are selected uniformly at random from the set of cyclic shifts of the observed data matrix .the associated _ cyclic conditional distribution _ has the form under mild conditions the cyclic shifts of are distinct with high probability when is large ( see lemma [ lem3 ] in section [ sec4 ] ) . 
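a minimal python sketch of the cyclic shift procedure just described is given below, together with a generator of null data whose rows are independent stationary markov chains, as assumed by the theory. the add-one form of the percentile p-value and the uniform placeholder transition matrix are assumptions for the sketch, not the paper's exact choices.

```python
import numpy as np


def cyclic_shift_pvalue(X, n_shifts=1000, seed=0):
    """Percentile-based p-value for T = max column sum under random cyclic shifts.

    Each of the n rows of X is shifted by an independent offset drawn uniformly
    from {0, ..., m-1}; different offsets across rows break concurrency while
    preserving within-row adjacency except at a single break point. The add-one
    convention below is an assumption, not necessarily the paper's exact formula.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    t_obs = X.sum(axis=0).max()                       # observed global statistic
    exceed = 0
    for _ in range(n_shifts):
        offsets = rng.integers(0, m, size=n)
        shifted = np.stack([np.roll(X[i], offsets[i]) for i in range(n)])
        if shifted.sum(axis=0).max() >= t_obs:
            exceed += 1
    return (exceed + 1) / (n_shifts + 1)


def markov_chain_rows(n, m, P, pi, rng):
    """n independent rows of length m from a stationary Markov chain
    with transition matrix P and stationary distribution pi."""
    K = len(pi)
    X = np.empty((n, m), dtype=int)
    for i in range(n):
        X[i, 0] = rng.choice(K, p=pi)
        for j in range(1, m):
            X[i, j] = rng.choice(K, p=P[X[i, j - 1]])
    return X


# usage sketch with a 4-state chain; the uniform transition matrix is a
# placeholder, not the matrix used in the paper's simulations
rng = np.random.default_rng(4)
P = np.full((4, 4), 0.25)
pi = np.full(4, 0.25)
X = markov_chain_rows(n=20, m=200, P=P, pi=pi, rng=rng)
p_value = cyclic_shift_pvalue(X)
```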
in this case, the cyclic conditional distribution may be written as the distribution of the cyclic shift -value is given by here where is the observed value of , and represents the event .note that as the number of cyclic shifts increases , the -value will converge in probability to our principal result requires an invariance condition on the global statistic .informally , the condition ensures that does not give special treatment to any column of the data matrix ..1 in * definition : * a statistic is _ invariant under constant shifts _ if whenever is obtained from by applying the _ same _ cyclic shift to each row of ..1 in the maximum column sum statistic used in the cyclic shift testing procedure is clearly invariant under constant shifts .more generally , any statistic of the form where is an arbitrary local statistic ( not necessarily a sum ) , and is invariant under cyclic shifts will be invariant under constant shifts .the following result establishes the asymptotic validity of the cyclic shift procedure in this general setting .let be a random matrix whose rows are independent copies of a first - order stationary ergodic markov chain with countable state space and transition probabilities .suppose that where in the second condition we define to be . here and denote the one- and two - dimensional marginal distributions of the markov chain , respectively .for let be a statistic that is invariant under constant shifts .then tends to zero in probability as tends to infinity .the first condition in ( [ conds ] ) ensures that there are not deterministic transitions between the states of the markov chain .the second condition can be expressed equivalently as implies .the proof of theorem [ thm1 ] is given in section [ sec4 ] .as an immediate corollary of the theorem , we find that tends to zero in -probability as tends to infinity .thus , under the conditions of the theorem , when and are large , the percentile based -value will be close to the true conditional probability that exceeds the observed value of .if we define to be , where the expectation is taken under , then conditional convergence also yields the unconditional result as tends to infinity .thus , under the assumptions of theorem [ thm1 ] , the percentile based -value provides asymptotically correct type i error rates .theorem [ thm1 ] can be extended in a number of directions . under conditions similar to those in ( [ conds ] )the theorem extends to matrices whose rows are independent copies of a order ergodic markov chain , where is fixed and finite . the theorem can also be extended to settings in which the rows of are independent stationary ergodic markov chains with _different _ transition probabilities . in this casewe require that the conditions ( [ conds ] ) hold for each row - chain .theorem [ thm1 ] can also be extended to the setting in which the rows of are independent copies of a first - order stationary ergodic markov chain with a continuous state space and a transition probability density . 
the existence of the transition probability density obviates the need for the first condition in ( [ conds ] ) and the analysis of lemmas [ lem2 ] and [ lem3 ] in section [ sec4 ] .the second condition of ( [ conds ] ) is replaced by the assumption where and denote the one- and two - dimensional marginal densities of the markov chain , respectively .markovity and ergodicity ensure that converges weakly to a pair consisting of independent copies of , and therefore condition ( [ op1c ] ) holds if the ratio is continuous on .thus theorem [ thm1 ] applies , for example , to standard gaussian ar(1 ) models . as in the discrete case, one may extend the theorem to settings in which the rows of are independent stationary ergodic markov chains with _different _ transition probabilities , provided that ( [ op1c ] ) holds for each row - chain . herewe present simulation results illustrating the resampling distributions and defined above .each simulation was conducted using an matrix with independent , identically distributed rows generated by a stationary first - order -state markov chain with a fixed transition matrix .figure [ fig1 ] shows empirical cumulative distribution functions ( cdfs ) and based on simulations conducted with , and or 50 .each panel is based on an observed matrix produced by the markov chain , and the results presented here are representative of those obtained from other simulations .based on theorem [ thm1 ] , we expect the cdfs to converge as the number of columns increases .accordingly , the two curves in each panel of part b of figure [ fig1 ] ( = 50 ) exhibit a greater level of concordance than those in part a ( = 10 ) .additional simulation results based on an ar(1 ) model are presented in section [ app ] , the appendix .in tumor studies dna copy number values for each subject are measured with respect to a normal reference , typically either a paired normal sample or a pooled reference . in the autosomesthe normal dna copy number is two .underlying genomic instability in tumor tissue can result in dna copy number gains and losses , and often these changes lead to increased or decreased expression , respectively , of affected genes ( pinkel and albertson 2005 ) .some of these genetic aberrations occur at random locations throughout the genome , and these are termed _sporadic_. in contrast , _ recurrent _ aberrations are found in the same genomic region in multiple subjects .it is believed that recurrent aberrations arise because they lead to changes in gene expression that provide a selective growth advantage .therefore regions containing recurrent aberrations are of interest because they may harbor genes associated with the tumor phenotype .distinguishing sporadic and recurrent aberrations is largely a statistical issue , and the cyclic shift procedure was designed to perform this task .dna methylation values for a given subject are also measured with respect to a paired or pooled normal reference .although dna methylation values are not constant across the genome , even in normal tissue , at a fixed location they are quite stable in normal samples from a given tissue type .epigenetic instability can disrupt normal methylation patterns , leading to methylation gains and losses , and these changes can affect gene expression levels ( laird 2003 ) .regions of the genome that exhibit recurrent hyper- or hypo - methylation in tumor tissue are of interest . 
in many applications more than one atypical marker may be present, and as a result multiple columns may produce summary statistics with extreme values. in tumor tissue, for example, underlying genomic instability can result in gains and losses of multiple chromosomal regions; likewise, epigenetic instability can lead to aberrant patterns of dna methylation throughout the genome. in order to identify multiple atypical markers and assess their statistical significance, it is necessary to remove the effect of each discovered marker before initiating a search for the next marker. this task is carried out by a process known as _peeling_. several peeling procedures have been proposed in the literature, including those employed by gistic (beroukhim et al. 2007) and dinamic (walter et al. 2011). in the applications here we make use of the procedure described in detail in walter et al. (2011). walter et al. (2011) used the cyclic shift procedure to analyze the wilms tumor data of natrajan et al. (2006). here we apply the procedure to the lung adenocarcinoma dataset of chitale et al. (2009), with n = 192 samples and m = 40478 markers. we detected a number of highly significant findings under the null hypothesis that no recurrent copy number gains or losses are present. table [tab1] lists the genomic positions of the three most significant copy number gains and losses, as well as neighboring genes, most of which are known oncogenes and tumor suppressors. strikingly, in their comprehensive investigation of the disease, weir et al. (2007) detected highly significant gains of the oncogenes _tert_ and _arnt_, among others; each of these appears in table [tab1]. the loss results for chromosomes 8 and 9 in table [tab1] are also highly concordant with previous findings of weir et al. (2007) and wistuba et al. (1999). weir et al. (2007) detected chromosomal loss in a broad region of 13q that contains the locus in table [tab1], but it is not clear whether the target of this region is the known tumor suppressor _rb1_ or some other gene. table [tab1]: genomic locations of the three most significant dna copy number gains (top table) and losses (bottom table) found by applying the cyclic shift procedure to the lung adenocarcinoma dataset of chitale et al. (2009). genome-wide association studies (gwas) are used to identify genetic markers, typically _single nucleotide polymorphisms_ (snps), that are associated with a disease of interest.
when conducting a gwas involving a common disease and alleles with small to moderate effect sizes, large numbers of cases and controls are required to have adequate power to detect disease snps (pfeiffer et al. 2009). the wellcome trust case control consortium (wtccc 2007) performed a genome-wide association study of seven common familial diseases: bipolar disorder (bd), coronary artery disease (cad), crohn's disease (cd), hypertension (ht), rheumatoid arthritis (ra), type i diabetes (t1d), and type ii diabetes (t2d), based on an analysis of 2000 separate cases for each disease and a shared set of 3000 controls. we applied the inverse of the standard normal cumulative distribution function to the cochran-armitage trend test p-values from the wtccc study, a transformation that produces z-scores whose values are similar to those exhibited by a stationary process. we then analyzed the matrix whose entries are negative thresholded z-scores, arranged in rows corresponding to the seven disease phenotypes. as seen in figure [fig2], a number of regional markers on chromosome 6 produce extremely large column sums. these markers lie in the major histocompatibility complex (mhc), which is noteworthy because mhc class ii genes have been shown to be associated with autoimmune disorders, including ra and t1d (fernando et al. 2008). when applied to this matrix, cyclic shift testing identified several highly significant, apparently pleiotropic snps in the mhc region that produced large entries in the rows corresponding to both ra and t1d, including rs9270986, which is upstream of the ra and t1d susceptibility gene _hla-drb1_. the wtccc dataset serves as a proof of principle for cyclic shift testing applied to gwas, although the use of a common set of controls may create modest additional correlation not fully captured by the cyclic shifts. we note that the cyclic shift procedure applied to gwas is sensitive only to small p-values that occur in multiple studies. thus the procedure is qualitatively different from typical meta-analyses, such as zeggini et al. (2008), which can be sensitive to large observed effects from a single study. let be a random matrix whose rows are independent realizations of a first-order stationary ergodic markov chain with countable state space. denote the distribution of in by. let and denote, respectively, the stationary distribution and the one-step transition probability of the markov chain defining the rows of. let denote the joint probability mass function of contiguous variables in the chain. thus the vectors have common probability mass function. in what follows we assume that ([conds]) holds. the ergodicity assumption on the markov chain ensures that the joint probability mass function of converges to the joint probability mass function of the pair where are independent with the same distribution as. it follows that for each row the ratio in ([op1]) is stochastically bounded (i.e., bounded in probability) under as tends to infinity. suppose for the moment that is fixed. for any integer $r$, define the cyclic shift $\sigma_r$ on sequences of length $m$ by $\sigma_r(x_0, \ldots, x_{m-1}) = (x_{[r]}, x_{[r+1]}, \dots, x_{[r+(m-1)]})$, where $[k] = k \mbox{ mod } m$, substantially reducing notation. for each define to be the matrix with rows. if, then it is easy to verify that. let be the set of cyclic shifts of. let and be the true conditional and cyclic conditional distributions given, defined by and, respectively.
in order to compare the distributions and we introduce two closely related distributions , and , that are more amenable to analysis .let be a ( random ) measure on defined by ^n }\eta(\sigma_{{\mbox{\scriptsize \bf r}}}({\mbox{ } } ) ) \ , i ( \sigma_{{\mbox{\scriptsize \bf r}}}({\mbox{ } } ) \in a ) , \ ] ] where = \{0 , 1 , \ldots , m-1\}\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x ] , but this may be rewritten as } p_m(\sigma_{s}({\mbox{}}_{i \cdot}))}\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x\bf x ] . )it follows from the definition of and equation ( [ rhodelt ] ) that } p_m(\sigma_s({\mbox{}}_{i \cdot } ) ) } \right ] \\ & = & \prod_{i=1}^n \rho_{r_i + k}({\mbox{}}_{i \cdot } ) \ , \gamma_m^{-1}({\mbox{}}_{i \cdot } ) \ = \\gamma_m^{-1}({\mbox{ } } ) \ , \prod_{i=1}^n \rho_{r_i + k}({\mbox{}}_{i \cdot})\end{aligned}\ ] ] where .the assumptions of the theorem ensure that the random variables , , in the row of are the initial terms of a stationary ergodic process , and therefore the same is true of the non - negative random variables , , .note that the random variable can not be included in this sequence because it involves the non - adjacent variables and .it is easy to see that from the ergodic theorem and the fact that is stochastically bounded ( see ( [ op1 ] ) ) , it follows that and therefore and are equal to as well .( here and in what follows the stochastic order symbols and refer to the underlying measure with tending to infinity ) . for ^n m ] and let \setminus v_0({\mbox{}}) ] . combining the relation ( [ gmop ] ) with inequality ( [ prdiff ] ) and equation ( [ rhodelt ] ) ,we conclude that & \leq & \gamma_m^{-1}({\mbox{ } } ) \cdot \frac{1}{m^n } \sum_{{{\mbox{\scriptsize \bf r } } } \in [ m]^n } \left| \frac{1}{m } \sum_{k \in [ m ] } \ , \prod_{i=1}^n \rho_{r_i + k}({\mbox{}}_{i \cdot } ) - 1 \ , \right| \ + \ & = & o_p(1 ) \cdot \frac{1}{m^n } \sum_{{{\mbox{\scriptsize \bf r } } } \in [ m]^n } \left| \frac{1}{m } \sum_{k \in [ m ] } \ , \prod_{i=1}^n \rho_{r_i + k}({\mbox{}}_{i \cdot } ) - 1 \ , \right| \ + \o_p(1 ) \nonumber \\[.1 in ] \label{pdbnd } & = & o_p(1 ) \cdot \frac{1}{m^n } \sum_{{{\mbox{\scriptsize \bf r } } } \in [ m]^n } \left| \frac{1}{m } \sum_{k \in v_1({{\mbox{\scriptsize \bf r } } } ) } \ , \prod_{i=1}^n \rho_{r_i + k}({\mbox{}}_{i \cdot } ) - 1 \ , \right| \ +\ o_p(1 ) \ , \delta_m \ + \o_p(1 ) \end{aligned}\ ] ] where in the last line ^n } \frac{1}{m } \sum_{k \in v_0({{\mbox{\scriptsize \bf r } } } ) } \ , \prod_{i=1}^n \rho_{r_i + k}({\mbox{}}_{i \cdot } ) .\ ] ] as the upper bound in ( [ pdbnd ] ) is independent of our choice of , it is enough to show that the first two terms in ( [ pdbnd ] ) are .concerning the first term , by markov s inequality it suffices to show that ^n } \e \left| \frac{1}{m } \sum_{k \in v_1({{\mbox{\scriptsize \bf r } } } ) } \prod_{i=1}^n \rho_{r_i + k'}({\mbox{}}_{i \cdot } ) - 1 \, \right| \ \to \ 0 \ \mbox { as } \m \to \infty .\ ] ] this follows from corollary [ nzcor ] below . 
as for the second term , note that ^n } \frac{1}{m } \sum_{k \in v_0({{\mbox{\scriptsize \bf r } } } ) } \ , \prod_{i : r_i + k \neq 0 } \rho_{r_i + k}({\mbox{}}_{i \cdot } ) \right ] \\[.1 in ] & = & o_p(1 ) \cdot \left [ \frac{1}{m^n }\sum_{{{\mbox{\scriptsize \bf r } } } \in [ m]^n } \frac{1}{m } \sum_{k \in v_0({\mbox{\scriptsize \bf r } } ) } \ , \prod_{i : r_i + k \neq 0 } \rho_{r_i + k}({\mbox{}}_{i \cdot } ) \right ] \end{aligned}\ ] ] the term in brackets is non - negative and has expectation at most .thus and the result follows .let , , be independent , real - valued stationary ergodic processes defined on the same underlying probability space .suppose that is bounded for , and define .let denote a vector with non - negative integer - valued components .for define random variables the independence of the processes ensures that .[ shift - lln ] under the assumptions above , converges to zero as tends to infinity .standard arguments show that the joint process is stationary and ergodic , and therefore the same is true for the process of products .the ergodic theorem implies that note also that which is bounded by assumption .fix and with .because the indices of in are assessed modulo , we may assume without loss of generality that .let be the distinct order statistics of , and note that .define , , and the differences for .consider the decomposition the key feature of the sum is this : for there are no `` breaks '' in the indexing of the terms in arising from the modular sum .in particular , there exist integers such that for each , and each in the sum defining . as a result , the stationarity and independence of the individual processes ensures that is equal in distribution to the random variable we now turn our attention to the expectation in the statement of the lemma .it follows immediately from the decomposition ( [ decomp ] ) that which yields the elementary bound taking expectations of both sides in the last display yields the inequality where the first equality follows from the distributional equivalence of and . in particular , for each integer we have it follows from ( [ deltinq1 ] ) and ( [ deltinq2 ] ) that the final term in the last display tends to zero with if is any sequence such that tends to infinity and converges to 0 .moreover , the final term does not depend on the vector .this completes the proof of the lemma ..2 in an elementary argument using lemma [ shift - lln ] establishes the following corollary .[ nzcor ] under the assumptions of lemma [ shift - lln ] , converges to zero as tends to infinity , where for each the sum is restricted to those $ ] such that .* proof of theorem [ thm1 ] : * theorem [ thm1 ] follows from theorem [ thm2 ] and the fact that for each the event as is invariant under constant shifts .high resolution genomic data is routinely used by biomedical investigators to search for recurrent genomic aberrations that are associated with disease .cyclic shift testing provides a simple , permutation based approach to identify aberrant markers in a variety of settings .here we establish finite sample , large marker asymptotics for the consistency of -values produced by cyclic shift testing .the results apply to a broad family of markov based null distributions . 
to our knowledge , this is the first theoretical justification of a testing procedure of this kind .although cyclic shift testing was developed for dna copy number analysis , we demonstrate its utility for dna methylation and meta - analysis of genome wide association studies .this research was supported by the national institutes of health ( t32 ca106209 for vw ) , the environmental protection agency ( rd835166 for faw ) , the national institutes of health / national institutes of mental health ( 1r01mh090936 - 01 for faw ) , and the national science foundation ( dms-0907177 and dms-1310002 for abn ) . 9 beroukhim , r. , getz , g. , nghlemphu , l. , barretina , j. , hsueh , t. , linhart , d. , vivanco , i. , lee , j.c . ,huang , j.h . ,alexander , s. , et al .( 2007 ) .`` assessing the significance of chromosomal aberrations in cancer : methodology and application to glioma , '' _ proc .sci . _ * 104 * 2000720012 .chitale , d. , gong , y. , taylor , b.s . ,broderick , s. , brennan , c. , somwar , r. , golas , b. , wang , l. , motoi , n. , szoke , j. , et al .( 2009 ) .`` an integrated genomic analysis of lung cancer reveals loss of _dusp4 _ in _ egfr_-mutant tumors , '' _ oncogene _ * 28 * 27732783 .fackler , m.j . ,umbricht , c.b . ,williams , d. , argani , p. , cruz , l - a . , merino , v.f . ,teo , w.w ., zhang , z. , huang , p. , visananthan , k. , et al . (`` genome - wide methylation analysis identifies genes specific to breast cancer hormone receptor status and risk of recurrence , '' _ cancer res ._ * 71 * 61956207 .fernando , m.m.a . ,stevens , c.r . ,walsh , e.c ., de jager , p.l . , goyette , p. , plenge , r.m . ,vyse , t.j . ,rioux , j.d .( 2008 ) . `` defining the role of the mhc in autoimmunity : a review and pooled analysis . '' _ plos genet . _ * 4*(4 ) : e1000024 .laird , p.w . , ( 2003 ) .`` the power and the promise of dna methylation markers , '' _ nature rev . _ * 3 * 253266 .misawa , k. , ueda , y. , kanazawa , t. , misawa , y. , jang , i. , brenner , j.c . ,ogawa , t. , takebayashi , s. , krenman , r.a .herman , j.g ., et al . ( 2008 ) .`` epigenetic inactivation of galanin receptor 1 in head and neck cancer , '' _ clin .cancer res . _* 14 * 76047613 .natrajan , r. , williams , r.d . ,hing , s.n . ,mackay , a. , reis - filho , j.s . ,fenwick , k. , iravani , m. , valgeirsson , h. , grigoriadis , a. , langford , c.f ., et al . ( 2006 ) . array cgh profiling of favourable histology wilms tumours reveals novel gains and losses associated with relapse , " _j. path . _* 210 * 49 58 .pfeiffer , r.m . ,gail , m.h . , and pee , d. , ( 2009 ) .`` on combining data from genome - wide association studies to discover disease - associated snps , '' _ stat .sci . _ * 24*(4 ) 547560 .pinkel , d. and albertson , d.g . , ( 2005 ) .`` array comparative genomic hybridization and its applications in cancer , '' _ nature genet . _ * 37 * s11 s17 .renard , i. , joniau , s. , van cleynenbreugel , b. , collette , c. , naome , c. , vlassenbroeck , i. , nicolas , h. , de leval , j. , straub , j. , van criekinge , w. , et al .( 2010 ) .`` identification and validation of the methylated _twist1 _ and _ nid2 _ genes through real - time methylation - specific polymerase chain reaction assays for the noninvasive detection of primary bladdar cancer in urine samples , '' _ eur .urology _ * 58 * 96104 .selamat , s.a . ,chung , b.s . ,girard , l. , zhang , w. , zhang , y. , campan , m. , siegmund , k.d . ,koss , m.n ., hagan , j.a . 
,lam , w.l ., et al .`` genome - scale analysis of dna methylation in lung adenocarcinoma and integration with mrna expression , '' _ genome res ._ doi:10.1101/gr.132662.111 .thompson , j.r . ,attia , j. , and minelli , c. , ( 2011 ) .`` the meta - analysis of genome - wide association studies , '' _ briefings bioinf ._ * 12*(3 ) 259269 vire , e. , brenner , c. , deplus , r. , blanchon , l. , fraga , m. , didelot , c. , morey , l. , van eynde , a. , bernard , d. , vanderwinder , j - m . , et al . ( 2006 ) . ``the polycomb group protein ezh2 directly controls dna methylation , '' _ nature _ * 439*(16 ) 871874 .walter , v. , nobel , a.b . , and wright , f.a .( 2011 ) .`` dinamic : a method to identify recurrent dna copy number aberrations in tumors , '' _ bioinformatics _ * 27*(5 ) 678685 .weir , b.a . , woo , m.s . ,getz , g. , perner , s. , ding , l. , beroukhim , r. , lin , w.m . , province , m.a . ,kraja , a. , johnson , l.a ., et al . ( 2007 ) .`` characterizing the cancer genome in lung adenocarcinoma , '' _ nature _ * 450 * 893901 . the welcome trust case control consortium , ( 2007 ). `` genome - wide association study of 14,000 cases of seven common diseases and 3,000 shared controls , '' _ nature _ * 447*(7 ) 661678 .wistuba , i.i . ,behrens , c. , virmani , a.k . , milchgrub , s. , syed , s. , lam , s. , mackay , b. , minna , j.d . , and gazdar , a.f .( 1999 ) .`` allelic losses at chromosome 8p21 - 23 are early and frequent events in the pathogenesis of lung cancer , '' _ cancer research _ * 59 * 19731979. zeggini , e. , scott , l.j . , saxena , r. , and voight , b.f ., for the diabetes genetics replication and meta - analysis ( diagram ) consortium ( 2008 ). `` meta - analysis of genome - wide association data and large - scale replication identifies additional susceptibility loci for type 2 diabetes , '' _ nature genet . _ * 40*(5 ) 638645 .
genomic aberrations , such as somatic copy number alterations , are frequently observed in tumor tissue . recurrent aberrations , occurring in the same region across multiple subjects , are of interest because they may highlight genes associated with tumor development or progression . a number of tools have been proposed to assess the statistical significance of recurrent dna copy number aberrations , but their statistical properties have not been carefully studied . cyclic shift testing , a permutation procedure using independent random shifts of genomic marker observations on the genome , has been proposed to identify recurrent aberrations , and is potentially useful for a wider variety of purposes , including identifying regions with methylation aberrations or overrepresented in disease association studies . for data following a countable - state markov model , we prove the asymptotic validity of cyclic shift -values under a fixed sample size regime as the number of observed markers tends to infinity . we illustrate cyclic shift testing for a variety of data types , producing biologically relevant findings for three publicly available datasets .
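to make the permutation scheme concrete , the following is a minimal sketch ( python / numpy ) of cyclic shift testing as described above : each subject's row of marker scores receives an independent random cyclic shift , a summary statistic is recomputed for every permutation , and each marker's observed statistic is compared against the permutation distribution of the maximum to control the family - wise error rate . the function names , the column - sum statistic , and the max - based comparison are illustrative choices and not the authors' exact implementation .

```python
import numpy as np

def cyclic_shift_null(X, n_perm=1000, rng=None):
    """Permutation null distribution of the maximum column sum, obtained by
    applying an independent random cyclic shift to each subject's row of X.

    X : (n_subjects, n_markers) array of marker scores (e.g. copy number).
    """
    rng = np.random.default_rng(rng)
    n, m = X.shape
    max_stats = np.empty(n_perm)
    for b in range(n_perm):
        shifts = rng.integers(0, m, size=n)            # one shift per subject
        shifted = np.array([np.roll(X[i], shifts[i]) for i in range(n)])
        max_stats[b] = shifted.sum(axis=0).max()       # summary statistic
    return max_stats

def marker_pvalues(X, n_perm=1000, rng=None):
    """Upper-tail p-value for each marker's observed column sum, compared
    against the permutation distribution of the maximum (family-wise control)."""
    obs = X.sum(axis=0)
    null_max = cyclic_shift_null(X, n_perm, rng)
    exceed = (null_max[None, :] >= obs[:, None]).sum(axis=1)
    return (1 + exceed) / (1 + n_perm)
```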
sensory information is often encoded in irregularly spiking neural populations .one well - studied example is given by direction - selective cells in area mt , whose firing rates depend on the degree and direction of coherent motion in the visual field . individual neurons in mt as in many other brain areas exhibit noisy and variable spiking , as can be modeled by poisson point processes . moreover , this variable spiking is generally not independent from cell to cell . returning to our example ,a number of studies have measured pairwise correlations in mt during direction discrimination tasks as well as smooth - pursuit eye movements ; while this measurement is a subtle endeavor experimentally , a number of studies suggest a value near ( summarizes these observations , for a number of brain areas . ) what are the consequences of correlated spike variability for the speed and accuracy of sensory decisions ? the role of pairwise correlations in stimulus encoding has been the subject of many prior studies .the results are rich , showing that correlations can have positive , negative , or neutral effects on levels of encoded information .the present study serves to extend this body of work in two ways .first , as done in a different context by , we contrast the impact of correlations that have the same pairwise level but a different structure at higher orders .second , as in , we consider the impact of correlations on decisions that unfold over time , by combining a sequence of samples observed over time in the sensory populations .a classical example that we will use to describe and motivate our studies is the _ moving dots _ direction discrimination task . here , a fraction of dots in a visual display move coherently in a given direction , while the remainder display random motion ; the task is to identify the direction from two possible alternatives .decisions become increasingly accurate as subjects take ( or are given ) longer to make the decision . 
in analyzing decisions that develop over time ,we utilize a central result from sequential analysis .this is the sequential probability ratio test ( sprt ) , which linearly sums the log - odds of independent observations from a sampling distribution until a predetermined evidence threshold is reached .the sprt is the optimal statistical test in that it gives the minimum expected number of samples for a required level of accuracy in deciding among two task alternatives .we pose two related questions based on the sprt .first , how does the presence of correlated spiking in the sampled pools impact the speed and accuracy of decisions produced by the sprt ?our focus is on how the structure of population - wide correlations determines the answer .second , how does the presence of correlated spiking impact the computations that are necessary to perform the sprt ?this question is intriguing , because the sprt may be performed via the simple , linear computation of integrating spikes over time and across the populations for a surprisingly broad class of inputs , including independent poisson spike trains .thus , in this setting optimal decisions can be made by integrator circuits .our goal here is to determine whether and when this continues to hold true for correlated neural populations .we answer these questions for two illustrative models of correlated , poissonian spiking .we emphasize that the spikes that these models produce are indistinguishable at the level of both single cells and pairs of cells .however , they differ in higher - order correlations , in that they can only be distinguished by examining the statistics of three or more neurons . in the first model ,correlations are introduced via shared spike events across the entire pool . in this caseoptimal inference via the sprt produces fast and accurate decisions , but depends on a nonlinear computation . as a result ,the simpler computation of spike integration requires , on average , longer times to reach the same level of accuracy .in contrast , when shared spiking events are more frequent but are common to fewer neurons within a pool , performance under the sprt is significantly diminished . however , in this case both sprt and spike integration perform comparably , so a linear computation can produce decisions that are close to optimal .we begin by introducing the notation for the two decision making models that will be compared . in this studywe consider the case of discrimination between two alternatives , and therefore model two populations of neurons that encode the strength of evidence for each alternative .returning to the moving dots task for illustration , each population could be the set of mt cells that are selective for motion in a given direction . here, the firing rates in each population represents the dot motion via their firing rates and ; here the subscripts indicate the preferred " and null " populations , which correspond to the motion direction of the visual stimulus versus the alternate direction . in this way , the firing rate of neurons encoding the preferred direction will be higher than the null direction , . following ( see also ) , we model this relationship as linear : throughout the text we consider present results at ,however the results do not depend on this particular value of dot motion or its precise relationship firing rate . in our model, we assume that each population consists of neurons firing spikes via a homogenous poisson process , with rate or .we use the notation to each spike train . 
integrating these processes over a time interval two time series of vectors of poisson random variables ; these independent vectors provide the input to the decision making models .specifically , for the neuron in a pool , on the time step , the properties of poisson processes imply that is independent from ( ) , i.e. for different time steps .however , the outputs of different neurons in the same time are not , in general , independent . following experimental observations that neurons with similar directional tuning tend to be correlated , while those with very different tuning are not , we model neurons from different pools as independent and those within a single pool as correlated with a correlation coefficient : }{\sqrt{\text{var}[s^i_{k}]\text{var}[s^i_{l}]}},\;\ ; k\neq l. \label{eq : rho}\ ] ] this implies that , with vector notation for the probability distribution of spike counts for each pool , = p[{\boldsymbol{s^i_p}}]p[{\boldsymbol{s^i_n}}].\ ] ] next , we introduce notation for decision making between the two task alternatives .the task of determining , e.g. , direction in the moving dots task is that of determining which of the two pools fires spikes with the higher firing rate .we frame this as decision making between the hypotheses where each alternative corresponds to a decision as to the motion direction .this formalism allows us to define accuracy as the fraction of trials on which the correct hypothesis is accepted . in this studywe consider decision making tasks at a fixed level of difficulty , so that and do not vary from trial to trial ( i.e. , this hypothesis test is simple and not composite ) .we relate the decision making task to a discrete random walk , which follows in turn from the sequential accumulation of independent and identically distributed ( iid ) realizations from the sampling distribution . we will specify this distribution below ; for now , we note that the random walk takes the general form : in a drift - diffusion model of decision making , accumulation continues as long as , the decision threshold .the number of increments necessary to cross one of the two increments multiplied by its duration defines the decision time ; this is a random variable , as it varies from trial to trial .crossing the threshold corresponding to is interpreted as a correct trial ; the fraction of correct ( fc ) trials defines the accuracy of a the model .together , the expected ( mean ) decision time ( ) and accuracy ( ) determine the performance of a decision making model . formulas for the mean decision time and accuracy are given in wald as a function of the sampling distribution and the decision threshold .importantly , these formulas are exact under the assumption that the final increment in does not overshoot the threshold , a point we return to below . given the moment generating function for the sampling distribution : ,\ ] ] speed and accuracy are given by : } \tanh \left ( \frac{-h_0 \theta } { 2}\right ) \label{eq : rt}\end{aligned}\ ] ] where is the nontrivial root of , i.e. 
we notice here that as increases ( and assuming ) , both and will increase .we now return to the definition of the random increments .we consider two different ways in which this can be done .first , in the spike integration ( si ) model , increments are constructed by counting the spikes emitted in a window by the preferred pool , and subtracting the number emitted by the null pool .this is equivalent to the time evolution of a neural integrator model that receives spikes as impulses with opposite signs from the preferred and null populations .this integrate - to - bound model is an analog of drift - diffusion model ( ddm ) with inputs that are not white noise " , but rather poisson spikes : , cf . .second , in the sequential probability ratio test ( sprt ) , the increment is defined as the log - odds ratio of observing the spike count from both of the pools , under each of the two competing hypothesis : [{\boldsymbol{s^i_n}}|h_1]}{p[{\boldsymbol{s^i_p}}|h_0]p[{\boldsymbol{s^i_n}}|h_0 ] } \right ] \label{eq : sprtsamplingrv}\ ] ] present an analysis of speed and accuracy of decision making based on independent neural pools ; for completeness , and to help contrast this result with the correlated case , we give the key calculations in appendices [ sec : mgf_sprt ] , [ sec : siuncorr ] . here , choosing increments via the sprt yields : & = & \delta t n\left ( \lambda_p-\lambda_n \right)\log \left ( \frac{\lambda_p}{\lambda_n}\right ) .\label{eq : indsprtez } \end{aligned}\ ] ] under the spike integration model , zhang and bogacz ( see also appendix [ sec : siuncorr ] ) find that : & = & \delta t n\left ( \lambda_p-\lambda_n \right).\label{eq : uncorrsiez } \end{aligned}\ ] ] therefore , by applying a change of variables in equations [ eq : fc ] and [ eq : rt ] , spike integration can implement the sprt .the implication is that simply counting spikes , positive for one pool and negative for the other , can implement statistically optimal decisions for when the neural pools are independent .we next describe two models for introducing correlations into the poisson spike trains of each neural population .both models are studied in , and rely on shared input from a single correlating process to generate the correlations in each pool .these authors termed the two model sip and mip for single- and multiple - interaction process ; here we use the added descriptors additive " and subtractive . " in both models , a realization of correlated spike trains that provide the input to the accumulation models is achieved via a common correlating train . before describing the models in detail, we note that in this study , these models are statistical approaches chosen to illustrate a range of impacts that correlations can have on decision making ( see also ) . in contrast , in neurobiological networks , correlated spiking arises as through a complex interplay of many mechanisms , including recurrent connectivity and shared feedforward interactions ( for example , ) . 
while beyond the scope of the present paper , avenues for bridging the gap between statistical and network - based models of correlations in the context of decision making are considered in the discussion .the first case is the additive ( sip ) model , in which the spike train for each neuron is generated as the sum of two homogenous poisson point processes .the first poisson train is generated with an overall firing rate of , where is the intended firing rate of the neuron , and is the intended pairwise spike count correlation between any two neurons in the pool .the second train , with a rate of , is added to every neuron in the pool , and serves as the common source of correlations .an example of this model of spike train generation is depicted in the rastergrams in figure [ fig : intsprtoverview]a and b ; the common spike events are evident as shared spikes across the entire population .the second case is the subtractive ( mip ) model , in which correlated spikes are generated through random , independent deletions from an original mother " spike - train ; we refer to this as the correlating spike train .there is a separate correlating spike train for each of the two independent populations . in order to achieve an overall firing rate for the pool of spikes per second , with a pairwise correlation between any two individual neurons ,the correlating train has a rate of spikes per second .then , for each neuron in the pool , a spike is included from this train iid with a probability of .an example of this model of spike train generation is depicted in the rastergrams in figure [ fig : intsprtoverview]d and e. in summary , the two models both include correlated spike events that originate in from a single mother . "although they produce identical correlations among cell pairs , these events are distributed in different ways across the entire population .we note that the results of can be seen as a limiting case as of either the additive ( sip ) or subtractive ( mip ) models . for preferred ( a , d ) and null ( b , e ) populations of neurons , with spike count correlation within pools . in ( c ,f ) these spikes are either integrated ( black line ) or provide input for the sprt ( gray line ) , until a decision threshold is reached .the decision threshold has been set so that all four cases will yield the same mean reaction time ( in c , and , and in f and ; in both cases the sprt lines have been scaled for plotting purposes ) . on these trials , the sprt accumulator crosses the correct " , upper , threshold , as opposed to the incorrect " , lower , threshold for the spike integrator . unlike the independent case ,the time evolution of the spike integration process is not simply a scaled version of the sprt ( though they are clearly similar ) under either model of correlations.,width=576 ]we now study the impact of subtractive ( mip ) correlations on decision making performance .as noted above , recall that within a time window , the spike counts from each neuron form a vector of random variables which are independent from window to window .these independent vectors provide the evidence for each of the two alternatives , which is then weighed via log - likelihood at each step in sprt . 
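as a concrete illustration of the two constructions just described , the sketch below ( python / numpy ) draws per - bin spike counts for a single pool under the additive ( sip ) and subtractive ( mip ) models . the rate bookkeeping follows the text ( independent counts at rate ( 1 - rho ) * lambda plus a shared count at rate rho * lambda for sip ; a mother count at rate lambda / rho thinned independently with keep - probability rho for mip ) , but the function names and the count - level rather than spike - time simulation are our own simplifications . both pools should recover mean rate lambda and pairwise correlation rho , in line with the construction above .

```python
import numpy as np

def sip_counts(n_neurons, rate, rho, dt, n_bins, rng):
    """Additive (SIP) pool: each neuron's count is an independent Poisson count
    at rate (1-rho)*rate plus one shared Poisson count at rate rho*rate."""
    indep = rng.poisson((1.0 - rho) * rate * dt, size=(n_bins, n_neurons))
    shared = rng.poisson(rho * rate * dt, size=(n_bins, 1))
    return indep + shared                              # shape (n_bins, n_neurons)

def mip_counts(n_neurons, rate, rho, dt, n_bins, rng):
    """Subtractive (MIP) pool: a mother train at rate rate/rho is thinned
    independently for each neuron with keep-probability rho."""
    mother = rng.poisson(rate * dt / rho, size=(n_bins, 1))
    return rng.binomial(np.repeat(mother, n_neurons, axis=1), rho)

rng = np.random.default_rng(0)
dt = 1e-3
for draw in (sip_counts, mip_counts):
    s = draw(n_neurons=50, rate=40.0, rho=0.2, dt=dt, n_bins=100_000, rng=rng)
    # empirical rate (spikes/s) and pairwise count correlation of two neurons
    print(draw.__name__, s.mean() / dt, np.corrcoef(s[:, 0], s[:, 1])[0, 1])
```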
in appendices [ sec : mgf_sprt ] and [ sec :mipsprtappendix ] , we compute the values and ] corresponding to decision making based on mother spikes of rate and .one consequence of this interpretation is that the particular realization of a spike vector ( in a sufficiently small time - bin ) carries no evidence about the decision of vs. , beyond its identity as either the zero vector or not .of course , this is a consequence of the construction of the mip model , as the spike deletions that create the realization of the spike vector have no dependence on the firing rate of the population .concretely then , the increments ( or decrements ) are based solely on whether the vector of spikes in the preferred ( or null ) pool contains any spikes at all ; the actual number of spikes is irrelevant in the sprt .it follows that the accumulation process is a discrete - space random walk , with steps . to see this , note thatfor sufficiently small , there are only three possibilities for how spikes will be emitted from the two populations .first , both the preferred and null pools could produce no spikes .this event provides no information to distinguish the firing rates of the pools , so the increment is 0 .second , one of the pools could produce a vector of spikes caused by iid deletions from the mother " spike train .if the spiking pool is the preferred one , each possible nonzero spike vector will increment the accumulator by the of the ratio ; the opposite sign occurs if the null pool spikes . events in which both pools spike are of higher order in , and thus become negligible for small time windows .the discrete nature of the sprt effect causes the curve in figure [ fig : mipsprt](a ) to take on only discrete values of accuracy ; a small increase in above a multiple of will not improve accuracy because on the final , threshold - crossing - step will overshoot the threshold .this also explains why some of the values at a given do not lie on the theoretical line defined by equation ; that equation is only exactly true in the case of zero overshoot past the threshold .we will return to this point later , and also in appendix [ sec : overshootappendix ] .we next insert the values for and ] : =\delta t n\left ( \lambda_p-\lambda_n \right ) \label{eq : mipsiimplicitez}\ ] ] the nontrivial root of the mgf is found to be the implicit solution of : here we see that correlations only impact the performance of the model through changing , as the expected increment is the same is in the independent case ( equation [ eq : uncorrsiez ] ) .moreover , performance under spike integration is diminished to a degree that is comparable to the performance loss of sprt . to illustrate this , figure [ fig : mipsi]a plots the speed - accuracy tradeoff curves from both models of decision making under subtractive correlations , for the same values of . as we must ( ) , we see the optimal character of the sprt in the fact that at a given level of accuracy , the sprt requires , on average , fewer samples than spike integration . however , the difference is very slight .this yields our next main result , that _ nearly optimal decisions are produced by the simple operation of linear integration over time for the mip model of spike correlations across neural populations . _ having established this , we pause to note a subtlety in our analysis . 
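for reference , the no - overshoot formulas ( [ eq : fc ] and [ eq : rt ] ) that generate these speed - accuracy curves can be evaluated directly once the nontrivial root and the expected increment are known . the sketch below ( python / numpy ) does this for the independent - pools sprt case , taking the root of the log - likelihood - ratio increment to be -1 and e[z] from equation ( [ eq : indsprtez ] ) ; the rates , pool size , and bin width are placeholder values , not values taken from the paper .

```python
import numpy as np

def wald_performance(theta, h0, EZ, dt):
    """No-overshoot Wald approximation for symmetric decision bounds at +/- theta.
    h0 : nontrivial root of the increment moment generating function,
    EZ : expected increment under the correct hypothesis,
    dt : duration of a single accumulation step.
    Returns (fraction correct, mean decision time)."""
    fc = 1.0 / (1.0 + np.exp(h0 * theta))
    mean_dt = dt * theta * np.tanh(-h0 * theta / 2.0) / EZ
    return fc, mean_dt

# independent pools, SPRT increments: h0 = -1, E[Z] as in eq. (indsprtez);
# rates, pool size, and bin width below are placeholders, not paper values
lam_p, lam_n, N, dt = 40.0, 20.0, 100, 1e-3
EZ = dt * N * (lam_p - lam_n) * np.log(lam_p / lam_n)
for theta in (1.0, 3.0, 5.0):
    print(theta, wald_performance(theta, h0=-1.0, EZ=EZ, dt=dt))
```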
figures [ fig : mipsi]b and c show and as a function , for both simulated data and plots of equations and [ eq : rt ] .the solid lines are the graphs of those equations as written ( using the values for and ] in equation [ eq : sipsprtez ] .to explain the form of the scale factor , note that the spike vector from each pool is composed of independent spike trains firing at rate , and a single ( highly redundant ) spike train firing at a rate .as in the subtractive ( mip ) model , here also becomes a discrete random walk with increment .this can be seen by noting that for either pool , in a sufficiently small window , only one of two events is possible : ( i ) no spikes occur at all , or ( ii ) a single spike occurs in one neuron , in one of the two pools .the first case is uninformative about either or .the second case occurs with probability under and under ( here if the spike occurred in the preferred pool , for example ) ; taking the log ratio , we find our increment is independent of correlations . the resulting decision accuracy ( fc ) is plotted vs. threshold in figure [ fig : sipsprt]a , and is qualitatively similar to the subtractive ( mip ) correlations case , with plateaus following from the discrete nature of .however , the speed - accuracy tradeoff pictured in figure [ fig : sipsprt]b is very different from that found in the subtractive ( mip ) model .in particular , we see our third main result : _ the impact of additive correlations on optimal ( sprt ) decision performance is relatively minor . _ for example , in the presence of pairwise correlations as strong as , the mean decision time required to reach a typical value of accuracy is increased by only a few milliseconds compared with the independent case , instead of by hundreds of milliseconds as for subtractive correlations .equation [ eq : sipsprtez ] offers an intuitive explanation for this fact : ] from equations [ eq : sipsih0maintext ] and [ eq : sipsiezmaintext ] ( and thereby assuming no overshoot of the decision threshold ) .performance is similar to the subtractive - correlations case ( broken gray lines ) , and significantly worse than performing sprt on additive - correlated inputs ( solid gray lines ) .( b ) at , for example , major differences arise between this theory ( again , solid black line , reproduced from a ) and simulation of the model ( dots ) , especially at short reaction times .this is a consequence of significant overshoot of over the decision threshold , on the threshold crossing step .( inset ) at short reaction times , the simulations actually perform closer to the sprt ( gray line , reproduced from figure [ fig : sipsprt]a ) ; see text.,width=576 ] what about the ability of the simple spike integrator to perform decision making when confronted with additive correlations ? 
proceeding as in the subtractive - correlations case , we derive an implicit relationship for , and the expected increment ] as varies ; crucially , this quantity varies significantly and for higher values of under sip correlations , resulting in the non - monotonic speed - accuracy tradeoff pictured in figure [ fig : sipsi].,width=576 ] for large thresholds , the sequential sampling theory of equations [ eq : fc ] and [ eq : rt ] , which assume no overshoot , accurately approximates the simulated data ; however for low values of the approximation is poor .in fact , the inset to figure [ fig : sipsi]b shows that in this regime , the decision making performance of the spike integration model is far better described by the theory predicted by the sprt .the intuition behind this observation is that for short reaction times , there is a small probability of a shared spike that will send the integrator significantly over the threshold .this allows accumulation to occur one spike at a time ( for sufficiently small ) , where each spike arrives from an independent spike train . as we have seen , the process of integrating independent spikes is equivalent to the sprt .it is only at longer decision times , when the chances of having integrated a large common spike event are larger , that a significant impact of correlations appears .figure [ fig : nonlinearity ] provides further evidence for this scenario .density plots of the distribution of the overshoot ( conditioned on crossing the upper threshold ) , for both additive ( sip ) and subtractive ( mip ) correlations are shown as a function of the decision threshold , with particular overshoot distributions plotted at and .for the additive correlations model , a significant fraction of the trials terminate with zero overshoot at low values of ( because , for example , large correlating events are relatively rare ) , implying that many trials underwent optimal accumulation of evidence , without experiencing a common , correlating spike event as discussed above .overall , the monotonic dependence of accuracy ( fc ) on decision time ( ) follows from the invariance of the moments of the overshoot distribution relative to changes in the threshold value ; this is particularly true for the first moment ( see appendix [ sec : overshootappendix ] ) .figure [ fig : nonlinearity](sip ) demonstrates that these moments continue to fluctuate over a larger range of , and with larger magnitude , for the additive correlations model .this serves to explain the strange shape of the speed - accuracy tradeoff curve pictured in figure [ fig : sipsi]b that ( unlike the subtractive correlations model ) can not be explained by a constant shift in .this stands in direct contrast to the optimality of linear summation in the zero - correlations case .( c ) a nonlinear computation also appears as a consequence of the additive correlations model , however the nonlinearity is much less severe than in the subtractive model .( all results pictured hold in the case of vanishing .),width=576 ] ) .( b ) spike integration with this nonlinearity is suggested by figure [ fig : nonlinearity]c , and recovers performance of the decision making model ( black dots ) to agreement with the results of sprt ( gray line ) . 
without this nonlinearity to discount shared events , performance suffers ( gray dots , reproduced from figure [ fig : sipsi]b , inset),width=576 ] when the neurons in each pool spike independently , zhang and bogacz demonstrated that linear summation of spikes across the two pools at each time step implements the sprt . because the sprt is optimal in the sense of minimizing for a prescribed level of , the conclusion is that linear integration of spikes across pools , and then across time , provides an optimal decision making strategy . however , is this optimality of linear integration confined to the case of independent activity within the pool ?above , we showed that when correlations are introduced into this model , it is no longer true that each spike should be given the same weight " , as in linear integration .moreover , knowing only the pairwise correlations and firing rates alone does not allow one to write down a rule for the function that should be applied to incoming spikes in order to implement the sprt , although in these cases this function takes the form of the difference between the result of a nonlinearity applied to both pools .this dependence on higher order statistics is demonstrated in figure [ fig : nonlinearity ] by the fact that the nonlinearities for mip correlations ( panel b ) and sip correlations ( panel c ) take a significantly different form . for mip correlations ,the nonlinearity pictured in figure [ fig : nonlinearity]b that implements the sprt ( up to a change in threshold ) takes the form : at first glance , it is surprising that such a severe nonlinearity , applied to two mip - correlated spiking pools , results in nearly the same performance is simple spike integration ( c.f .figure [ fig : mipsi ] ) .the intuition here is that optimal inference requires essentially performing spike integration on the correlating spike train , as no information about the firing rate is added through spike deletions .this random walk on one of three cases ( -1,0 , or + 1 ) is approximated by linear integration , in the limit as the size of the pool ( ) increases .another perspective on the nonlinearities that enable optimal computation is that they leverage knowledge about the mechanism of correlations , to improve performance . in the sip model , the nonlinear function depicted in figure [ fig : nonlinearity]c is , as in the mip case , a consequence of applying a nonlinearity to each pool , and then subtracting .however , in this case , the form is not as drastic a shared spike event coming from the correlating train only registers as a single spike : intuitively , this strategy uses the fact the a simultaneous spike in every neuron in a pool only has one explanation for a sufficiently small window of integration , and therefore uses the correlating spike train as an additional independent input in the likelihood ratio . at low values of ,this does not confer much of an advantage ; however as the threshold increases , higher accuracy is achievable at much shorter decision times .the nonlinearity is pictured figure [ fig : sipsinonlinear]a , and also offers an intuition as to why , for low threshold values , spike integration performs almost optimally : when spikes from the correlating train are rare ( or can be properly weighted ) , spike integration implements sprt ( figure [ fig : sipsinonlinear]b ) .neurons is constant for all . 
in contrast , the joint cumulants of the subtractive ( mip ) model decay geometrically as the pool size increases , and this difference helps to characterize the differences in higher - order correlations between the two models . ( see appendix [ sec : jointcumulantappendix ] for supplementary computations . ) correlated spiking among the neurons that encode sensory evidence appears ubiquitous . such correlations might arise from any number of neuroanatomical features , the simplest being overlapping feedforward connectivity , which can cause collective fluctuations across a population . they can also result from sensory events that impact an entire population , or from rapid modulatory effects . moreover , for large neural populations it appears that accurate descriptions of population - wide activity can require not only the typically measured pairwise correlations , but higher - order interactions as well . the aim of our study is to improve our understanding of how correlated activity in these populations can impact the speed and accuracy of decisions that require accumulating sensory information over time . faced with the wide range of possible mechanisms and structures of correlations alluded to above , we choose to focus on two models for population - wide correlations that illustrate a key distinction in how correlations can occur . these models have identical first - order and pairwise statistics , but differ in whether each common spiking event involves a small subset of the neurons ( the subtractive , mip case ) or every neuron in the pool ( the additive , sip case ) . figure [ fig : cumulant ] quantifies this difference : based on calculations in appendix [ sec : jointcumulantappendix ] , we plot the joint cumulant across neurons in a pool under both subtractive ( mip ) and additive ( sip ) correlations . while the additive model possesses a constant joint cumulant no matter how many neurons are included , the joint cumulant falls off geometrically with the number of neurons in the subtractive case . we conjecture that this is a statistical signature that could suggest when other , more general patterns of correlated activity measured experimentally or arising in mechanistic models of neural circuits will produce similar effects on decisions . exploring this conjecture via models and data is a target of our future research . we summarize our main findings as follows . for both models of correlated spiking , decisions produced by a simple , linear spike integration model ( i.e. , a neural integrator ) become slower and less accurate as correlations increase . however , a strong difference appears for decisions made via the optimal decision strategy ( sprt ) . here , additive correlations have only a minor impact on decision performance , while subtractive correlations continue to strongly diminish this performance . the conclusion is that decision making circuits , faced with subtractive ( mip ) correlated sensory populations , will invariably produce diminished decision performance , and stand little to gain by implementing computations more complex than a simple integration of spikes over time and neurons . however , in the presence of additive ( sip ) correlations , circuit mechanisms that implement or approximate the sprt perhaps via a nonlinearity such as that shown in fig .
[fig : sipsinonlinear ] applied to the sum of incoming spikes stand to produce substantially better decision performance than their linear counterparts .in other contexts , nonlinear computations have also been shown to improve discrimination between two alternatives .field and rieke demonstrated the importance of a thresholding nonlinearity in pooling the responses of rod cells , where this nonlinearity served to reject background " noise .closer to the present setting , gating inhibition that prevents accumulation of noise samples before the onset of evidence - encoding stimulus can account for visual search performance , and recent results suggest that related nonlinearities can improve performance for mistuned neural integrators ( , see also ) . our cases in which correlations decrease performance in particular , when spikes are linearly integrated are consistent with several prior studies of the role of correlated activity in decision making .we note , however , two differences in our models .the first is the mechanism through which correlated spikes are generated ; while we use additive and subtractive models based on poisson processes , the authors of use a multivariate gaussian description of spike counts .the second is that in , decisions are rendered after a duration that is fixed before the trial begins ( either a single duration , , or one that is drawn from a distribution of reaction times , in ) .this is different from the setting here , where incoming signal on each trial determines the reaction time through a bound crossing .our result , in the case of subtractive ( mip ) correlations , that linear integration of spikes closely approximates the optimal decision making strategy is similar to findings of beck et al .specifically , they model a dense range of differently tuned populations , and find that optimal bayesian inference can be based on linear integration of inputs , for a wide set of correlation models . our additive ( sip ) case , however , behaves differently , as nonlinearities are needed to achieve the optimal strategy .an aim of future work is extending the setting of our study to include tuning curves as in .this is more realistic for many decision tasks ( including the direction discrimination task ) , and will also allow progress toward models with multiple decision alternatives .an important challenge will come from defining pairwise correlations that vary as a function of preferred tuning orientation ( see ) , while also including the full structure of correlating events across multiple cells in a realistic way .for example , in the present paper , additive correlating events occurred independently in the two populations ; future work could take a more graded approach , in which some events impact the entire sensory population ( i.e. , as in an eyeblink or possibly an attentional shift during a visual task ) . 
as long as each neuron remains modeled as a poisson point process, the sequential accumulation theory utilized here will carry over directly .this points to another limitation of the present study and opportunity for future work .this is the lack of temporal correlations in the statistics of the inputs .a model of correlations that includes spikes from a correlating train that are temporally jittered could provide a starting place for a model of the input trains , however defining updates to the likelihood ratio for the two competing hypotheses will be more difficult .nevertheless , it will be interesting to see how our results carry over ; in particular , there will be many more different combinations of spike events that will contribute to increments for both spike integration and sprt decision models .while we therefore view the present study as a first step in exploring many possibilities , our findings demonstrate how the population - wide structure of correlations beyond pairwise correlation coefficients can strongly impact the speed and accuracy of decisions , and the circuit operations necessary to achieve optimal performance .this suggests that multi - electrode and imaging technologies , together with theoretical work on neural coding , will continue to play an exciting role in understanding the structure of basic computations like decision making over time .[ [ acknowledgements ] ] acknowledgements : + + + + + + + + + + + + + + + + + we thank yu hu , adrienne fairhall , and michael shadlen for their valuable comments on the manuscript .we gratefully acknowledge the support of a career award at the scientific interface from the burroughs welcome fund and nsf grant career dms-1026125 ( e. s. b. ) , and the university of washington escience institute s hyak computer cluster .the nontrivial real root of the moment generating function ( mgf ) of a sampling distribution is critical to finding and of an independently sampled sequential hypothesis test ( via equations [ eq : fc ] and [ eq : rt ] ) . for the sprt, the increment distribution is given in equation [ eq : sprtsamplingrv ] as : } { p[{\boldsymbol{s^i_p}},{\boldsymbol{s^i_n}}|h_0 ] } \right ] \label{eq : sprtsamplingrvappendix}\ ] ] the correct " hypothesis is in the numerator in order to orient a crossing of the positive decision threshold with a correct choice . correspondingly , the probability of observing a _ given _sample , is known from assumption of this hypothesis , and by definition follows the distribution : =p[{\boldsymbol{s^i_p}}|h_1]p[{\boldsymbol{s^i_n}}|h_1],\ ] ] where the independence assumption of the spike count vectors from the two separate pools and have allowed the factoring of the distribution . dropping the sampling index for notational convenience , the mgfcan then be computed as : =\sum_{{\boldsymbol{s_p}},{\boldsymbol{s_n } } } p[{\boldsymbol{s_p}}|h_1]p[{\boldsymbol{s_n}}|h_1 ] \left ( \frac{p[{\boldsymbol{s_p}}|h_1]p[{\boldsymbol{s_n}}|h_1]}{p[{\boldsymbol{s_p}}|h_0]p[{\boldsymbol{s_n}}|h_0 ] } \right)^{s}\ ] ] the nontrivial root ( ) can then be seen by inspection ( cf . 
equation [ eq : indsprth0 ] ) : we note that this computation is fully general , without any assumptions on the structure of correlations both within and across pools .the other parameter of the sampling distribution critical to computing the and functions , ] can be obtained in the limit as , by repeatedly expanding via taylor series about throughout the computation .first , we simplify the expression for the expected increment by using the independence of the two pools : = e\left[\log \frac{p[{\boldsymbol{s_p}}|h_1]}{p[{\boldsymbol{s_p}}|h_0]}|h_1\right ] + e\left[\log \frac{p[{\boldsymbol{s_n}}|h_1]}{p[{\boldsymbol{s_n}}|h_0]}|h_1\right ] \label{eq : twopartssipsprt}\ ] ] next we expand each term to first order in ; below , we only demonstrate the expansion for the preferred " population ; the calculation for the null pool follows by exchanging and . in that case , by using the law of total expectation conditioned on the number of spikes in the common spike train shared " across the pool ( which spikes at a rate ) , we have : }{p[{\boldsymbol{s_p}}|h_0]}|h_1\right ] = e\left[e\left[\log \frac{p[{\boldsymbol{s_p}}|h_1]}{p[{\boldsymbol{s_p}}|h_0]}|\hat{s}_p , h_1\right]|h_1\right]\ ] ] e\left[\log \frac{p[{\boldsymbol{s_p}}|h_1]}{p[{\boldsymbol{s_p}}|h_0]}|\hat{s}_p=\hat{s}_p , h_1\right ] \label{eq : lastequivalenteqn}\ ] ] }{p[{\boldsymbol{s_p}}|h_0]}|\hat{s}_p=0,h_1\right]+ \delta t\lambda_p e\left[\log \frac{p[{\boldsymbol{s_p}}|h_1]}{p[{\boldsymbol{s_p}}|h_0]}|\hat{s}_p=1,h_1\right]+ o(\delta t^2 ) \label{eq : sipsprtfirstexpansion}\ ] ] taking the case of , }{p[{\boldsymbol{s_p}}|h_0]}|\hat{s}_p=0,h_1\right ] = \sum_{{\boldsymbol{s_p } } } p[{\boldsymbol{s_p}}|\hat{s}_p=0,h_1 ] \log \frac{p[{\boldsymbol{s_p}}|h_1]}{p[{\boldsymbol{s_p}}|h_0 ] } \label{eq : sipderiv1}\ ] ] the aim here is to take advantage of the conditioning ; because the spike counts of neurons within the same pool are conditionally independent , given the number of spikes in the correlating spike train , the joint distribution across the vector becomes the product of the conditioned marginal distributions .however , this is only true for the first factor in the summand of equation [ eq : sipderiv1 ] . to continue , we must expand the log - ratio of the probability distributions , using the law of total probability , in : & = & \sum_{\hat{s}_p}^{\infty}p[\hat{s}_p|h_1]p[{\boldsymbol{s_p}}|\hat{s}_p=\hat{s}_p , h_1]\\ & = & ( 1-\rho\lambda_p\delta t)p[{\boldsymbol{s_p}}|\hat{s}_p=0,h_1 ] + \rho\lambda_p\delta t p[{\boldsymbol{s_p}}|\hat{s}_p=1,h_1 ] + o(\delta t^2)\end{aligned}\ ] ] & = & \sum_{\hat{s}_p}^{\infty}p[\hat{s}_p|h_0]p[{\boldsymbol{s_p}}|\hat{s}_p=\hat{s}_p , h_0]\\ & = & ( 1-\rho\lambda_n\delta t)p[{\boldsymbol{s_p}}|\hat{s}_p=0,h_0 ] + \rho\lambda_n\delta t p[{\boldsymbol{s_p}}|\hat{s}_p=1,h_0 ] + o(\delta t^2 ) \end{aligned}\ ] ] moreover , the summation in equation [ eq : sipderiv1 ] need only be over , as higher values will produce contributions of higher than first order in .two cases emerge for the expansion : if for any , = p[{\boldsymbol{s_p}}|\hat{s}_p=1,h_0 ] = 0 ] which is itself is o ) ; thus , \log \frac{p[{\boldsymbol{s_p}}|\hat{s}_p=0,h_1]}{p[{\boldsymbol{s_p}}|\hat{s}_p=0,h_0 ] } = n(1-\rho)\delta t \left ( \lambda_n - \lambda_p+\lambda_p \log \frac{\lambda_p}{\lambda_n}\right ) + o(\delta t^2)\label{eq : sipsprtfirstpart}\ ] ] the case of is simpler , as only zero - order terms must be kept ( due to the coefficient in equation [ eq : sipsprtfirstexpansion ] ) . 
recycling the expansion from equation [ eq : sprtsiprecycle ], we have that to zero - order : }{p[{\boldsymbol{s_p}}|h_0]}|\hat{s}_p=1,h_1\right ] = \sum_{{\boldsymbol{s_p } } } p[{\boldsymbol{s_p}}|\hat{s}_p=1,h_1 ] \log \frac{p[{\boldsymbol{s_p}}|h_1]}{p[{\boldsymbol{s_p}}|h_0 ] } = \log \frac{\lambda_p}{\lambda_n } + o(\delta t ) \label{eq : sprtsiplast}\ ] ] finally , combining equations [ eq : sipsprtfirstexpansion ] , [ eq : sipsprtfirstpart ] , and [ eq : sprtsiplast ] , we have that : }{p[{\boldsymbol{s_p}}|h_0]}|h_1\right ] & = & ( 1-\rho \lambda_p \delta t ) \left ( n(1-\rho)\delta t \left ( \lambda_n - \lambda_p+\lambda_p \log \frac{\lambda_p}{\lambda_n}\right ) + o(\delta t^2 ) \right ) \\ & + & \delta t \rho \lambda_p \left ( \log \frac{\lambda_p}{\lambda_n } + o(\delta t^2 ) \right)\end{aligned}\ ] ] repeating the exercise for the other component of equation [ eq : twopartssipsprt ] amounts to exchanging " for " ; adding everything together gives the final result , to first - order in : = \left(n(1-\rho)+\rho\right ) \delta t ( \lambda_p-\lambda_n ) \log \frac{\lambda_p } { \lambda_n}+o(\delta t^2)\ ] ] we note here that as and , we reproduce the results that would be expected from equation [ eq : poissonuncorrsprtrt ] . also , a more intuitive and tractable computation can be done for an analogous additively - correlated bernoulli process , resulting in the same solution . in the case of subtractive correlations within pools ,the derivation of ] and binom ] ) derived by wald , which are : - 1}{e[e^{h_0e_n}|e_n\geq \theta]-e[e^{h_0e_n}|e_n\leq -\theta ] } \label{eq : fcfull } \\dt & = & \frac{\delta t}{e[w]}\left ( e[e_n|e_n\geq \theta](fc ) + e[e_n|e_n\leq -\theta](1-fc)\right ) \label{eq : rtfull}\end{aligned}\ ] ] specifically , equations [ eq : fc ] and [ eq : rt ] hold under the assumption that the value of the state variable on the decision step is exactly equal to the decision threshold . in practice , however , this no - overshoot " assumption may not provide a particularly good approximation . a correction term based on the mean of the overshoot distribution that is , the distribution of the random variable defined by the excess distance over either the positive or negative threshold on the threshold crossing step is suggested by lee et al . this correction is based on the taylor expansion of the conditional expectations in equation [ eq : fcfull ] , and takes the form of a shift in the decision threshold .a correction of this form is relevant to our analysis , as the performance of two models are compared parametrically in the threshold to isolate the effects of the speed - accuracy tradeoff imparted by freely adjusting the threshold .denote the value of conditioned on crossing the first threshold as , and let overshoot random variable , with mean .expanding the conditional expectation ( although dropping the conditional notation for convenience ) via a taylor series centered on this mean ( the so - called delta method ) , we have = e[e^{h_0r_0}+h_0e^{h_0r_0}(\hat{e}_n - r_0)+\frac{h_0 ^ 2e^{h_0r_0}(\hat{e}_n - r_0)^2}{2}+ ... ]\ ] ] choosing yields an expression of wald s truncation : & = & e^{h_0\theta } \left(1 + h_0e[x ] + \frac{h_0 ^ 2e[x^2]}{2}+ ... \right)\\ & \approx & e^{h_0\theta}\end{aligned}\ ] ] here we see that if , each term in the expansion becomes zero and wald s approximation holds exactly . 
on the other hand ,if overshoots , error will accumulate at each term in the expansion , as a function of the moments of the overshoot distribution .if instead the expansion is performed about , a threshold - shifted approximation expresses the truncation error terms of the second and higher centered moments of the overshoot distribution : & = & e^{h_0(\theta+\mu_x ) } \left(1 + \frac{h_0 ^ 2e[(x-\mu_x)^2]}{2}+ ... \right)\\ & \approx & e^{h_0(\theta+e[x_n])}\end{aligned}\ ] ] in practice , the overshoot distribution is often nonzero ; however , if its mean can be calculated and , the truncation error associated with the latter approximation might provide a more favorable approximation as long as the higher - order moments do not grow too large . for the decision time , using this alternative approximation is exactly correct , and results in no additional error .staude et al . that cumulants provide a natural and intuitive higher - order generalization of the covariance " for multineuron spiking .the two models of correlated activity examined here are indistinguishable when only examining first - order ( i.e. , mean firing rate ) or second - order ( i.e. , pairwise correlations ) statistics . here , we derive the joint cumulants for each of these two models , to clarify how the spike count distributions produced by the two models differ at higher orders .the derivation relies on the conditional independence of the spike counts for each neuron in a pool , conditioned upon the spike count in the common spike train .let be the random variables giving spike counts in a windows of size from each of the neurons in a correlated pool , and be the spike count in the common spike train .the law of total cumulance allows a relatively simple expression of the joint cumulant on members ( because of the homogeneity of the pool , we will express the joint cumulant as calculated on , but the same expression holds for any -sized subset of ) : here is the set of all partitions of , for example & = & \ { \ { \{1\},\{2\},\{3\ } \ } , \ { \{1,2\},\{3\ } \ } , \ { \{1\},\{2,3\ } \ } , \ { \{1,3\},\{2\ } \ } , \ { 1,2,3 \ } \}\\ & = & \ { \pi_1 , \pi_2,\pi_3,\pi_4,\pi_5 \ } \label{eq : partitionexample}\end{aligned}\ ] ] and is the conditional joint cumulant over the set of all spike counts indexed by an element of is , the set . in our special case , whenever , owing to the conditional independence of each neuron given the common spike train .moreover , from the definition of the cumulant , the term of for the partition that contains such a block will also be zero .this implies that the only that contributes in equation [ eq : condcumulantformula ] is ( in the example of equation [ eq : partitionexample ] ) ; thus , ... ,e[s_k|\hat{s } ] ) = \kappa_k(e[s_1|\hat{s}]),\ ] ] where we have used the fact that the first cumulant is simply the expected value . using the cumulant generating function, we then have a formula for the joint cumulant : } ] \right]\right|_{t=0}\ ] ] thus , for the two models of correlations ( assuming a firing rate ) , we have : & = & \sum_{s_1=0}^{\hat{s } } s_1 { \hat{s } \choose s_1 } \rho^{s_1 } ( 1-\rho)^{\hat{s}-s_1 } = \rho \hat{s}\\ & { \mbox{ } } & \log e[e^{t e[s_1|\hat{s}=\hat{s } ] } ] = \log \sum_{\hat{s}=0}^{\infty } \frac{e^{-\delta t \lambda/\rho}(\delta t \lambda/\rho)^{\hat{s}}}{\hat{s } ! } e^{t \rho \hat{s}}\\ & & \hspace{.1 in } = \frac{\lambda \delta t ( e^{\rho t}-1)}{\rho}\\ & { \mbox{ } } & \kappa(s_1 ...s_k ) = \left . 
\frac{d^k}{(dt)^k}\left[ \frac{\lambda \delta t ( e^{\rho t}-1)}{\rho}\right]\right|_{t=0 } \end{aligned}\ ] ] & = & \sum_{s_1=\hat{s}}^{\infty } s_1 \frac{e^{-(1-\rho)\delta t \lambda } ( ( 1-\rho)\delta t \lambda)^{s_1-\hat{s } } } { ( s_1-\hat{s } ) ! } = \hat{s } + \lambda \delta t ( 1-\rho)\\ & { \mbox{ } } & \log e[e^{t e[s_1|\hat{s}=\hat{s } ] } ] = \log \sum_{\hat{s}=0}^{\infty } \frac{e^{-\delta t \lambda \rho}(\delta t \lambda \rho)^{\hat{s}}}{\hat{s } ! } e^{t ( \hat{s } + \lambda \delta t ( 1-\rho))}\\ & & \hspace{.1 in } = \lambda \delta t ( \rho[e^t - t-1]+t ) \label{e.esb } \\ & { \mbox{ } } & \kappa(s_1 ...s_k ) = \left . \frac{d^k}{(dt)^k}\left[ \lambda \delta t ( \rho[e^t - t-1]+t)\right]\right|_{t=0 } \end{aligned}\ ] ] comparing equations [ eq : mipcumulant ] and [ eq : sipcumulant ] ( see also figure [ fig : cumulant ] ) , we see agreement for as expected ; these correspond the the intended firing rate and pairwise covariance of neurons within the pool .however , for , we see the signature of the differences in the structure of the correlations . for the mip model ,the joint cumulant decays geometrically as more and more neurons are considered .in contrast , the joint cumulant remains constant for the sip model .william t newsome , kenneth h britten , j anthony movshon , and michael n shadlen . .in dominic man - kit lam and charles d. gilbert , editors , _ neural mechanisms of visual perception _, pages 171198 . portfolio pub .co. , woodlands , tex . , april 1989 .
stimuli from the environment that guide behavior and inform decisions are encoded in the firing rates of neural populations . the neurons in these populations , however , do not spike independently : spike events are correlated from cell to cell . to what degree does this apparent redundancy impact the accuracy with which decisions can be made , and the computations that are required to optimally decide ? we explore these questions for two illustrative models of correlation among cells . each model is statistically identical at the level of pairs of cells , but differs in higher - order statistics that describe the simultaneous activity of larger cell groups . we find that the presence of correlations can diminish the performance attained by an ideal decision maker to either a small or large extent , depending on the nature of the higher - order interactions . moreover , while this optimal performance can in some cases be obtained via the standard integration - to - bound operation , in others it requires a nonlinear computation on incoming spikes . overall , we conclude that a given level of pairwise correlations , even when measured in otherwise identical neural populations , may not always indicate redundancies that diminish decision making performance .
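the contrast summarized in this abstract can be made concrete with the joint cumulant formulas derived in the appendix of this article : for the subtractive ( mip ) model , the k - th order joint cumulant of counts across k neurons is lambda * dt * rho^( k - 1 ) , which decays geometrically in k , while for the additive ( sip ) model it equals lambda * dt for k = 1 and lambda * dt * rho for every k >= 2 . the short sketch below ( python ) tabulates both ; the function name and the parameter values are illustrative .

```python
def joint_cumulant(model, k, rate, rho, dt):
    """k-th order joint cumulant of spike counts across k neurons in one pool,
    read off from the cumulant generating functions in the appendix."""
    if k == 1:
        return rate * dt                   # mean count, identical in both models
    if model == "mip":                     # subtractive: geometric decay in k
        return rate * dt * rho ** (k - 1)
    if model == "sip":                     # additive: constant for all k >= 2
        return rate * dt * rho
    raise ValueError("model must be 'mip' or 'sip'")

for model in ("mip", "sip"):
    print(model, [round(joint_cumulant(model, k, 40.0, 0.2, 1e-3), 6)
                  for k in range(1, 7)])
```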
detection of spatial patterning is important in many domains , including molecular biology , ecology and epidemiology . spatial patterning can be identified by testing whether observed data departs from a model of spatial randomness : for instance , the homogeneous poisson process may serve as a model of spatial randomness for point process data , and deviations from poisson statistics may be used to detect spatial structure such as clustering . the ripley k function is widely employed , along with the associated l and h functions , to analyse deviations from homogeneous poisson statistics , since it permits tests for clustering and dispersion at multiple scales . a number of related statistics have been introduced based on the k function to summarize such deviations ( employing either simulations or analytic approaches to evaluate the critical quantiles under poisson statistics ) , including the clustering index and degree of clustering , and a variance - normalized alternative to the k function . the above spatial statistics have been defined and applied in the context of point processes , where the data to be analysed consists of a collection of points in ( typically ) euclidean 2- or 3-space : for instance , the degree of clustering has been applied to the study of the spatial distribution of individual mrna transcripts from a single gene , treated as point particles at positions inferred from fluorescent in situ hybridization ( fish ) microscopy data , whose clustered organization was shown to be dependent on the spatial aggregation of an associated rna binding protein , and necessary for asynchronous cell - cycle timing in multinucleate fungal cells . point processes can be defined as a special class of random measures , the _ counting measures _ , which assign non - negative integer values to all measurable subsets of a space . spatial statistics such as the ripley k function can be generalized to the framework of random measures ; the k function generalizes directly to the _ reduced second moment measure _ , which can be defined for stationary random measures taking either discrete values ( counting measures ) or continuous values . further , the concept of spatial randomness can be generalized , leading to the class of _ completely spatially random _ ( csr ) _ measures _ , which includes the homogeneous poisson processes as a subclass ( those which are simultaneously counting measures and csr ) . however , while such generalizations appear to enable the treatment of more general kinds of data , for instance continuous measurements which can be modeled as samples from a random measure , spatial statistics such as those above are rarely applied outside the point process context . problems which arise in straightforwardly applying similar techniques to other kinds of data include choosing a general estimator for the k function , and determining a method to evaluate the necessary critical quantiles either by simulations or analytically for a general class of csr null hypotheses . unlike the homogeneous poisson processes , which can be parameterized by a single intensity parameter , the class of all csr random measures has a more complex structure , as characterized in . in addition to the homogeneous poisson processes , further subclasses of csr random measures include gamma processes , and sum measures associated with marked poisson processes ( referred to as mark sum poisson processes below ) . we propose here a general approach to k function - based statistical tests in the context of arbitrary random measures . we
provide a consistent convolution estimator for the k function based on the approach of , and investigate a number of ways in which the critical quantiles of the clustering index and degree of clustering estimators can be estimated for various classes of null model . first , we consider null hypotheses in the classes of gamma processes and mark sum poisson processes , and show how to fit these models to data and draw samples to simulate csr in each case , providing an expectation - maximization ( em ) algorithm to fit the marked poisson process . further , we derive an exact permutation - based estimator for the clustering index , which provides a general test against the null hypothesis class of all csr measures . we show that our permutation test using the convolution - based estimator reduces to the clustering index estimator used by lee et al . for the point process case , and hence provides a further rationale for the _ conditionality principle _ discussed in , which circumvents model fitting in the homogeneous poisson case by fixing the number of points across simulations . an advantage of adopting a general random measure based approach to identification of spatial patterning is that it provides a unifying framework in which statistics and indicators can be compared when analysing diverse data types . it also has the potential to provide a unifying framework for the modeling and inference of spatially distributed regulatory networks ( at both inter- and intra - cellular levels ) as diverse kinds of spatial omics data become available . random measures have emerged in a variety of areas of machine learning as a robust general framework for modeling diverse kinds of data , while avoiding the need to make arbitrary assumptions about the parametrization of distributions , particularly in the context of bayesian non - parametric approaches ( see for a general summary , and for applications in text and image processing ) . we discuss in further detail below the potential relevance of our approach and the random measure framework within the broader context of modeling spatial omics data . we begin by introducing formally the concepts of complete spatial randomness and random measures , and outline existing statistical tests for ripley s _ k _ , _ l _ and _ h _ functions , the clustering index , and degree of clustering in the point process context ( sec . [ sec : prelim ] ) . we then outline our generalization of these tests to the context of arbitrary random measures , including a convolution - based estimator for the k function , and tests against various null hypothesis classes as described above ( sec . [ sec : results1 ] ) . we assess the ability of these tests to identify spatial randomness and patterning ( clustering ) first in synthetic data ( sec . [ sec : results2 ] ) , and then apply the method to probe for patterns of clustering over time in fluorescence microscopy data from pairs of corresponding mrnas and proteins in a polarizing mouse fibroblast system ( sec . [ sec : results3 ] ) . the strong relationship between mrna and protein clustering profiles suggests that mrna localization and local translation provide a mechanism for protein localization in a number of cases , providing a small - scale demonstration of a spatial omics application . we conclude with a discussion ( sec . [ sec : discuss ] ) . a _ random measure _ can be defined on any measurable space , that is , a set equipped with a sigma - algebra .
for convenience, we will assume below that is a euclidean space of dimension , ( ) , and that the -algebra is , the collection of borel sets .borel set _ is any set that can be formed by the operations of countable union , countable intersection and relative complement from the open sets in the standard topology .a _ measure _ on is a mapping from to the non - negative reals with infinity , such that , and for all countable collections of disjoint sets in , .a measure is called _ locally finite _ if is finite whenever is a bounded set , and we denote the collection of all locally finite measures as .a _ random measure _ is then defined to be a random variable taking values in , and we will write for the random variable itself , and for a specific value ( measure ) taken by . a random measure is necessarily defined with respect to a -algebra over , and all examples below will assume the -algebra , which is the smallest -algebra of subsets of such that all functions are measurable for arbitrary borel set .further , we will use the notation to denote the probability that a random measure assigns a value in to set , where is an open interval in .a random measure is _ completely random _if is independent of whenever ._ complete spatial randomness _ ( csr ) is a stronger property of a random measure which implies both ( a ) complete randomness , and ( b ) _ stationarity _ , for any displacement .a number of properties follow from complete spatial randomness .first , a csr measure is necessarily isotropic , and there exists a fixed _ intensity parameter _ such that = \lambda\nu(b) ] denotes expectation .further , any csr measure over can be represented as a poisson process over , whose intensity measure has the form , where is a measure over ( with finite ) , is a non - negative real constant , and ( see below for notational conventions for point processes ) .this follows from the general characterization of csr measures given in ( see also ) .a consequence of this representation is that whenever and have equal volume , , so that the distribution of is determined only by the volume of .a _ point process _ can be defined as a special type of random measure for which with probability 1 , along with the technical condition that for all , which ensures that no two points coincide ( also called _ simplicity _ , ) .since point processes take only non - negative integer values on bounded subsets , they are also called _ counting measures_. further , since a sample from a point process is ( with probability 1 ) a countable subset of , we can use set notation and replace integrals by infinite sums in defining quantities for point processes , writing for example .the class of csr point processes is equivalent to the class of homogeneous poisson processes .the homogeneous poisson processes are parameterized by a single intensity parameter , , such that , where is the poisson probability mass function . the more general class of poisson processes ( as used in the general characterization of csr above )are completely random measures ( without stationarity ) , parameterized by an _ intensity measure _ such that . 
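to make the definitions above concrete , the following sketch draws a sample from a homogeneous poisson process ( the csr point process ) on a rectangular window , together with a fixed - count sample of the kind used later for conditional simulation . the window , intensity value and random seed are illustrative assumptions , not values taken from the text .

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_homogeneous_poisson(intensity, window, rng=rng):
    """Sample a homogeneous Poisson process with the given intensity on a
    rectangular window, given as [(x_min, x_max), (y_min, y_max), ...]."""
    lengths = np.array([hi - lo for lo, hi in window])
    lows = np.array([lo for lo, _ in window])
    volume = np.prod(lengths)
    n_points = rng.poisson(intensity * volume)   # N(W) ~ Poisson(lambda * nu(W))
    # given N(W), the points are independent and uniform on W
    return lows + rng.random((n_points, len(window))) * lengths

def sample_binomial_process(n_points, window, rng=rng):
    """Sample a binomial process: exactly n_points uniform points on the window,
    i.e. the conditional distribution of a homogeneous Poisson process given N(W)."""
    lengths = np.array([hi - lo for lo, hi in window])
    lows = np.array([lo for lo, _ in window])
    return lows + rng.random((n_points, len(window))) * lengths

# example: unit square with intensity 100
pts = sample_homogeneous_poisson(100.0, [(0.0, 1.0), (0.0, 1.0)])
surrogate = sample_binomial_process(len(pts), [(0.0, 1.0), (0.0, 1.0)])
```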
for a stationary point process, the ripley function can be defined in terms of the _ reduced second moment measure _ : ,\end{aligned}\ ] ] where is the origin , is an open ball at the origin of radius , and ] is the iverson bracket , which is 1 when is true and 0 otherwise , is the euclidean distance , is the window region in which the sample is observed , and is an _ edge correction _ : ,\end{aligned}\ ] ] where is a random vector sampled from a uniform distribution over the sphere centered at the origin of radius .[ eq : kestimator ] is shown to be unbiased for all less than the diameter of for any convex .a simpler ( but biased ) estimator for is also commonly use , which replaces the edge correction function with the volume / area of the observed region : .\end{aligned}\ ] ] the associated statistical tests introduced below are unaffected by the choice between and , and estimators for and can be straightforwardly derived from and by substituting these estimators for true values in eq .[ eq : riplh ] . in , the _ clustering index _statistic is introduced , which is denoted .we provide a general expression for below , which provides a test for clustering or dispersion at significance level : where denotes the quantile ( percentile ) of under an appropriate simulation of csr ( unlike , we use a median instead of a mean simulation - based estimator to center , so that when , to avoid complications arising if the mean estimator is greater than or less than ) . is thus a further normalization of such that , for a given value of , iff ( and hence ) is significantly above the range expected under csr on a 1-sided test at level , providing evidence of clustering ( respectively , for dispersion ) at length - scale . by inspecting eq .[ eq : hstar ] , we see that the edge correction terms from eq .[ eq : kestimator ] will cancel in calculating from , and hence it is sufficient to use the simpler estimator . to calculate and is necessary to fix a distribution for simulations appropriate for the csr null hypothesis .one possibility is to estimate the intensity parameter directly from ( ) , and simulate a homogeneous poisson process with this parameter by drawing first a poisson distributed value for the number of points in from for each simulation , and then distributing points across ( independently and uniformly ) .this method is termed _ parametric bootstrapping _ , as discussed in , and provides an asymptotically consistent statistical test ( as ) .alternatively , we may condition all simulations on the number of points observed in .hence , we can take advantage of the _ conditionality principle _ discussed in , whereby the distribution of points in region for any homogeneous poisson process is independent of when conditioned on .the points must be independently and uniformly distributed in regardless of , forming a _ binomial process _ over ( see ) . 
by conditioning on , we therefore derive a consistent statistical test independent of the size of against all csr point processes ( homogeneous poisson processes ) , which is the approach taken in .we note however that the simulations for the conditional test are no longer strictly csr , since they are simulations of a binomial process .this distinction will be important in generalizing .in particular , if an observation is quantized across into voxels which are small enough that the probability of two points occupying the same voxel is negligible , it is possible to view simulations of a binomial process as permutations of the voxels in , and derive the binomial process test as a monte - carlo approximation to an exact permutation test , as will be proposed for the general case .the options discussed above for calculating are summarized in algorithm [ alg1 ] , which also serves as a template for generalization below ( where denotes a spatially quantized observation of ; here , a binary indicator vector across voxels lying in an observation window which is 1 iff a voxel contains a point in ) . in , is further used to define the _ degree of clustering _ , which is the area of the curve above from to , and hence serves as an indicator for the degree of departure from csr in this range . ( vectorized sample from point process / random measure ) , ( number of simulation / permutation trials ) , ( significance level ) calculate estimators for , and using ( eqs .[ eq : riplh ] and [ eq : kbiased ] ) .draw vectorized samples using one of the following methods : ( a ) ( parametric bootstrapping ) find the best fitting csr model for in chosen null hypothesis class and run simulations of , ( b ) ( conditioning ) simulate the null model times conditioned on the measure or point count of the whole observed region in , ( c ) ( permutation ) draw permutations of .calculate , and estimators on for ( eqs . [ eq : riplh ] and [ eq : kbiased ] ) .calculate , 0.5th and quantiles of for each value of , and use to normalize to calculate the clustering index , ( eq . [ eq : hstar ] ) . now consider the generalization of the statistical tests and indices above from the point process case to the general random measure case . for a stationary random measure , the _ reduced second moment measure _ and ripley s functionare defined exactly as in eq .[ eq : ripk1 ] ( see , eq . 2.19 ) . the relevant _ palm distribution _ , , in the random measure case takes the form , with an arbitrary non - negative measurable function integrating to and ( see ) .this definition can be seen to reduce to the distribution of further points conditioned on a point at the origin for the point process case , since will be non - zero only for , regardless of .ripley s and functions follow directly , as in eq .[ eq : riplh ] . 
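a minimal sketch of algorithm [ alg1 ] for the planar point - process case follows , using the simple estimator without edge correction and binomial - process surrogates ( option ( b ) , conditioning on the observed point count ) . the 2-d relations l(r) = sqrt(k(r)/pi) and h(r) = l(r) - r are assumed , and the normalization of the clustering index follows the verbal description above ( centering on the simulation median and scaling by the alpha - level quantiles ) ; the exact expression in eq . [ eq : hstar ] may differ in detail .

```python
import numpy as np
from scipy.spatial.distance import pdist

def ripley_k_simple(points, radii, window_volume):
    """Simple K estimator without edge correction:
    K(r) = |W| / n^2 * sum over ordered pairs i != j of 1[|x_i - x_j| <= r]."""
    n = len(points)
    d = pdist(points)                                  # each unordered pair once
    counts = np.array([(d <= r).sum() * 2 for r in radii])
    return window_volume * counts / n**2

def clustering_index(points, radii, window, n_sim=199, alpha=0.05, rng=None):
    """Clustering index H*(r) for a 2-d point pattern, using binomial-process
    surrogates (conditioning on n); window = [(x_min, x_max), (y_min, y_max)]."""
    rng = rng or np.random.default_rng(0)
    lows = np.array([lo for lo, _ in window])
    lengths = np.array([hi - lo for lo, hi in window])
    vol = np.prod(lengths)

    def h_of(pts):
        k = ripley_k_simple(pts, radii, vol)
        return np.sqrt(k / np.pi) - radii              # L(r) - r in two dimensions

    h_obs = h_of(points)
    h_sim = np.array([h_of(lows + rng.random(points.shape) * lengths)
                      for _ in range(n_sim)])
    q_lo, q_med, q_hi = np.quantile(h_sim, [alpha, 0.5, 1.0 - alpha], axis=0)
    denom_hi = np.where(q_hi > q_med, q_hi - q_med, np.inf)
    denom_lo = np.where(q_med > q_lo, q_med - q_lo, np.inf)
    # H*(r) >= 1 suggests clustering at scale r, H*(r) <= -1 suggests dispersion
    return np.where(h_obs >= q_med,
                    (h_obs - q_med) / denom_hi,
                    (h_obs - q_med) / denom_lo)
```

under one plausible reading of the degree - of - clustering definition ( the area of the h*(r) curve lying above 1 over the length scales of interest ) , it can then be obtained as `np.trapz(np.clip(h_star - 1.0, 0.0, None), radii)` .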
to provide a general estimator for the ripley function, we must first specify how samples from the random measure are observed .we assume that we have an observation window , which can be partitioned into a collection of regular cubical voxels with sides of length , denoted , whose centres lie at .our observation of is limited to the value it takes on each voxel , hence we observe the quantities .we can thus alternatively represent a sample as a measure with atoms at having weights respectively .we now consider the estimator : \phi(v_{n_1})\phi(v_{n_2 } ) - \bar{c } \nonumber \\ & = & \frac{1}{\lambda^2\nu(w)}\int\int [ |x - y|\leq r ] \bar{\phi}(\text{d}x)\bar{\phi}(\text{d}y ) - \bar{c } \nonumber \\ \bar{c } & = & \frac{\sum_{n=1 ... n } ( \bar{\phi}(v_n))^2}{\lambda^2\nu(w)}.\end{aligned}\ ] ] can be efficiently calculated using a discrete convolution , since we have : \bar{\phi}(\text{d}x)\bar{\phi}(\text{d}y ) = \int [ |x|\leq r ] ( \bar{\phi } * \bar{\phi}^{\prime})(x ) \text{d}x,\end{aligned}\ ] ] where , and is the convolution of and when treated as functions from to , hence . is an estimator for in the following sense : * proposition 1 . *_ for all values of , , where _ \phi(\text{d}x)\phi(\text{d}y ) - c \nonumber \\ c & = & \frac{\int_w \phi(\{x\ } ) \phi(\text{d}x)}{\lambda^2\nu(w)}.\end{aligned}\ ] ] * proof .* we begin by defining a function such that we have implies ( hence sends to the centre of the voxel to which it belongs ) .then , we can rearrange eq .[ eq : kestimatorrm2 ] as follows : \bar{\phi}(\text{d}x)\bar{\phi}(\text{d}y ) - \bar{c } \nonumber \\ & = & \frac{1}{\lambda^2\nu(w)}\int\int \mathbf{1}_w(x)\mathbf{1}_w(y ) [ |v(x)-v(y)|\leq r ] \phi(\text{d}x)\phi(\text{d}y ) - \bar{c}.\end{aligned}\ ] ] by inspection , the form of eq .[ eq : kestimatorrm3 ] is identical to eq .[ eq : kestimatorrm ] with the term ] , and substituted for . since each voxel is a dimensional cube with sides of length , we have . hence , by the triangle inequality : for the subset of for which =1 ] ) when the three quantities in the proposition are substituted ( noting that ) .the proposition follows from the nested relationship between these regions of integration , the fact that is non - negative , and the fact that ( since for any voxel , ) . is related to a further estimator , which substitutes for in eq .[ eq : kestimatorrm ] , where is an edge - correction term. we can derive as an unbiased estimator of from fully observed ( not spatially quantized ) samples using a result in ( using their eq .10 , following theorem 1 , see appendix a for the derivation ) . for values of which are small compared to the diameter of , whenever =1 ] is the iverson bracket , which is 1 when is true and 0 otherwise .the process is so - called , since it is equivalent to attaching marks to the points in a homogeneous poisson process with intensity , where mark appears with a probability proportional to ( forming a _ marked poisson process _ ) , and the value is calculated by summing across the weights of the points in ( forming its associated _ sum measure _ ) . in this equivalent representation , each mark independently follows a homogeneous poisson processes with intensity , and hence it follows that eq .[ eq : mspproc ] is csr . on the null hypothesis that is a sample from a mark sum poisson process , will be distributed according to eq .[ eq : mspproc ] , assuming , which we call a _ weighted sum of poisson distributions_. 
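the estimator can be computed efficiently with an fft - based autocorrelation , as the convolution identity above suggests . a minimal sketch for a voxelized observation of arbitrary dimension follows ; cubic voxels of equal side length are assumed , as in the text , and the intensity is estimated as total mass over window volume .

```python
import numpy as np
from scipy.signal import fftconvolve

def k_bar(phi, voxel_size, radii):
    """Convolution-based Ripley K estimator for a spatially quantized observation.
    phi is a d-dimensional array of voxel masses (the value the measure assigns
    to each voxel); voxels are cubes with side length voxel_size."""
    d = phi.ndim
    volume = phi.size * voxel_size**d                 # nu(W)
    lam = phi.sum() / volume                          # intensity estimate
    # autocorrelation A(delta) = sum_n phi[n] * phi[n + delta], via FFT convolution
    acorr = fftconvolve(phi, np.flip(phi), mode="full")
    # physical distance of each displacement vector from the zero lag
    grids = np.meshgrid(*[(np.arange(2 * s - 1) - (s - 1)) * voxel_size
                          for s in phi.shape], indexing="ij")
    dist = np.sqrt(sum(g**2 for g in grids))
    self_term = (phi**2).sum()                        # the n1 == n2 (zero-lag) contribution
    k = np.array([acorr[dist <= r].sum() - self_term for r in radii])
    return k / (lam**2 * volume)
```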
by fixing the weights , it is possible to derive an expectation - maximization ( em ) algorithm to fit by maximum - likelihood ( see appendix b ) . having fitted the model , csr samples can be drawn by generating values , distributed as respectively , and calculating at each voxel . aside from forming a broad csr measure class , mark sum poisson processes are interesting in that , in the limit of infinite marks , they form a universal representation for csr measures .this is because , as noted earlier , any csr measure over can be represented as a ( non - homogeneous ) poisson process over with intensity measure such that .as the number of marks increases , the s are better able to approximate the measure , and hence any random measure .although we considered above only the case of fitting a distribution with finite marks and fixed weights , by using a large number marks with densely and evenly sampled weights , it is therefore possible to approximate any csr measure . *general case .* we know that , since all voxels have identical volume , are independent samples from the same distribution ( whenever ) .hence , we can use the empirical distribution of voxel values as an estimate for , , which is completely general in that the only assumption we have made is that is csr .we can thus approximate a simulation of csr from the ` best fitting ' csr measure ( whose marginals approach asymptotically ) , by generating values for new voxels in the simulation using sampling with replacement of the values already seen ( equivalently , sampling from the empirical distribution ) .we note that this only approximates csr , since , with probability 1 , takes a value in the empirical distribution , and hence the values taken by on any disjoint set of sub - voxels which cover a given voxel must be dependent . if instead of sampling with replacement from the empirical distribution to generate new samples , we permute the voxel values ( sampling without replacement ) , must remain unchanged , and we can regard this as approximate sampling from the best fitting csr measure conditioned on .however , rather than viewing permutation as an approximate simulation of csr , it is also possible to view it in terms of an exact test against the general csr null hypothesis , based on the exchangeability of the voxels under csr .we summarize this as : * proposition 3 .* _ algorithm [ alg1 ] with exhaustive permutation at step 2 , is an exact test for csr of an arbitrary random measure at significance level , in the sense that for an arbitrary distribution over the class of all csr measures . _* proof . * given random measure over , observation window and cubical voxels with sides of length partitioning , , we can construct a related random measure over such that : \in b'),\end{aligned}\ ] ] where ] and .hence , now considering the region , under the assumption that is csr , we can rewrite eq .[ eq : appd2 ] as : f^*(\mathbf{x})\phi'(\text{d}\mathbf{x } ) \nonumber \\ & = & \int [ \mathbf{x}\in r ] \frac{f^*(\mathbf{x})}{g(\mathbf{x})}\phi''(\text{d}\mathbf{x}),\end{aligned}\ ] ] where , and we write for the set . 
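the three ways of generating surrogate voxel grids discussed above — simulating a fitted gamma process , simulating a mark sum poisson process , and permuting the observed voxel values for the general csr null — can be sketched as follows . the gamma - process parameterization shown ( voxel masses i.i.d. gamma with shape proportional to voxel volume ) is one standard choice and is an assumption here , as is treating the mark weights as fixed .

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gamma_process_voxels(shape, voxel_volume, alpha, theta, rng=rng):
    """Voxelized sample of a gamma process: disjoint, equal-volume voxels have
    i.i.d. masses Gamma(alpha * voxel_volume, theta). `shape` is a tuple giving
    the voxel grid dimensions."""
    return rng.gamma(alpha * voxel_volume, theta, size=shape)

def sample_mark_sum_poisson_voxels(shape, voxel_volume, weights, intensities, rng=rng):
    """Voxelized sample of a mark sum Poisson process: mark m contributes
    z_m ~ Poisson(alpha_m * voxel_volume) points per voxel, and the voxel mass
    is the weighted sum over marks."""
    weights = np.asarray(weights, dtype=float)
    counts = rng.poisson(np.asarray(intensities) * voxel_volume,
                         size=shape + (len(weights),))
    return counts @ weights

def permute_voxels(phi, rng=rng):
    """Surrogate for the general CSR null: permute the observed voxel values
    (sampling without replacement), leaving the empirical distribution intact."""
    return rng.permutation(phi.ravel()).reshape(phi.shape)
```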
by definition of we have that for all : hence ( using , by definition of ) : \frac{f^*(\mathbf{x})}{g(\mathbf{x})}\phi''(\text{d}\mathbf{x } ) \nonumber \\ & < & \int [ \mathbf{x}\in r ] ( \omega ) \phi''(\text{d}\mathbf{x } ) \nonumber\\ & = & \omega.\end{aligned}\ ] ] as mentioned , prop .3 sheds further light on the practice of fixing the number of points during point process simulations as in .further , we note that in practice , monte carlo sampling is typically required in place of exhaustive permutation in evaluating using algorithm [ alg1 ] . the main advantage of prop .3 is that it sidesteps the issues of choosing a particular csr measure or null hypothesis class , and provides a justification for methods which do not simulate csr exactly in the general case .we first test our approach on synthetically generated data , where we are interested in determining if the various forms of algorithm [ alg1 ] can distinguish between data which is known to be completely spatially random , and data which is known to contain spatial structure in the form of clustering . for synthetic csr data , we consider the gamma process and mark sum poisson process as discussed above , where the techniques used to draw samples from these processes discussed in the context of algorithm [ alg1 ] can be likewise be used to generate data for a synthetic test set. we sample and uniformly in the intervals ] respectively for the gamma process ( see eq .[ eq : gammproc ] ) , and use five marks with the fixed weights for the mark sum poisson process ( which for convenience we also fix during testing ) while sampling s uniformly in the interval ] is the iverson bracket , which is 1 when is true and 0 otherwise .we can reexpress this distribution in a form involving latent variables : or equivalently : \nonumber \\ p(\mathbf{z } ) & = & \prod_m \operatorname{poisson}(z_m;\alpha_m)\end{aligned}\ ] ] where ]. is found by setting . * m - step . * substituting eqs.[eq : latvar2 ] and [ eq : estep ] into eq .[ eq : em ] yeilds : where is the entropy of .[ eq : mstep1 ] can be seen to break into separate collections of terms involving each of the s . differentiating with respect to and setting to zero yields the update : /3t3 mouse fibroblast cells were serum - starved for 16 hours prior to seeding on fibronectin crossbow micropatterned surfaces ( individual micropatterns approximately 25 m in height and width ) .the cells were allowed to grow for various lengths of time ( 2 , 3 , 4 , 5 and 7 hours ) before fixing in formaldehyde , permeabilization , and hybridization of probes .rna fish probes were designed and applied using the method of , which targets multiple 20-mer oligonucleotide probes to each mrna .rabbit polyclonal anti - arghdia , anti - gapdh , anti--actin and anti - par3 antibodies were used for the if staining ( santa cruz and abcam ) . 
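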
rat monoclonal anti - tubulin antibody ( abcam )was used for tublin staining in all cells , along with dapi for nuclear staining .images were captured on a spinning disk confocal revolution xd system ( andor ) .each cell was imaged as an individual -stack , with each image comprising 512 pixels , 15 - 25 -levels , and approximately 0.1 m pixel width and m separation between -levels .background subtraction was applied to all images using imagej ( if and fish ) , and spot detection was performed to determine mrna positions from the fish -stacks using .2d segmentation of the nucleus region was performed by max - projecting the dapi -stacks , thresholding the resulting images , and applying image dilation to the binary masks .2d segmentation of cellular regions was performed similarly by max - projection , thresholding and dilating the tubulin if -stacks . to estimate a height map across the cellular region ( to construct a 3d cellular model ) , we first estimated the base -level of the cell to be the level with the maximum total tubulin intensity ( cells adhere to micropatterned regions on a 2d surface , and thus achieve greatest spread at their base ) .we then search at each 2d location for the -level with the max - tubulin intensity above the base level , which we observed empirically to provide a reliable indicator of the cell boundary .the final height - map was formed by smoothing the resulting surface with a 3 box filter . c. lee , h. zhang , a. e. baker , p. occhipinti , m. e. borsuk , and a. s. gladfelter .protein aggregation behavior regulates cyclin transcript localization and cell - cycle control ._ developmental cell _ , 25:572 - 584 , 2013 .gatrell ac , bailey tc , diggle pj and rowlingson bs .spatial point pattern analysis and its application in geographical epidemiology ._ transactions of the institute of british geographers _ 21 : 256274 , 1996 .m. i. jordan .hierarchical models , nested models and completely random measures . in_ frontiers of statistical decision making and bayesian analysis : in honor of james o. berger _, new york : springer , 2010 .thery , m. , racine , v. , piel , m. , pepin , a. , dimitrov , a. , chen , y. , jean - baptiste , s. and bornens , m. anisotropy of cell adhesive microenvironment governs cell internal organization and orientation of polarity ._ proceedings of the national academy of sciences _ , 2006 .j. schmoranzer , j. p. fawcett , m. segura , s. tan , r. b. vallee , t. pawson , and g. g. gundersen .par3 and dynein associate to regulate local microtubule dynamics and centrosome orientation during migration ._ current biology _ , 19 ( 13 ) , 1065 - 1074 ( 2009 ) .e. lcuyer , h. yoshida , n. parthasarathy , c. alm , t. babak , t. cerovina , t. r. hughes , p. tomancak , and h. m. krause . global analysis of mrna localization reveals a prominent role in organizing cellular architecture and function . _ cell _ 131:174187 , 2007 .junker , j. p. , nol , e. s. , guryev , v. , peterson , k. a. , shah , g. , huisken , j. , mcmahon ap , berezikov e , bakkers j , and van oudenaarden , a. genome - wide rna tomography in the zebrafish embryo ._ cell _ , 159:3 , 662 - 675 , 2014 .snijder , b. , sacher , r. , rm , p. , damm , e. m. , liberali , p. and pelkmans , l. population context determines cell - to - cell variability in endocytosis and virus infection . _, 461(7263 ) , 520 - 523 , 2009
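the image - analysis steps described in the methods above ( max - projection , thresholding , dilation , and tubulin - based height - map estimation ) can be sketched as follows . the array layout ( z , y , x ) , the threshold values and the dilation amount are illustrative assumptions rather than the exact settings used .

```python
import numpy as np
from scipy import ndimage

def segment_region(zstack, threshold, dilation_iters=3):
    """Max-project a z-stack, threshold, and dilate the binary mask
    (used here for both the DAPI/nucleus and tubulin/cell segmentations)."""
    projection = zstack.max(axis=0)                          # max-projection over z
    mask = projection > threshold
    return ndimage.binary_dilation(mask, iterations=dilation_iters)

def estimate_height_map(tubulin_zstack, cell_mask, smoothing=3):
    """Estimate a per-pixel cell height from a tubulin IF z-stack: the base
    z-level maximizes total intensity; above it, the z-level of maximum
    intensity at each pixel approximates the cell boundary."""
    base = int(np.argmax(tubulin_zstack.sum(axis=(1, 2))))   # base z-level of the cell
    above = tubulin_zstack[base:]                            # levels at/above the base
    height = base + np.argmax(above, axis=0)                 # per-pixel boundary z-level
    height = height.astype(float) * cell_mask                # restrict to the cell region
    return ndimage.uniform_filter(height, size=smoothing)    # box-filter smoothing
```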
we derive generalized estimators for a number of spatial statistics that have been used in the analysis of spatially resolved omics data , such as ripley s _ k _ , _ h _ and _ l _ functions , clustering index , and degree of clustering , which allow these statistics to be calculated on data modelled by arbitrary random measures . our estimators generalize those typically used to calculate these statistics on point process data , allowing them to be calculated on random measures which assign continuous values to spatial regions , for instance to model protein intensity . the clustering index ( ) compares ripley s _ h _ function calculated empirically to its distribution under complete spatial randomness ( csr ) , leading us to consider csr null hypotheses for random measures which are not point processes when generalizing this statistic . for this purpose , we consider restricted classes of completely random measures which can be simulated directly ( gamma processes and marked poisson processes ) , as well as the general class of all csr random measures , for which we derive an exact permutation - test based estimator . we establish several properties of the estimators we propose , including bounds on the accuracy of our general ripley _ k _ estimator , its relationship to a previous estimator for the cross - correlation measure , and the relationship of our generalized estimator to a number of previous statistics . we test the ability of our approach to identify spatial patterning on synthetic and biological data . with respect to the latter , we demonstrate our approach on mixed omics data , by using fluorescent in situ hybridization ( fish ) and immunofluorescence ( if ) data to probe for mrna and protein subcellular localization patterns respectively in polarizing mouse fibroblasts on micropatterned surfaces . using the generalized clustering index and degree of clustering statistics we propose , we observe correlated patterns of clustering over time for corresponding mrnas and proteins , suggesting a deterministic effect of mrna localization on protein localization for several pairs tested , including one case in which spatial patterning at the mrna level has not been previously demonstrated .
classification is a machine learning task that requires construction of a function that classifies examples into one of a discrete set of possible categories .formally , the examples are vectors of _ attribute _ values and the discrete categories are the _ class _ labels .the construction of the classifier function is done by training on preclassified instances of a set of attributes .this kind of learning is called _ supervised _ learning as the learning is based on labeled data .a few of the various approaches for supervised learning are artificial neural networks , decision tree learning , support vector machines and bayesian networks .all these methods are comparable in terms of classification accuracy .bayesian networks are especially important because they provide us with useful information about the structure of the problem itself .one highly simple and effective classifier is the naive bayes classifier .the naive bayes classifier is based on the assumption that the attribute values are conditionally independent of each other given the class label .the classifier learns the probability of each attribute given the class from the preclassified instances .classification is done by calculating the probability of the class given all attributes .the computation of this probability is made simple by application of bayes rule and the rather naive assumption of attribute independence . in practical classification problems ,we hardly come across a situation where the attributes are truly conditionally independent of each other . yetthe naive bayes classifier performs well as compared to other state - of - art classifiers .an obvious question that comes to mind is whether relaxing the attribute independence assumption of the naive bayes classifier will help improve the classification accuracy of bayesian classifiers . in general, learning a structure ( with no structural restrictions ) that represents the appropriate attribute dependencies is an np - hard problem .several authors have examined the possibilities of adding arcs ( augmenting arcs ) between attributes of a naive bayes classifier that obey certain structural restrictions .for instance , friedman , geiger and goldszmidt define the tan structure in which the augmenting arcs form a tree on the attributes .they present a polynomial time algorithm that learns an optimal tan with respect to mdl score .keogh and pazzani define augmented bayes networks in which the augmenting arcs form a forest on the attributes ( a collection of trees , hence a relaxation of the structural restriction of tan ) , and present heuristic search methods for learning good , though not optimal , augmenting arc sets .the authors , however , evaluate the learned structure only in terms of observed misclassification error and not against a scoring metric , such as mdl .sacha in his dissertation ( unpublished , ) , defines the same problem as forest augmented naive bayes ( fan ) and presents polynomial time algorithm for finding good classifiers with respect to various quality measures ( not mdl ) .the author however , does not claim the learned structure to be optimal with respect to any quality measure . in this paper , we present a polynomial time algorithm for finding optimal augmented bayes networks / forest augmented naive bayes with respect to mdl score . the rest of the paper is organized as follows . 
in section 2 ,we define the augmented bayes structure .section 3 , defines the mdl score for bayesian networks .the reader is referred to the friedman paper for details on mdl score , as we present only the necessary details in section 3 .section 4 provides intuition about the problem and section 5 and 6 present the polynomial time algorithm and prove that its optimal .the augmented bayes network ( abn ) structure is defined by keogh and pazzani as follows : * every attribute has the class attribute as its parent . *an attribute may have at most one other attribute as its parent .note that , the definition is similar to the tan definition given in .the difference is that whereas tan necessarily adds augmenting arcs ( where is the number of attributes ) ; abn adds any number of augmenting arcs up to .figure 1 shows a simple abn .the dashed arcs represent augmenting arcs .note that attributes and in the figure do not have any incoming augmenting arcs .thus the abn structure does not enforce the tree structure of tan , giving more model flexibility .in this section we present the definitions of bayesian network and its mdl score .this section is derived from the friedman paper .we refer the reader to the paper for more information as we only present the necessary details .a bayesian network is an annotated directed acyclic graph ( dag ) that encodes a joint probability distribution of a domain composed of a set of random variables ( attributes ) .let be a set of discrete attributes where each attribute takes values from a finite domain .then , the bayesian network for is the pair , where is a dag whose nodes correspond to the attributes and whose arcs represent direct dependencies between the attributes . the graph structure encodes the following set of independence assumptions : each node is independent of its non - descendants given its parents in .the second component of the pair contains a parameter for each possible value of and of . defines a unique joint probability distribution over defined by : the problem of learning a bayesian network can be stated as follows .given a _ training set _ of instances of , find a network that best fits .we now review the _ minimum description length _ ( mdl ) of a bayesian network . as mentioned before , our algorithm learns optimal abns with respect to mdl score .the mdl score casts learning in terms of data compression .the goal of the learner is to find a structure that facilitates the shortest description of the given data .intuitively , data having regularities can be described in a compressed form . in context of bayesian network learning, we describe the data using dags that represent dependencies between attributes .a bayesian network with the least mdl score ( highly compressed ) is said to model the underlying distribution in the best possible way .thus the problem of learning bayesian networks using mdl score becomes an optimization problem .the mdl score of a bayesian network is defined as where , is the number of instances of the set of attributes , is number of parameters in the bayesian network , is number of attributes , and is the mutual information between an attribute and its parents in the network . 
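the mutual - information terms entering the mdl score can be estimated directly from counts . a minimal sketch for discrete attributes follows ; the conditional variant i(x_i ; x_j | c) is the quantity that , after the chain - rule decomposition used in the next section , ends up as the non - constant part of the weight on a candidate augmenting arc . logarithms here are natural ; rescale to base 2 if description lengths in bits are required .

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information I(X;Y) in nats from paired discrete samples."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1)                 # contingency table of counts
    joint /= n
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def conditional_mutual_information(x, y, c):
    """I(X;Y|C) = sum over class values of P(c) * I(X;Y | C=c)."""
    x, y, c = np.asarray(x), np.asarray(y), np.asarray(c)
    total = 0.0
    for value in np.unique(c):
        idx = c == value
        total += idx.mean() * mutual_information(x[idx], y[idx])
    return total
```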
as per the definition of the abn structure , the class attribute does not have any parents .hence we have .also , each attribute has as its parents the class attribute and at most one other attribute .hence for the abn structure , we have the first term on r.h.s in equation ( 2 ) represents all attributes with an incoming augmenting arc .the second term represents attributes without an incoming augmenting arc .consider the chain law for mutual information given below applying the chain law to the first term on r.h.s of equation ( 2 ) we get for any abn structure , the second term of equation ( 4 ) - is a constant .this is because , the term represents the arcs from the class attribute to all other attributes in the network , and these arcs are common to all abn structures ( as per the definition ) . using equations ( 1 ) and ( 4 ) , we rewrite the non - constant terms of the mdl score for abn structures as follows where, denotes an abn structure .looking at the mdl score given in equation ( 5 ) , we present a few insights on the learning abn problem .the first term of the mdl equation - represents the length of the abn structure .note that the length of any abn structure depends only on the number of augmenting arcs , as the rest of the structure is the same for all abns .if we annotate the augmenting arcs with mutual information between the respective head and tail attributes , then the second term - represents the sum of costs of all augmenting arcs .since the best mdl score is the minimum score , our problem can be thought of as balancing the number of augmenting arcs against the sum of costs of all augmenting arcs , where we wish to maximize the total cost .the mdl score for abn structures is decomposable on attributes .we can rewrite equation ( 5 ) as \ ] ] where are the number of parameters stored at attribute .the number of parameters stored at attribute depends on the number of parents of in , and hence on whether has an incoming augmenting arc .since we want to minimize the mdl score of our network , we should add an augmenting arc to an attribute only if its cost dominates the increase in the number of parameters of .for example , consider an attribute with no augmenting arc incident on it .then the number of parameters stored at the attribute in abn will be , where and are the number of states of the attributes and respectively . thus .if now an augmenting arc having a cost of is made incident on the attribute , then the number of parameters stored at will be , where is the number of states of the attribute . note that the addition of the augmenting arc has increased the number of parameters of the network . since we want to add an augmenting arc on only if it reduces the mdl score , the following condition must be satisfied which is equivalent to note that this equivalence implies that the overall change in mdl score is independent of the arc direction . that is , adding an augmenting arc changes the network score identically to adding the arc .thus any augmenting arc is eligible to be added to an abn structure if it has a cost at least the defined threshold and if it does not violate the abn structure .note that , this threshold depends only on the number of discrete states of the attributes and the number of cases in the input database , and is independent of the direction of the augmenting arc .we now present a polynomial time greedy algorithm for learning optimal abn with respect to mdl score .1 . 
construct a complete undirected graph , such that is the set of attributes ( excluding the class attribute ) .2 . for each edge , compute .annotate with .3 . remove from the graph any edges that have a cost less than the threshold .this will possibly make the graph unconnected .run the kruskal s maximum spanning tree algorithm on each of the connected components of .this will make a maximum cost forest ( a collection of maximum cost spanning trees ) .5 . for each tree in , choose a root attribute and set directions of all edges to be outward from the root attribute .add the class variable as a vertex to the set and add directed edges from to all other vertices in .return .the algorithm constructs an undirected graph in which all edges have costs above the defined threshold .as seen in the previous section , all edges having costs greater than the threshold improve the overall score of the abn structure . running the maximum spanningtree algorithm on each of the connected components of ensures that the abn structure is preserved and at the same time maximizes the second term of the mdl score given in equation ( 5 ) .note that , if in step 3 of the algorithm the graph remains connected , our algorithm outputs a tan structure . in this sense, our algorithm can be thought of as a generalization of the tan algorithm given in .the next section proves that the augmented bayes structure output by our algorithm is optimal with respect to the mdl score .we prove that the abn output by our algorithm is optimal by making the observation that no optimal abn can contain any edge that was removed in step 3 of the algorithm .this is because , removing any such edge lowers the mdl score and leaves the structure an abn .consequently , an optimal abn can contain only those edges that remain after step 3 of the algorithm .if an optimal abn does not connect some connected component of the graph that results following step 3 , edges with costs greater than or equal to can be added without increasing overall mdl score until the component is spanned .hence there exists an optimal abn that spans each component of the graph that results from step 3 . bythe correctness of kruskal s algorithm run on each connected component to find a maximum cost spanning tree , an optimal abn is found .thus the abn output by our algorithm is an optimal abn .n. friedman and m. goldszmidt .discretization of continuous attributes while learning bayesian networks . in_ proceedings of the thirteenth international conference on machine learning _ , pages 157165 , 1996 .e. keogh and m. pazzani .learning augmented bayesian classifiers : a comparison of distribution - based and classification - based approaches . in _ proceedings of the seventh international workshop on artificial intelligence and statistics _ ,pages 225230 , 1999 .
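a compact sketch of the greedy construction in steps 1 - 6 above follows , assuming the arc weights ( e.g. the conditional mutual information estimated earlier ) and the per - arc mdl thresholds have already been computed ; the threshold expression itself , which depends only on the attribute cardinalities and the number of training cases , is therefore taken as an input here rather than re - derived .

```python
from itertools import combinations

def learn_abn_structure(weights, thresholds):
    """Greedy optimal-ABN construction sketch.
    weights[i][j]    : weight annotating candidate augmenting arc i--j
    thresholds[i][j] : minimum weight for which adding i--j lowers the MDL score
    Returns directed augmenting arcs (parent, child); the class-to-attribute
    arcs are implicit in the ABN definition."""
    n = len(weights)
    # steps 1-3: keep only edges whose weight clears the MDL threshold
    edges = [(weights[i][j], i, j) for i, j in combinations(range(n), 2)
             if weights[i][j] >= thresholds[i][j]]
    # step 4: Kruskal's algorithm on the surviving edges -> maximum-weight forest
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    forest = []
    for w, i, j in sorted(edges, reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            forest.append((i, j))
    # step 5: orient each tree outward from an arbitrarily chosen root
    adjacency = {i: [] for i in range(n)}
    for i, j in forest:
        adjacency[i].append(j)
        adjacency[j].append(i)
    directed, visited = [], set()
    for root in range(n):
        if root in visited:
            continue
        stack, _ = [root], visited.add(root)
        while stack:
            u = stack.pop()
            for v in adjacency[u]:
                if v not in visited:
                    visited.add(v)
                    directed.append((u, v))   # arc u -> v, pointing away from the root
                    stack.append(v)
    return directed
```

if no edge is removed in step 3 , the forest is a single spanning tree and the output coincides with a tan structure , as noted above .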
naive bayes is a simple bayesian classifier with strong independence assumptions among the attributes . this classifier , despite its strong independence assumptions , often performs well in practice . it is believed that relaxing the independence assumptions of a naive bayes classifier may improve the classification accuracy of the resulting structure . while finding an optimal unconstrained bayesian network ( for almost any reasonable scoring measure ) is an np - hard problem , it is possible to learn in polynomial time optimal networks obeying various structural restrictions . several authors have examined the possibilities of adding augmenting arcs between attributes of a naive bayes classifier . friedman , geiger and goldszmidt define the tan structure in which the augmenting arcs form a tree on the attributes , and present a polynomial time algorithm that learns an optimal tan with respect to mdl score . keogh and pazzani define augmented bayes networks in which the augmenting arcs form a forest on the attributes , and present heuristic search methods for learning good , though not optimal , augmenting arc sets . in this paper , we present a simple , polynomial time greedy algorithm for learning an optimal augmented bayes network with respect to mdl score .
global helioseismology has proven very successful at inferring large scale properties of the sun ( for a review , see ; ) . because they are very robust, the extension of methods of global helioseismology to study localized variations in the structure and dynamics of the solar interior has been of some interest ( e.g. ). however , the precise sensitivities of global modes to local perturbations are difficult to estimate through analytical means , especially in cases where the flows or thermal asphericities of interest possess complex spatial dependencies . to address questions relating to sensitivities and with the hope of perhaps discovering hitherto unknown phenomena associated with global modes , we introduce here for the first time a technique to study the effects of arbitrary perturbations on global mode parameters in the linear limit of small wave amplitudes .global modes attain resonant frequencies as a consequence of differentially sampling the entire region of propagation , making it somewhat more difficult ( in comparison to local helioseismology ) to pinpoint local thermal asphericities at depth .exactly how difficult is one of the questions we have attempted to answer in this article .jets in the tachocline ( e.g. ) are a subject of considerable interest since their existence ( or lack thereof ) could be very important in understanding the angular momentum balance of the sun .studying the sensitivities and signatures of waves to flows at depth may open up possibilities for their detection .forward modeling as a means of studying wave interactions in a complex medium like the sun has become quite favoured ( e.g. ; ; ; ) .the discovery of interesting phenomena , especially in the realm of local helioseismology ( e.g. ; ) , adds motivation to the pursuit of direct calculations . with the application of noise subtraction , we can now study the signatures of a wide range of perturbations in a realistic multiple source picture . here, we attempt to place bounds on the detectability of thermal asphericities at various depths in the sun .we introduce and discuss the method of simulation with a description of the types of perturbations introduced in the model in section [ simulations.sec ] .the estimation of mode parameters can prove somewhat difficult due to restrictions on the temporal length of the simulation ( hours ; owing to the expensive nature of the computation ) .the data analysis techniques used to characterize the modes are presented in section [ peakbag.sec ] .we then discuss the results from the analyses of the simulated data in [results.sec ] and summarize this work in [conclusions.sec ] .the linearized 3d euler equations in spherical geometry are solved in the manner described in .the computational domain is a spherical shell extending from to , with damping sponges placed adjacent to the upper and lower radial boundaries to allow the absorption of outgoing waves .the background stratification is a convectively stabilized form of model s ; only the highly ( convectively ) unstable near - surface layers ( ) are altered while the interior is the same as model s. 
waves are stochastically excited over a 200 km thick sub - photospheric spherical envelope , through the application of a dipolar source function in the vertical ( radial ) momentum equation .the forcing function is uniformly distributed in spherical harmonic space ; in frequency , a solar - like power variation is imposed .any damping of the wave modes away from the boundaries is entirely of numerical origin .the radial velocities associated with the oscillations are extracted 200 km above the photosphere and used as inputs to the peakbagging analyses .data over the entire 360 extent of the sphere are utilized in the analyses , thus avoiding issues related to mode leakage .we show an example power spectrum in figure [ power.spectrum ] along with the fits .the technique of realization noise subtraction ( e.g. ) is extensively applied in this work . due to the relatively short time lengths of the simulations ( the shortest time seriesyet that we have worked with is 500 minutes long ! ) , the power spectrum is not highly resolved and it would seem that the resulting uncertainty in the mode parameter fits might constrain our ability to study small perturbations . to beat this limit , we perform two simulations with identical realizations of the forcing function : a ` quiet ' run with no perturbations , and a ` perturbed ' run that contains the anomaly of interest. fits to the mode parameters in these two datasets are then subtracted , thus removing nearly all traces of the realization and retaining only effects arising due to mode - perturbation interactions ( see section [ peakbag.sec ] ) . as an example , we show in figure [ noise.subtract ] how a localized sound - speed perturbation placed at the bottom of the convection zone scatters waves which then proceed to refocus at the antipode ( the principle of farside holography , ) . the presence of the sound - speed perturbation is not seen in panel a , whereas it is clearly seen in the noise - subtracted images of panels b and c. in these calculations , we only consider time - stationary perturbations . the sound - speed perturbations are taken to be solely due to changes in the first adiabatic index , ; we do not study sound - speed variations arising from changes in the background pressure or density since altering these variables can create hydrostatic instabilities .lastly , the amplitude of all perturbations are taken to be much smaller than the local sound speed ( ) .our first round of peakbagging is done on the -averaged power spectrum for the quiet simulation . for each that we attempt to fit , we search for peaks in the negative second derivative of the power . 
unlike the power itself , which has a background , the second derivative has the advantage of having an approximately zero baseline .the search is accomplished by finding the frequency at which the maximum value of the negative second derivative occurs , estimating the mode parameters using a frequency window of width 100 centered on this peak frequency , zeroing the negative second derivative in this interval , and iterating .if the range of power in the frequency window is not above a certain threshold , we check the peak frequency found ; if it is too close to a frequency found on a previous iteration , that maximum is rejected , the same interval is again zeroed , and iteration continues .note that such a simple algorithm is feasible only because simulation data contains no leaks .once we have found as many peaks as possible with this procedure , we assign a value of to each one based on a model computed using adipack . the next step is to perform an actual fit to the power spectrum in the vicinity of each peak we identified .for the line profile we use a lorentzian of the form where is the total power , is the half width at half maximum , is the peak frequency , and is the background power . the initial guesses for these parameters are obtained in the first step as follows : is set to the minimum value of the power in the frequency window around the peak , is set to the integral under the power curve minus times the width of the window , and is set to where is the maximum value of the power in the frequency window .the fitting interval extends halfway to the adjacent peaks , or 100 beyond the peak frequency of the modes at the edge .the fitting itself is done using the idl routine ` curvefit ` .once we have fit these mode parameters for the -averaged spectrum , we use them as the initial guesses for fitting the individual spectra .then for each and we can fit a set of -coefficients to the frequencies as functions of .although for the quiet sun we would expect for all the -coefficients to be zero , this calculation is still necessary in order to perform the noise subtraction .we also use the mode parameters from the -averaged spectrum of the quiet simulation as initial guesses for fitting the ( unshifted ) -averaged spectrum of the perturbed simulation .although the perturbations may lift the degeneracy in , we expect the splitting to be very small , so that the peaks in the -averaged spectrum can still be well represented by a lorentzian .we also use those same initial guesses for fitting the individual spectra of the perturbed simulation , and recalculate the -coefficients .an empirical estimate of the error in frequency differences for the sound - speed perturbation at ( see section [ sound.speed.anomalies ] ) is computed in the following manner .we look at the difference in mode parameters only for those modes that do not penetrate to the depth of the perturbation ( all modes with ) .we then make a histogram of these differences with a bin size of 0.001 and fit a gaussian to the resulting distribution . 
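a sketch of the peak - fitting and error - estimation steps in python ( scipy.optimize.curve_fit in place of the idl curvefit routine ) follows . the lorentzian parameterization below — total power , half width at half maximum , peak frequency and a flat background — is one common choice consistent with the description above , but the exact form used in the analysis is not reproduced here .

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(nu, power, width, nu0, background):
    """Lorentzian line profile normalized so that `power` is the total power,
    with half width at half maximum `width`, peak frequency `nu0`, and a
    flat background level."""
    return power / (np.pi * width) / (1.0 + ((nu - nu0) / width) ** 2) + background

def fit_peak(freq, spectrum, guess):
    """Fit a single mode peak; guess = (power, width, nu0, background), built
    as described in the text (window minimum, integral under the curve, etc.)."""
    params, _ = curve_fit(lorentzian, freq, spectrum, p0=guess, maxfev=10000)
    return params

def frequency_error_estimate(freq_differences, bin_size=0.001):
    """Empirical error estimate: histogram the frequency differences of modes
    that do not penetrate to the perturbation depth and fit a Gaussian."""
    edges = np.arange(freq_differences.min(),
                      freq_differences.max() + bin_size, bin_size)
    counts, edges = np.histogram(freq_differences, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss = lambda x, amp, mean, sigma: amp * np.exp(-0.5 * ((x - mean) / sigma) ** 2)
    p0 = (counts.max(), freq_differences.mean(), freq_differences.std())
    (amp, mean, sigma), _ = curve_fit(gauss, centers, counts, p0=p0)
    return abs(sigma)
```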
with this methodwe find a standard deviation of 0.000474 or 0.47 nhz .this result is confirmed by also computing the standard deviation of 95% of the closest points to the mean .we place three perturbations of horizontal size ( in longitude and latitude ) with a full width at half maximum in radius of 2 ( 13.9 mm ) at depths of , each with an amplitude of % of the local sound speed .because of the fixed angular size , the perturbations grow progressively smaller in physical size with depth ; our intention was to keep the perturbation as localized and non - spherically symmetric as possible .despite the fact that the perturbation is highly sub - wavelength ( the wavelength at is mm or ) , we notice that for these ( relatively ) small amplitude anomalies , the global mode frequency shifts are predominantly a function of the spherically symmetric component of the spatial structure of the perturbation . in other words ,what matters most is the contribution from the coefficient in the spherical harmonic expansion of the horizontal spatial structure of the perturbation .we verify this by computing the frequency shifts associated with a spherically symmetric area - averaged version of the localized perturbation ( with an amplitude of , where is the solid angle subtended by the localized perturbation , 0.05 referring to the 5% increase in sound speed ) .we were careful to ensure that the radial dependence of the magnitude of the perturbation was unchanged .the frequency shifts associated with the spherically symmetric perturbations were calculated independently through simulation and the oscillation frequency package , adipack and seen to match accurately , as shown in figure [ sound.speeds ] .because of the non - spherically symmetric nature of the perturbation , we expect to see shifts in the -coefficients .similarly , it is likely that there will be slight deviations in the amplitudes of modes that propagate in regions close to and below the locations of the perturbation .we display these effects for the case with the perturbation located at in figure [ mode.parameters.fig ] .we introduce a non - dimensional measure , , to characterize the degree of scattering exhibited by the anomaly : where , the amplitude of the sound - speed perturbation expressed in fractions of the local sound speed , the angular area of the perturbation , and the number of modes in the summation term .essentially , this parameter tells us how strongly perturbations couple with the wave field , with larger implying a greater degree of scatter and vice versa .because it is independent of perturbation size or magnitude , can be extended to study flow perturbations as well .this measure is meaningful only in the regime where the frequency shifts are presumably linear functions of the perturbation magnitude .also , it is expected that will retain a strong dependence on the radial location of the perturbation since different parts of the spectrum see different regions of the sun .for example , placing an anomaly at the surface will likely affect the entire spectrum of global modes , as seen in figure [ sound.speeds]c .results for shown in table [ t - scatter ] contain no surprises ; for a given size and magnitude of the perturbation , the effect on the global frequencies increases strongly with its location in radius .the signature of a perturbation at the bottom of the convection zone on the global modes is twice as strong as an anomaly in the radiative interior ( ) .the surface perturbation is a little more difficult to 
compare with the others because contrary to the two deeper perturbations , it is locally far larger than the wavelengths of the modes .the result however is in line with expectation ; the near - surface scatterer is far more potent than the other two anomalies ..the scattering extents , of various perturbations . the root mean square ( rms ) variation in frequenciesis shown as well . [ cols="^,^,^,<,^ " , ]we have introduced a method to systematically study the effects of various local perturbations on global mode frequencies .techniques of mode finding and parameter fitting are applied to artificial data obtained from simulations of wave propagation in a solar - like stratified spherical shell .we are able to beat the issue of poor frequency resolution by extending the method of realization noise subtraction to global mode analysis .these methods can prove very useful in the study of shifts due to perturbations of magnitudes beyond the scope of first order perturbation theory ; moreover , extending this approach to investigate systematic frequency shifts in other stars may prove exciting .we are currently studying the impact of complex flows like convection and localized jets on the global frequencies .preliminary results seem to indicate that flows are stronger scatterers ( larger ) than sound - speed perturbations although more work needs to be done to confirm and characterize these effects .s. m. hanasoge and t. p. larson were funded by grants hmi nas5 - 02139 and mdi nng05gh14 g .we would like to thank jesper schou , tom duvall , jr ., and phil scherrer for useful discussions and suggestions .the simulations were performed on the columbia supercomputer at nasa ames .birch , a. c. , braun , d. c. , and hanasoge , s. m. : 2007 , _ solar phys ._ * this volume*. cameron , r. , gizon , l. , and daifallah , k. : 2007 , _ astronomische nachrichten _ * 328 * , 313 .christensen - dalsgaard , j. and berthomieu , g. : 1991 , theory of solar oscillations . in : _ solar interior and atmosphere ( a92 - 36201 14 - 92)_. tucson , az , usa . editors : a. n. cox , w. c. livingston , and m. matthews , p. 401christensen - dalsgaard , j. , et al .1996 , science , 272 , 1286 christensen - dalsgaard , j. et al . : 1996 , _ science _ , * 272 * , 1286 .christensen - dalsgaard , j. : 2002 , _ reviews of modern physics _ , * 74 * , 1073 .christensen - dalsgaard , j. : _ lecture notes on stellar oscillations _ 2003 , http://astro.phys.au.dk / jcd / oscilnotes/. christensen - dalsgaard , j. et al .: 2005 , asp conference series , * 346 * , 115 .hanasoge , s. m. et al . : 2006 , _ astrophys .j. _ * 648 * , 1268 .hanasoge , s. m. and duvall , t. l. , jr . : 2007 , _ astronomische nachrichten _ * 323 * , 319 .hanasoge , s. m. , duvall , t. l. , jr . , and couvidat , s. : 2007 , _ astrophys . j. _ * 664 * , 1234 .hanasoge , s. m. et al . : 2007 , _ astrophys ._ accepted _ , arxiv 0707.1369 .hanasoge , s. m. : _ theoretical studies of wave propagation in the sun _ , 2007 , ph .d. thesis , stanford university , http://soi.stanford.edu/papers/dissertations/hanasoge/ lindsey , c. and braun , d. c. : 2000 , _ science _ * 287 * , 1799 .parchevsky , k. and kosovichev , a. g. : 2007 , _ astrophys .j. _ * 666 * , 547 .schou , j. et al . : 1998 , _ astrophys . j. _ * 505 * , 390 .swisdak , m. , and zweibel , e. : 1999 , _ astrophys .j. _ * 512 * , 442 .werne , j. , birch , a. , and julien , k. : 2004 , the need for control experiments in local helioseismology . 
in: _ proceedings of the soho 14 / gong 2004 workshop (esa sp-559), `` helio- and asteroseismology: towards a golden future ''_, 12-16 july 2004, new haven, connecticut, usa, editor: d. danesy, p. 172.
we study the effect of localized sound-speed perturbations on global mode frequencies by applying techniques of global helioseismology to numerical simulations of the solar acoustic wave field. extending the method of realization noise subtraction (e.g. ) to global modes and exploiting the luxury of full spherical coverage, we are able to achieve highly resolved frequency differences that are used to study sensitivities and the signatures of the thermal asphericities. we find that (1) global modes are almost twice as sensitive to sound-speed perturbations at the bottom of the convection zone as to anomalies well within the radiative interior ( ), (2) the -degeneracy is lifted ever so slightly, as seen in the coefficients, and (3) modes that propagate in the vicinity of the perturbations show small amplitude shifts ( ).
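two of the quantities used in the article above — the amplitude of the area-averaged, spherically symmetric version of a localized perturbation (the 5% amplitude scaled by the subtended solid angle over the full sphere) and the rms variation of the resulting frequency shifts — can be illustrated with a short python sketch. the spherical-cap geometry, the 2-degree width and the sample shift values below are assumptions chosen only for illustration; they do not reproduce the simulations or the adipack calculations described in the text.

    import numpy as np

    def cap_solid_angle(fwhm_deg):
        # solid angle (steradians) of a spherical cap whose full angular width is fwhm_deg
        half_angle = np.radians(fwhm_deg) / 2.0
        return 2.0 * np.pi * (1.0 - np.cos(half_angle))

    def area_averaged_amplitude(delta_c=0.05, fwhm_deg=2.0):
        # spread the localized amplitude over the whole sphere: delta_c * omega / (4 pi)
        return delta_c * cap_solid_angle(fwhm_deg) / (4.0 * np.pi)

    def rms_variation(freq_shifts):
        # root-mean-square variation of a set of mode-frequency shifts
        shifts = np.asarray(freq_shifts, dtype=float)
        return np.sqrt(np.mean((shifts - shifts.mean()) ** 2))

    if __name__ == "__main__":
        print("effective l=0 amplitude:", area_averaged_amplitude())
        # purely hypothetical shifts (in nHz) for a handful of modes
        print("rms variation:", rms_variation([0.3, -0.1, 0.5, 0.2, -0.4]))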
we address the problem of comparing samples from two probability distributions , by proposing statistical tests of the hypothesis that these distributions are different ( this is called the two - sample or homogeneity problem ) .such tests have application in a variety of areas . in bioinformatics , it is of interest to compare microarray data from identical tissue types as measured by different laboratories , to detect whether the data may be analysed jointly , or whether differences in experimental procedure have caused systematic differences in the data distributions . equally of interest are comparisons between microarray data from different tissue types , either to determine whether two subtypes of cancer may be treated as statistically indistinguishable from a diagnosis perspective , or to detect differences in healthy and cancerous tissue . in databaseattribute matching , it is desirable to merge databases containing multiple fields , where it is not known in advance which fields correspond : the fields are matched by maximising the similarity in the distributions of their entries .we test whether distributions and are different on the basis of samples drawn from each of them , by finding a well behaved ( e.g.smooth ) function which is large on the points drawn from , and small ( as negative as possible ) on the points from .we use as our test statistic the difference between the mean function values on the two samples ; when this is large , the samples are likely from different distributions .we call this statistic the maximum mean discrepancy ( mmd ). clearly the quality of the mmd as a statistic depends on the class of smooth functions that define it . on one hand, must be `` rich enough '' so that the population mmd vanishes if and only if .on the other hand , for the test to be consistent , needs to be `` restrictive '' enough for the empirical estimate of mmd to converge quickly to its expectation as the sample size increases .we shall use the unit balls in universal reproducing kernel hilbert spaces as our function classes , since these will be shown to satisfy both of the foregoing properties ( we also review classical metrics on distributions , namely the kolmogorov - smirnov and earth - mover s distances , which are based on different function classes ) . 
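as a concrete illustration of the statistic just described, the sketch below computes a (biased) empirical mmd for the unit ball of a gaussian-kernel rkhs, using the kernel expression derived later in this article; the kernel bandwidth and the toy samples (equal mean and variance, different shape) are assumptions made only for the example.

    import numpy as np

    def gaussian_kernel(a, b, sigma=1.0):
        # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for rows of a and b
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
        return np.exp(-d2 / (2.0 * sigma**2))

    def mmd2_biased(x, y, sigma=1.0):
        # biased (v-statistic) estimate of squared mmd for the gaussian-kernel rkhs unit ball
        kxx = gaussian_kernel(x, x, sigma)
        kyy = gaussian_kernel(y, y, sigma)
        kxy = gaussian_kernel(x, y, sigma)
        return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.normal(0.0, 1.0, size=(200, 1))                 # samples from p
        y = rng.laplace(0.0, 1.0 / np.sqrt(2), size=(200, 1))   # same mean/variance, different shape
        print("biased mmd^2:", mmd2_biased(x, y))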
on a more practical note , the mmd has a reasonable computational cost , when compared with other two - sample tests : given points sampled from and from , the cost is time .we also propose a less statistically efficient algorithm with a computational cost of , which can yield superior performance at a given computational cost by looking at a larger volume of data .we define three non - parametric statistical tests based on the mmd .the first two , which use distribution - independent uniform convergence bounds , provide finite sample guarantees of test performance , at the expense of being conservative in detecting differences between and .the third test is based on the asymptotic distribution of the mmd , and is in practice more sensitive to differences in distribution at small sample sizes .the present work synthesizes and expands on results of , , and who in turn build on the earlier work of .note that the latter addresses only the third kind of test , and that the approach of employs a more accurate approximation to the asymptotic distribution of the test statistic .we begin our presentation in section [ sec : basicstuffandreview ] with a formal definition of the mmd , and a proof that the population mmd is zero if and only if when is the unit ball of a universal rkhs .we also review alternative function classes for which the mmd defines a metric on probability distributions . in section [ sec : prevwork ] , we give an overview of hypothesis testing as it applies to the two - sample problem , and review other approaches to this problem .we present our first two hypothesis tests in section [ sec : firstbound ] , based on two different bounds on the deviation between the population and empirical .we take a different approach in section [ sec : asymptotictest ] , where we use the asymptotic distribution of the empirical estimate as the basis for a third test .when large volumes of data are available , the cost of computing the mmd ( quadratic in the sample size ) may be excessive : we therefore propose in section [ sec : lineartimestatistic ] a modified version of the mmd statistic that has a linear cost in the number of samples , and an associated asymptotic test . in section[ sec : relatedmethods ] , we provide an overview of methods related to the mmd in the statistics and machine learning literature .finally , in section [ sec : experiments ] , we demonstrate the performance of mmd - based two - sample tests on problems from neuroscience , bioinformatics , and attribute matching using the hungarian marriage method .our approach performs well on high dimensional data with low sample size ; in addition , we are able to successfully distinguish distributions on graph data , for which ours is the first proposed test .in this section , we present the maximum mean discrepancy ( mmd ) , and describe conditions under which it is a metric on the space of probability distributions .the mmd is defined in terms of particular function spaces that witness the difference in distributions : we therefore begin in section [ sec : mmdintro ] by introducing the mmd for some arbitrary function space . in section [ sec : mmdinrkhs ] , we compute both the population mmd and two empirical estimates when the associated function space is a reproducing kernel hilbert space , and we derive the rkhs function that witnesses the mmd for a given pair of distributions in section [ sec : mmdwitness ] . finally , we describe the mmd for more general function classes in section [ sec : mmdotherfuncclasses ] . 
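the linear-time approximation mentioned in this introduction can be sketched as follows: average a kernel quantity h (defined later in the article) over disjoint pairs of observations rather than over all pairs, so that a single pass over the data and constant memory suffice. the equal sample sizes, the gaussian kernel and the particular pairing are assumptions of this sketch and may differ in detail from the construction given later in the text.

    import numpy as np

    def k(u, v, sigma=1.0):
        return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2))

    def mmd2_linear(x, y, sigma=1.0):
        # average h over disjoint pairs (z_1, z_2), (z_3, z_4), ... where z_i = (x_i, y_i)
        m2 = min(len(x), len(y)) // 2
        total = 0.0
        for i in range(m2):
            x1, y1 = x[2 * i], y[2 * i]
            x2, y2 = x[2 * i + 1], y[2 * i + 1]
            total += k(x1, x2, sigma) + k(y1, y2, sigma) - k(x1, y2, sigma) - k(x2, y1, sigma)
        return total / m2

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.normal(size=(2000, 1))
        y = rng.normal(loc=0.3, size=(2000, 1))
        print("linear-time mmd^2 estimate:", mmd2_linear(x, y))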
our goal is to formulate a statistical test that answers the following question : [ prob : problem ] let and be borel probability measures defined on a domain .given observations and , drawn independently and identically distributed ( i.i.d . ) from and , respectively , can we decide whether ?to start with , we wish to determine a criterion that , in the population setting , takes on a unique and distinctive value only when .it will be defined based on lemma 9.3.2 of .[ lem : dudley ] let be a metric space , and let be two borel probability measures defined on . then if and only if for all , where is the space of bounded continuous functions on .although in principle allows us to identify uniquely , it is not practical to work with such a rich function class in the finite sample setting .we thus define a more general class of statistic , for as yet unspecified function classes , to measure the disparity between and .[ def : mmd ] let be a class of functions and let be defined as above .we define the maximum mean discrepancy ( mmd ) as } : = \sup_{f \in { \mathcal{f } } } \left({\mathbf{e}}_{x \sim p}[f(x ) ] - { \mathbf{e}}_{y \sim q}[f(y ) ] \right).\ ] ] calls this an integral probability metric .a biased empirical estimate of the mmd is } : = \sup_{f \in { \mathcal{f } } } \left ( \frac{1}{m } \sum_{i=1}^m f(x_i ) - \frac{1}{n } \sum_{i=1}^n f(y_i ) \right).\ ] ] the empirical mmd defined above has an upward bias ( we will define an unbiased statistic in the following section ) .we must now identify a function class that is rich enough to uniquely identify whether , yet restrictive enough to provide useful finite sample estimates ( the latter property will be established in subsequent sections ) .if is the unit ball in a reproducing kernel hilbert space , the empirical mmd can be computed very efficiently. this will be the main approach we pursue in the present study .other possible function classes are discussed at the end of this section .we will refer to as universal whenever , defined on a compact metric space and with associated kernel , is dense in with respect to the norm .it is shown in that gaussian and laplace kernels are universal .we have the following result : [ th : stronger ] let be a unit ball in a universal rkhs , defined on the compact metric space , with associated kernel . then } = 0 ] is zero if .we prove the converse by showing that }=d ] : this is equivalent to }=0 ] ( where this last result implies by lemma [ lem : dudley ] , noting that compactness of the metric space implies its separability ) .let be the universal rkhs of which is the unit ball .if }=d ] .we know that is dense in with respect to the norm : this means that for , we can find some satisfying .thus , we obtain } - { \mathbf{e}}_p{\left[\tilde{f}\right]}\right| } < \epsilon ] .we now review some properties of that will allow us to express the mmd in a more easily computable form .since is an rkhs , the operator of evaluation mapping to is continuous .thus , by the riesz representation theorem , there is a feature mapping from to such that .moreover , , where is a positive definite kernel function .the following lemma is due to .[ le : simplemmd ] denote the expectation of by } ] , where and are independent random variables drawn according to .in other words , is a trace class operator with respect to the measure .] 
then & = \sup_{{\left\|f\right\|}_{\mathcal{h}}\leq 1 } { \left\langle \mu[p ] - \mu[q],f \right\rangle } = { \left\| \mu[p ] - \mu[q ] \right\|}_{{\mathcal{h}}}.\end{aligned}\ ] ] & = & \left[\sup_{{\left\|f\right\|}_{\mathcal{h}}\leq 1 } \left ( { \mathbf{e}}_p{\left[f(x)\right ] } - { \mathbf{e}}_q{\left[f(y)\right ] } \right ) \right]^2\\ & = & \left[\sup_{{\left\|f\right\|}_{\mathcal{h}}\leq 1 } \left ( { \mathbf{e}}_p{\left[{\left\langle \phi(x),f \right\rangle}_{{\mathcal{h}}}\right ] } - { \mathbf{e}}_q{\left[{\left\langle \phi(y),f \right\rangle}_{{\mathcal{h}}}\right ] } \right ) \right]^2\\ & = & \left[\sup_{{\left\|f\right\|}_{\mathcal{h}}\leq 1 } { \left\langle \mu_p - \mu_q , f \right\rangle}_{{\mathcal{h } } } \right]^2 = { \left\| \mu_p - \mu_q \right\|}^2_{{\mathcal{h } } } \end{aligned}\ ] ] given we are in an rkhs , the norm may easily be computed in terms of kernel functions .this leads to a first empirical estimate of the mmd , which is unbiased .[ lem : rkhs - mmd ] given and independent random variables with distribution , and and independent random variables with distribution , the population is } = { \mathbf{e}}_{x , x ' \sim p } { \left[k(x , x')\right ] } -2{\mathbf{e}}_{x \sim p , y \sim q } { \left[k(x , y)\right ] } + { \mathbf{e}}_{y , y ' \sim q } { \left[k(y , y')\right]}.\ ] ] let be i.i.d .random variables , where ( i.e. we assume ) .an _ unbiased _ empirical estimate of is } = \frac{1}{(m)(m-1)}\sum_{i\neq j}^{m } h({z}_i,{z}_{j}),\ ] ] which is a one - sample u - statistic with ( we define to be symmetric in its arguments due to requirements that will arise in section [ sec : asymptotictest ] ) .starting from the expression for ] and :=\frac{1}{n } \sum_{i=1}^n \phi(y_i) ] , whether biased or unbiased , to be small if , and large if the distributions are far apart .it costs time to compute both statistics .finally , we note that recently proposed a modification of the kernel mmd statistic in lemma [ le : simplemmd ] , by scaling the feature space mean distance using the inverse within - sample covariance operator , thus employing the kernel fisher discriminant as a statistic for testing homogeneity .this statistic is shown to be related to the divergence . that witnesses the mmd has been scaled for plotting purposes , and was computed empirically on the basis of samples , using a gaussian kernel with .[fig : mmddemo1d],scaledwidth=50.0% ] it is also instructive to consider the witness which is chosen by mmd to exhibit the maximum discrepancy between the two distributions .the population and its empirical estimate are respectively - \mu[q ] \right\rangle } & = & { \mathbf{e}}_{x ' \sim p } { \left[k(x , x')\right ] } - { \mathbf{e}}_{x ' \sim q } { \left[k(x , x')\right ] } \\\hat{f}(x ) & \propto & { \left\langle \phi(x),\mu[x ] - \mu[y ] \right\rangle } & = & \frac{1}{m } \sum_{i=1}^m k(x_i , x ) - \frac{1}{n } \sum_{i=1}^n k(y_i , x ) .\end{array}\ ] ] this follows from the fact that the unit vector maximizing in a hilbert space is .we illustrate the behavior of mmd in figure [ fig : mmddemo1d ] using a one - dimensional example . 
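the unbiased estimate and the empirical witness function given just above can be combined in a short sketch that mirrors the one-dimensional gaussian-versus-laplace setting described next: the unbiased statistic uses only off-diagonal kernel terms within each sample, and the witness is the difference of the two mean kernel embeddings evaluated on a grid. the kernel bandwidth, sample sizes and grid are assumptions, and the cross term here averages over all pairs, which agrees in expectation with the u-statistic form above when the samples are independent.

    import numpy as np

    def gram(a, b, sigma=0.5):
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
        return np.exp(-d2 / (2.0 * sigma**2))

    def mmd2_unbiased(x, y, sigma=0.5):
        m, n = len(x), len(y)
        kxx, kyy, kxy = gram(x, x, sigma), gram(y, y, sigma), gram(x, y, sigma)
        return ((kxx.sum() - np.trace(kxx)) / (m * (m - 1))
                + (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
                - 2.0 * kxy.mean())

    def witness(t, x, y, sigma=0.5):
        # f_hat(t) proportional to mean_i k(x_i, t) - mean_j k(y_j, t)
        return gram(t, x, sigma).mean(axis=1) - gram(t, y, sigma).mean(axis=1)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        x = rng.normal(size=(250, 1))                              # gaussian, unit variance
        y = rng.laplace(scale=1.0 / np.sqrt(2.0), size=(250, 1))   # laplace, unit variance
        grid = np.linspace(-4.0, 4.0, 9).reshape(-1, 1)
        print("unbiased mmd^2:", mmd2_unbiased(x, y))
        print("witness on grid:", np.round(witness(grid, x, y), 3))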
the data and were generated from distributions and with equal means and variances , with gaussian and laplacian .we chose to be the unit ball in an rkhs using the gaussian kernel .we observe that the function that witnesses the mmd in other words , the function maximizing the mean discrepancy in ( [ eq : mmd - a ] ) is smooth , positive where the laplace density exceeds the gaussian density ( at the center and tails ) , and negative where the gaussian density is larger .moreover , the magnitude of is a direct reflection of the amount by which one density exceeds the other , insofar as the smoothness constraint permits it .the definition of the maximum mean discrepancy is by no means limited to rkhs .in fact , any function class that comes with uniform convergence guarantees and is sufficiently powerful will enjoy the above properties .let be a subset of some vector space .the star ] is dense in with respect to the norm. then } = 0 ] is a metric on the space of probability distributions .whenever the star of is _ not _ dense , is a pseudo - metric space .satisfies the following four properties : symmetry , triangle inequality , , and .a pseudo - metric only satisfies the first three properties . ] the first part of the proof is almost identical to that of theorem [ th : stronger ] and is therefore omitted . to see the second part, we only need to prove the triangle inequality .we have } \\ &\geq \sup_{f \in { \mathcal{f } } } { \left|e_p f - e_r f\right|}. \end{aligned}\ ] ] the first part of the theorem establishes that ] .note that any uniform convergence statements in terms of allow us immediately to characterize an estimator of explicitly .the following result shows how ( we will refine this reasoning for the rkhs case in section [ sec : firstbound ] ) .[ th : general ] let be a confidence level and assume that for some the following holds for samples drawn from : - \frac{1}{m } \sum_{i=1}^m f(x_i)\right| } > \epsilon(\delta , m , { \mathcal{f}})\right\ } } \leq \delta .\end{aligned}\ ] ] in this case we have that - { \mathrm{mmd}}_b[{\mathcal{f}},x , y]\right| } > 2 \epsilon(\delta/2,m,{\mathcal{f}})\right\ } } \leq \delta .\end{aligned}\ ] ] the proof works simply by using convexity and suprema as follows : - { \mathrm{mmd}}_b[{\mathcal{f}},x , y]\right| } \\ = & { \left|\sup_{f \in { \mathcal{f } } } { \left|{\mathbf{e}}_p[f ] - { \mathbf{e}}_q[f]\right| } - \sup_{f \in { \mathcal{f } } } { \left|\frac{1}{m } \sum_{i=1}^m f(x_i ) - \frac{1}{n } \sum_{i=1}^n f(y_i)\right|}\right| } \\\leq & \sup_{f \in { \mathcal{f } } } { \left|{\mathbf{e}}_p[f ] - { \mathbf{e}}_q[f ] - \frac{1}{m } \sum_{i=1}^m f(x_i ) + \frac{1}{n } \sum_{i=1}^n f(y_i)\right| } \\\leq & \sup_{f \in { \mathcal{f } } } { \left|{\mathbf{e}}_p[f ] - \frac{1}{m } \sum_{i=1}^m f(x_i)\right| } + \sup_{f \in { \mathcal{f } } } { \left|{\mathbf{e}}_q[f ] - \frac{1}{n } \sum_{i=1}^n f(y_i)\right|}. \end{aligned}\ ] ] bounding each of the two terms via a uniform convergence bound proves the claim .this shows that ] and that the quantity is asymptotically unbiased .any classifier which maps a set of observations with on some domain and labels , for which uniform convergence bounds exist on the convergence of the empirical loss to the expected loss , can be used to obtain a similarity measure on distributions simply assign if and for and find a classifier which is able to separate the two sets . in this casemaximization of - { \mathbf{e}}_q[f] ] for a certain banach space ( * ? ? ? 
* theorem 5.2 ) [ th : ks ] let be the class of functions of bounded variation defined on ] .it is well known that is given by the absolute convex hull of indicator functions }(\cdot) ] as follows : } - { \mathbf{e}}_q \chi_{(-\infty , x]}\right| } , \sup_{x \in { \mathbb{r } } } { \left|{\mathbf{e}}_p \chi_{[x,\infty ) } - { \mathbf{e}}_q \chi_{[x,\infty)}\right| } \right ] } \nonumber \\\nonumber & = \sup_{x \in { \mathbb{r } } } { \left|f_p(x ) - f_q(x)\right| } = { \left\|f_p - f_q\right\|}_\infty .\end{aligned}\ ] ] this completes to proof .another class of distance measures on distributions that may be written as an mmd are the earth - mover distances .we assume is a separable metric space , and define to be the space of probability measures on for which for all and ( these are the probability measures for which when ) .we then have the following definition .let and .the monge - wasserstein distance is defined as where is the set of joint distributions on with marginals and .we may interpret this as the cost ( as represented by the metric ) of transferring mass distributed according to to a distribution in accordance with , where is the movement schedule . in general , a large variety of costs of moving mass from to can be used , such as psychooptical similarity measures in image retrieval .the following theorem holds ( * ? ? ?* theorem 11.8.2 ) .[ thm : kantorovichrubinstein ] let and , where is separable . then a metric on is defined as where is the lipschitz seminorm only for . ] for real valued on .a simple example of this theorem is as follows ( * ? ? ?* exercise 1 , p. 425 ) .let with associated .then given such that , we use integration by parts to obtain where the maximum is attained for the function with derivative ( and for which ) .we recover the distance between distribution functions, one may further generalize theorem [ thm : kantorovichrubinstein ] to the set of all laws on arbitrary metric spaces ( * ? ? ? * proposition 11.3.2 ) .let and be laws on a metric space .then is a metric on , where belongs to the space of bounded lipschitz functions with norm we now define a general mean operator in the same fashion as introduced in lemma [ le : simplemmd ] for rkhs , in the context of banach spaces .denote by a banach space of functions on and let be its dual . in this casewe denote by the evaluation operator whenever it exists .the definition is implicit via fischer - riesz .moreover , for a distribution on we denote by the linear map given by }\end{aligned}\ ] ] whenever it exists .note that existence of is far from trivial for some functions ( or some distributions ) expectations of certain random variable may not exist , hence is undefined in this case .a sufficient condition for the existence of is that the norm of the evaluation operators is bounded , i.e. for some . if is the unit ball in a banach space we have the following result which allows us to find a more concise expression for ] .[ th : norms ] assume that is a banach space where the operator is well defined .then the following holds : = { \left\|\mu_p - \mu_q\right\|}_{{\mathcal{b}}^ * } \text { and } { \mathrm{mmd}}_b[{\mathcal{f } } , x , y ] = { \left\|\mu[x ] - \mu[y]\right\|}_{{\mathcal{b}}^*}. \end{aligned}\ ] ] the set of all expectations , that is , the marginal polytope \text { where } p \in { \mathcal{p}}({\mathcal{x}})\right\}} ] defines a _ metric _ on the space of probability distributions , induced by the banach space . 
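returning to the one-dimensional special cases above: the kolmogorov-smirnov distance is the supremum of the difference of the two cumulative distribution functions, and the one-dimensional earth-mover (monge-wasserstein) distance is the integral of that difference in absolute value, exactly as in the integration-by-parts example. a short sketch with piecewise-constant empirical cdfs (an assumption of this example) makes both computable directly from samples.

    import numpy as np

    def empirical_cdf_diff(x, y, grid):
        fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
        fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
        return fx - fy

    def ks_distance(x, y):
        # sup_t |F_p(t) - F_q(t)| evaluated on the pooled sample points
        grid = np.sort(np.concatenate([x, y]))
        return np.max(np.abs(empirical_cdf_diff(x, y, grid)))

    def wasserstein1(x, y):
        # 1-d earth-mover distance: integral of |F_p - F_q| dt for step-function cdfs
        grid = np.sort(np.concatenate([x, y]))
        diff = np.abs(empirical_cdf_diff(x, y, grid))[:-1]
        return np.sum(diff * np.diff(grid))

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        x = rng.normal(0.0, 1.0, 500)
        y = rng.normal(0.5, 1.0, 500)
        print("ks:", ks_distance(x, y), " w1:", wasserstein1(x, y))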
as we shall see , it is often easier to compute distances in this metric , as it will not require density estimation as an intermediate step .we now present three background results .first , we introduce the terminology used in statistical hypothesis testing .second , we demonstrate via an example that even for tests which have asymptotically no error , one can not guarantee performance at any fixed sample size without making assumptions about the distributions .finally , we briefly review some earlier approaches to the two - sample problem .having described a metric on probability distributions ( the mmd ) based on distances between their hilbert space embeddings , and empirical estimates ( biased and unbiased ) of this metric , we now address the problem of determining whether the empirical mmd shows a _ statistically significant _ difference between distributions . to this end, we briefly describe the framework of statistical hypothesis testing as it applies in the present context , following ( * ? ? ?* chapter 8) . given i.i.d .samples of size and of size , the statistical test , is used to distinguish between the null hypothesis and the alternative hypothesis .this is achieved by comparing the test statistic ] to ] , has the acceptance region < \sqrt{2k / m } \left ( 1 + \sqrt{2\log \alpha^{-1 } } \right ) . ] and the empirical means ] , uniformly at rate ( * ? ? ?* theorem b , p. 193 ) . under ,the u - statistic is degenerate , meaning . in this case, converges in distribution according to ,\ ] ] where i.i.d ., are the solutions to the eigenvalue equation and is the centred rkhs kernel . the asymptotic distribution of the test statistic under is given by ( * ?* section 5.5.1 ) , and the distribution under follows ( * ? ? ?* section 5.5.2 ) and ( * ? ? ? * appendix ) ; see appendix [ sec : distribh0 ] for details .we illustrate the mmd density under both the null and alternative hypotheses by approximating it empirically for both and .results are plotted in figure [ fg : distributionofmmd ] ., with and both gaussians with unit standard deviation , using 50 samples from each .* right : * empirical distribution of the mmd under , with a laplace distribution with unit standard deviation , and a laplace distribution with standard deviation , using 100 samples from each . in both cases ,the histograms were obtained by computing 2000 independent instances of the mmd.,title="fig : " ] , with and both gaussians with unit standard deviation , using 50 samples from each . *right : * empirical distribution of the mmd under , with a laplace distribution with unit standard deviation , and a laplace distribution with standard deviation , using 100 samples from each . in both cases ,the histograms were obtained by computing 2000 independent instances of the mmd.,title="fig : " ] our goal is to determine whether the empirical test statistic is so large as to be outside the quantile of the null distribution in ( [ eq : mmd_under_h0 ] ) ( consistency of the resulting test is guaranteed by the form of the distribution under ) .one way to estimate this quantile is using the bootstrap on the aggregated data , following .alternatively , we may approximate the null distribution by fitting pearson curves to its first four moments ( * ? ? 
?* section 18.8 ) .taking advantage of the degeneracy of the u - statistic , we obtain ( see appendix [ sec : momentsh0 ] ) ^ 2\right ) & = \frac{2}{m(m-1)}{\mathbf{e}}_{z , z'}\left[h^{2}(z , z ' ) \right ] \text { and } \\\label{moment3 } { \mathbf{e}}\left(\left[{\mathrm{mmd}}_u^2\right]^3\right ) & = \frac{8(m-2)}{m^{2}(m-1)^{2}}{\mathbf{e}}_{z , z'}\left[h(z , z'){\mathbf{e}}_{z''}\left(h(z , z'')h(z',z'')\right)\right ] + o(m^{-4 } ) .\end{aligned}\ ] ] the fourth moment ^ 4\right) ] . ] with a lower bound due to , .note that may be negative , since it is an unbiased estimator of )^2 ] . while it is expected ( as we will see explicitly later ) that has higher variance than , it is computationally much more appealing . in particular ,the statistic can be used in stream computations with need for only memory , whereas requires storage and time to compute the kernel on all interacting pairs .since is just the average over a set of random variables , hoeffding s bound and the central limit theorem readily allow us to provide both uniform convergence and asymptotic statements for it with little effort .the first follows directly from ( * ? ? ?* theorem 2 ) .[ cor : asy - linear ] assume .then where ( the same bound applies for deviations of and below ) .note that the bound of theorem [ thm : hoeffdingquadraticmmd ] is identical to that of theorem [ cor : asy - linear ] , which shows the former is rather loose .next we invoke the central limit theorem .[ cor : linearstat ] assume .then converges in distribution to a gaussian according to } \right ) \overset{d}{\rightarrow } \mathcal{n}\left(0 , \sigma^2_l \right),\ ] ] where ^ 2\right]} ] , whereas in the latter case we compute the full variance ] ( or ] will not hold .hence we may use - \mu[\pr_x \times \pr_y]\right\|} ] .then - \mu[q]\right\|}_{{\mathcal{h}}} ] . using linearity of the inner product , equation ( [ eq : shado ] )equals - \mu[q],f \right\rangle}_{{\mathcal{h}}}\right| } \mathrm{d}r(f)\\ = & { \left\|\mu[p ] - \mu[q]\right\|}_{{\mathcal{h } } } \int { \left|{\left\langle \frac{\mu[p ] - \mu[q]}{{\left\|\mu[p ] - \mu[q]\right\|}_{{\mathcal{h}}}},f \right\rangle}_{{\mathcal{h}}}\right| } \mathrm{d}r(f ) , \end{aligned}\ ] ] where the integral is independent of . to see this, note that for any , - \mu[q]}{{\left\|\mu[p ] - \mu[q]\right\|}_{{\mathcal{h}}}} ] be the norm .then ( [ eq : shado ] ) can also be written as - \mu[q]\right\|}_{{\mathcal{b}}} ] .[ th : dualwasser ] let .denote by the class of differentiable functions on for which both limits and exist and for which for all . in this case = { \left\|f_p - f_q\right\|}_b \text { where } \textstyle \frac{1}{a } + \frac{1}{b } = 1\ ] ] whenever is finite . for follows from proposition [ th : ks ] .hence , assume that .we exploit integration by parts to obtain for dx & = f(x ) [ f_p(x ) - f_q(x ) ] \bigr|_{-\infty}^\infty - \int f'(x ) [ f_p(x ) - f_q(x ) ] dx \\ & = \int -f'(x ) [ f_p(x)- f_q(x ) ] dx \end{aligned}\ ] ] since = 0 ] for all .moreover , any such function ) ] which we denote by and respectively , is -close to .moreover , by the density of ) ] there exists some ) ] .this means that for any , the set of test functions contains some which is close to the upper bound on ] under the null hypothesis . in this circumstance, we denote it by ] is as defined in ( [ eq : teststat2 ] ) , with , and furthermore assume .then ] .these are all of the form where we shorten , and we know and are always independent . 
most of the terms vanish due to ( [ eq : zeromean2 ] ) and ( [ eq : degeneratecondition ] ) .the first terms that remain take the form and there are of them , which gives us the expression \nonumber \\ & = \frac{8(n-2)}{n^{2}(n-1)^{2}}{\mathbf{e}}_{z , z'}\left[h(z , z'){\mathbf{e}}_{z''}\left(h(z , z'')h(z',z'')\right)\right].\label{eq : pre3rdmoment}\end{aligned}\ ] ] note the scaling .the remaining non - zero terms , for which and , take the form ,\ ] ] and there are of them , which gives .\label{eq : negligible3rdmoment}\ ] ] however so this term is negligible compared with ( [ eq : pre3rdmoment ] ) .thus , a reasonable approximation to the third moment is ^ 3\right ) \approx \frac{8(n-2)}{n^{2}(n-1)^{2}}{\mathbf{e}}_{z , z'}\left[h(z , z'){\mathbf{e}}_{z''}\left(h(z , z'')h(z',z'')\right)\right].\ ] ] we thank philipp berens , olivier bousquet , john langford , omri guttman , matthias hein , novi quadrianto , le song , and vishy vishwanathan for constructive discussions ; patrick warnat ( dkfz , heidelberg ) , for providing the microarray datasets ; and nikos logothetis , for providing the neural datasets .national ict australia is funded through the australian government s _ backing australia s ability _ initiative , in part through the australian research council .this work was supported in part by the ist programme of the european community , under the pascal network of excellence , ist-2002 - 506778 , and by the austrian science fund ( fwf ) , project # s9102-n04 .y. altun and a.j .smola . unifying divergence minimization and statistical inference via convex duality . in h.u .simon and g. lugosi , editors , _ proc . annual conf .computational learning theory _ , lncs , pages 139153 .springer , 2006 .n. anderson , p. hall , and d. titterington .two - sample test statistics for measuring discrepancies between two multivariate probability density functions using kernel - based density estimates ._ journal of multivariate analysis _, 50:0 4154 , 1994 .s. andrews , i. tsochantaridis , and t. hofmann .support vector machines for multiple - instance learning . in s.becker , s. thrun , and k. obermayer , editors , _ advances in neural information processing systems 15_. mit press , 2003 . k. m. borgwardt , a. gretton , m. j. rasch , h .- p .kriegel , b. schlkopf , and a. j. smola .integrating structured biological data by kernel maximum mean discrepancy . _bioinformatics ( ismb ) _ , 220 ( 14):0 e49e57 , 2006 .m. dudk and r. e. schapire .maximum entropy distribution estimation with generalized regularization . in gbor lugosi and hans u. simon , editors , _ proc . annual conf .computational learning theory_. springer verlag , june 2006 .a. gretton , o. bousquet , a.j .smola , and b. schlkopf .measuring statistical dependence with hilbert - schmidt norms . in s.jain , h. u. simon , and e. tomita , editors , _ proceedings algorithmic learning theory _ , pages 6377 , berlin , germany , 2005 .springer - verlag .a. gretton , k. borgwardt , m. rasch , b. schlkopf , and a. smola .a kernel method for the two - sample - problem . in _ advances in neural information processing systems 19 _ , pages 513520 , cambridge ,ma , 2007 . mit press .a. gretton , k. borgwardt , m. rasch , b. schlkopf , and a. smola .a kernel approach to comparing distributions ._ proceedings of the 22nd conference on artificial intelligence ( aaai-07 ) _ , pages 16371641 , 2007 .t. jebara and i. kondor .bhattacharyya and expected likelihood kernels . in b.schlkopf and m. 
warmuth , editors , _ proceedings of the sixteenth annual conference on computational learning theory _ , number 2777 in lecture notes in computer science , pages 5771 , heidelberg , germany , 2003 .springer - verlag .a. j. smola and b. schlkopf .sparse greedy matrix approximation for machine learning . in p.langley , editor , _ proc .machine learning _ , pages 911918 , san francisco , 2000 .morgan kaufmann publishers .smola , a. gretton , l. song , and b. schlkopf . a hilbert space embedding for distributions . in e.takimoto , editor , _ algorithmic learning theory _ , lecture notes on computer science .springer , 2007 .christoper k. i. williams and matthias seeger .using the nystrom method to speed up kernel machines .in t. k. leen , t. g. dietterich , and v. tresp , editors , _ advances in neural information processing systems 13 _ , pages 682688 , cambridge , ma , 2001 . mit press .
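as a practical companion to the tests described in this article, the sketch below estimates the null quantile of the statistic by the resampling route mentioned in the text: pool the two samples, re-split them at random, and recompute the statistic each time. the kernel, the number of permutations and the significance level are assumptions, and this permutation scheme is only a stand-in for the bootstrap and pearson-curve approximations discussed above.

    import numpy as np

    def gram(a, b, sigma=1.0):
        d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
        return np.exp(-d2 / (2.0 * sigma**2))

    def mmd2_biased(x, y, sigma=1.0):
        return gram(x, x, sigma).mean() + gram(y, y, sigma).mean() - 2.0 * gram(x, y, sigma).mean()

    def permutation_test(x, y, sigma=1.0, n_perm=500, alpha=0.05, seed=0):
        # re-split the pooled sample at random to approximate the null distribution of mmd^2
        rng = np.random.default_rng(seed)
        observed = mmd2_biased(x, y, sigma)
        pooled = np.vstack([x, y])
        m = len(x)
        null = np.empty(n_perm)
        for i in range(n_perm):
            idx = rng.permutation(len(pooled))
            null[i] = mmd2_biased(pooled[idx[:m]], pooled[idx[m:]], sigma)
        threshold = np.quantile(null, 1.0 - alpha)
        return observed, threshold, observed > threshold

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        x = rng.normal(size=(100, 2))
        y = rng.normal(loc=0.4, size=(100, 2))
        obs, thr, reject = permutation_test(x, y)
        print("mmd^2 =", round(obs, 4), " threshold =", round(thr, 4), " reject h0:", reject)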
we propose a framework for analyzing and comparing distributions, which allows us to design statistical tests to determine whether two samples are drawn from different distributions. our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel hilbert space (rkhs). we present two tests based on large deviation bounds for the test statistic, while a third is based on the asymptotic distribution of this statistic. the test statistic can be computed in quadratic time, although efficient linear time approximations are available. several classical metrics on distributions are recovered when the function space used to compute the difference in expectations is allowed to be more general (e.g., a banach space). we apply our two-sample tests to a variety of problems, including attribute matching for databases using the hungarian marriage method, where they perform strongly. excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests. keywords: kernel methods, two-sample test, uniform convergence bounds, schema matching, asymptotic analysis, hypothesis testing.
halving lines have been an interesting object of study for a long time. given points in general position on a plane, the minimum number of halving lines is . the maximum number of halving lines is unknown. the current lower bound of was found by nivasch, an improvement on tóth's lower bound. the current asymptotic upper bound of was proven by dey. in 2006 a tighter bound for the crossing number was found, which also improved the upper bound on the number of halving lines. in our paper we further tightened dey's bound. this was done by studying the properties of halving edges graphs, also called underlying graphs of halving lines. we discussed connected components of halving edges graphs in . in this paper we continue our studies of halving edges graphs. in particular, we introduce a construction which we call fission. fission replaces each point in a given configuration with a small cluster of points; this operation produces elegant results in relation to halving lines. we start in section [ sec : definitions ] by supplying the necessary definitions and providing examples. in section [ sec : fission ] we define fission and prove some results pertaining to the behavior of newly appearing halving lines. then, in section [ sec : chains ] we discuss the behavior of chains. in the next section [ sec : plain ] we define and discuss the simplest case of fission, called plain fission, in which no halving lines are generated within each cluster. in section [ sec : multiplication ] we note the similarities between fission and multiplication as well as lifting. in section [ sec : parallel ] and section [ sec : forest ] we study two more special cases of fission. the former is called parallel fission and describes fission when all clusters are sets of points on lines parallel to each other. the latter is a fission of a 1-forest and is an interesting example of fission where the starting graph is a 1-forest. in the last section [ sec : defission ] we discuss the opposite operation, defission, which is an analog of division for graphs that can be created through fission. let points be in general position in , where is even. a _ halving line _ is a line through 2 of the points that splits the remaining points into two sets of equal size. from our set of points, we can determine an _ underlying graph _ of vertices, where each pair of vertices is connected by an edge iff there is a halving line through the corresponding 2 points. the underlying graph is also called the _ halving edges graph_. we denote the number of vertices in graph as . in dealing with halving lines, we consider notions from both euclidean geometry and graph theory. a _ geometric graph _, or a _ geograph _ for short, is a pair of sets , where is a set of points on the coordinate plane and consists of pairs of elements from . in essence, a geograph is a graph with each of its vertices assigned to a distinct point on the plane. many problems in the intersection of geometry and graph theory have relevance to geographs, and the study of halving lines is no different. earlier works on geographs include . suppose we have four non-collinear points. if their convex hull is a quadrilateral, then there are two halving lines.
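before turning to the remaining examples, the definition can be checked directly for small configurations with a brute-force sketch; it assumes an even number of points in general position (no three collinear) and uses the four-point convex-position case just mentioned as a test. the later examples in this section can be checked the same way.

    from itertools import combinations

    def halving_edges(points):
        # return all pairs (i, j) whose line splits the remaining points into equal halves
        n = len(points)
        assert n % 2 == 0, "the definition assumes an even number of points"
        edges = []
        for i, j in combinations(range(n), 2):
            (x1, y1), (x2, y2) = points[i], points[j]
            left = right = 0
            for t, (x, y) in enumerate(points):
                if t in (i, j):
                    continue
                cross = (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)
                if cross > 0:
                    left += 1
                elif cross < 0:
                    right += 1
            if left == right:
                edges.append((i, j))
        return edges

    if __name__ == "__main__":
        # four points in convex position: the two halving lines are the "diagonal" pairs
        quadrilateral = [(0.0, 0.0), (1.0, 0.1), (1.1, 1.0), (0.1, 1.1)]
        print(halving_edges(quadrilateral))   # expected: [(0, 2), (1, 3)]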
if their convex hull is a triangle , then there are three halving lines .both cases are shown on figure [ fig:4points ] .[ fig : squaretriangle ] if all points belong to the convex hull of the point configuration , then each point lies on exactly one halving line .the number of halving lines is , and the underlying graph is a matching graph a union of disjoint edges .the left side of figure [ fig:4points ] shows an example of this configuration .if our point configuration is a regular -gon and a point at its center , then the underlying graph is a star with a center of degree .the configuration has halving lines .the right side of figure [ fig:4points ] shows an example of this configuration .dey uses the notion of _ convex chains _ to improve the upper bound on the maximum number of halving lines .chains prove to be useful in proving other properties of the underlying graph as seen in .the following construction partitions halving lines into chains : 1 .choose an orientation to define as up . "the leftmost vertices are called the left half , and the rightmost vertices are called the right half . 2 .start with a vertex on the left half of the graph , and take a vertical line passing through this vertex .3 . rotate this line clockwise until it either aligns itself with an edge , or becomes vertical again .4 . if it aligns itself with an edge in the underlying graph , define this edge to be part of the chain , and continue rotating the line about the rightmost vertex in the current chain .if the line becomes vertical , we terminate the process .the set of edges in our set is defined as the chain .repeat step 2 on a different point on the left half of the underlying graph until every edge is part of a chain .the construction is illustrated in figure [ fig : chains ] .the thickest broken path is the first chain .the next chain is thinner , and the third chain is the thinnest .note that the chains we get are determined by which direction we choose as up . "the following properties of chains follow immediately . later properties on the listfollow from the previous ones and : * a vertex on the left half of the underlying graph is a left endpoint of a chain . * the process is reversible .we could start each chain from the right half and rotate the line counterclockwise instead , and obtain the same chains . *a vertex on the right half of the underlying graph is a right endpoint of a chain . *every vertex is the endpoint of exactly one chain . *the number of chains is exactly . *the degrees of the vertices are odd .indeed , each vertex has one chain ending at it and several passing through it . *every halving line is part of exactly one chain .suppose we have a set of points .any affine transformation does not change the set of halving lines .sometimes it is useful to picture that our points are squeezed into a long narrow rectangle . this way our points are almost on a segment .we call this procedure _ segmenterizing _ , and we introduced it in .the figure [ fig : squeezing ] shows three pictures .the first picture has six points , that we would squeeze towards the line .the second picture shows the configuration squeezed by a factor of 10 , and if we make the factor arbitrary large the points all lie very close to a segment as shown on the last picture . the following construction we call a _ cross _ .we form the cross as follows .we squeeze initial sets into long narrow segments . 
then we intersect these segments at middle lines , so that half of the points of each segment lie on one side of all halving lines that pass through the points of the other segment ( see figure [ fig : cross ] ) . given two sets of points with and points respectivelywhose underlying geographs are and , the cross is the construction of points on the plane whose underlying geograph has two isolated components and .note that the left example in figure [ fig:4points ] is a cross of two 2-paths .we introduce a construction that we call _ k - fission_. replace each point in the graph by a small cluster of points that all are not more than away from the replaced point , for some small .we call the cluster that replaces the point the _ cluster_. figure [ fig:2fission ] shows a picture of a 2-fission of the star configuration from the right of figure [ fig:4points ] .how small should be ?first , we want to be much smaller than any distance between the given points. this way different clusters do not interfere with each other .in addition , we want the lines connecting points from two different clusters not to pass through other clusters . for this purposewe introduce the notion of -corridor .that means that is small enough so that if a line connects two points from the clusters of and , then the other clusters are on the same side of the line if and only if the corresponding points prior to fission are on the same side of the line .given a segment , call all the directions that can be formed by lines connecting points that are not more than apart from the ends of the segment , its _-corridor_. we assume that is so small that no two segments connecting points in our configuration have overlapping -corridors . given an underlying graph and its -fission , we know that . if is an edge of , then any edge of with one endpoint in the cluster and one endpoint in the cluster is said to _ traverse _ these two clusters .we begin by proving the following two lemmas : if two points and do not form a halving edge of , then there are no halving edges in traversing from the cluster of to the cluster of .suppose the line has points from on one side , and points on the other side , where is not .then a line traversing from the cluster of to the cluster of has at least points from on one side and at least points on the other side .the leftover points can not bring the balance to zero .[ thm : traversing ] for any halving edge of , there are exactly corresponding halving edges in traversing from the cluster of to the cluster of .consider the geograph consisting of solely the vertices in which are in the clusters of and .orient so that the vertices which correspond to all lie to the left half of , and the vertices which correspond to all lie on the right half .then since there are exactly chains in , and each chain contains one edge that traverses from the left half to the right half , there must be exactly edges between the cluster of and the cluster of .now since is a fission of , all the vertices of which belong to neither the cluster of nor must be divided exactly in half by any line of the form , where are vertices of such that is in the cluster of and is in the cluster of .hence , the edges that traverse from the cluster of to the cluster of are preserved upon deleting all the vertices of other clusters . 
but this gives precisely the geograph , so we are done .the previous proof gave us a description how traversing halving edges of a -fission are arranged .[ thm : traversinginduced ] for any halving edge of , the corresponding traversing halving edges in are halving edges of a subgraph formed by points of two clusters and .intuitively , our lemmas assert that every edge of splits conveniently into traversing edges in under fission and there are no other traversing edges. however , it does not assert anything about the number of non - traversing edges of , namely those whose endpoints lie in the same cluster . in figure[ fig : segment2fission ] there is a 2-point configuration on the left . in the middle its 2-fission contains a halving line connecting two points within the cluster of the left vertex . on the rightthe 2-fission does not contain non - traversing edges .there are , however , restrictions on the orientation of such halving lines if they do appear .[ thm : corridor ] if there is a non - traversing halving line within a cluster , then the direction of this halving line is within the -corridor of one of the halving edges of with an end point at .if the direction of a line connecting two points in a cluster does not belong to any of the -corridors , then the line can not pass through any of the other clusters .that means that the number of full clusters on each side of the line is different and this line can not be a halving line .theorem [ thm : corridor ] shows that it is easy to build fission examples such that there are no halving lines within clusters .we just need to choose points in clusters in such a way that lines connecting any two points in the cluster have directions that do not coincide with any corridors .we will call a fission such that non - traversing edges do not appear a _ plain _ fission .a geograph is a plain -fission of a geograph if and only if the number of edges of is times the number of edges of .it is natural to consider the relation between the chains of an underlying geograph , and the chains of its fission . we can use lemmas above to deduce that chains in split into chains in . for every chain in with vertices ,there are exactly corresponding chains in which traverse from the cluster of to the cluster of for every .if a chain in contains edge , and a chain in contains an edge traversing from the cluster of to the cluster of , then we say that _ overlaps _ with .chain can not overlap with more than one chain : this follows from the construction of chains , in which every edge of a chain determines the next .now note that there are exactly times as many chains in than in .therefore , we must have exactly chains in which overlap with any chain in . since every edge in corresponds to traversing edges in , the chains in overlapping with must traverse all the clusters corresponding to the vertices of , as desired .we know that in plain fission there are exactly edges traversing two clusters and . moreover , from the corollary [ thm : traversinginduced ] we know that the having lines traversing and are the halving lines of the graph consisting of vertices that belong to and clusters. 
as there are no more halving lines within the two clusters and each vertex of a graph has to have at least one halving line we get the following lemma .[ thm : covering ] every vertex in a cluster of a plain fission has exactly one halving edge connecting to a cluster if and only if and are connected in .every vertex in the cluster of a plain fission has the same degree , which is the degree of in .we already know from figure [ fig : segment2fission ] that two different -fissions of the same graph can produce different halving edges graphs .is the same true for plain fissions ?it turns out that two plain fissions of the same graph can indeed produce different graphs .suppose graph has a cycle of length 3 , as for example a graph with 6 vertices in figure [ fig : graph6 ] .this graph can produce two different resulting graphs under plain 2-fission .we will only draw the part of the graph that corresponds to the 3-cycle , since the halving edges of a fission of a subgraph do not depend on the rest of the graph , see corollary [ thm : traversinginduced ] . in figure[ fig : differentfissions ] the left 2-fission example consists of two 3-cycles , and the right example consists of a single 6-cycle .suppose that every cluster consists of the configurations of points .we can view this fission as multiplication of the halving edges graph by a configuration . if configuration does not have two points that generate a line within the direction of any -corridor , then it is a plain fission .with such a multiplication the resulting graph has times more vertices and times more edges .the identity operation in this multiplication is a -fission : replacing every vertex by itself .we can also see that fission is transitive .we can consider an -fission of a -fission of a graph as a -fission. does this multiplication depend on configuration or only on the number of points ?figure [ fig : differentsameclusterfissions ] shows a part of a 3-fission of a triangle subgraph of a graph .the resulting halving lines form a 9-cycle .we will see in the next section that a 3-fission of a triangle subgraph is a 3-cycle and a 6-cycle if the cluster consists of 3 nearly collinear points .this proves that the graph does depend on the configuration .now we consider how fission affects connected components .if a graph is a -fission of a connected graph , then every connected component of passes through all the clusters .moreover , any given connected component of has the same number of vertices in every cluster of .take any two points of which have halving lines passing through them , say and .let be a connected component of .it suffices to show that has the same number of vertices in the clusters of and .removing all the vertices of not belonging to these two clusters does not affect the halving lines from -cluster to -cluster .call the resulting graph .orient such that all the vertices of the -cluster lie on the left half .suppose that is a connected component of .by , has as many vertices on the left half of as on the right half .hence , it has as many vertices in and clusters , regardless of orientation .however , this is true for any choice of connected component of , and the intersection of and is simply a union of connected components of .hence , itself must have equally many vertices in the clusters of and . 
by the same argument , given any two clusters of which have halving lines passing through them, we can conclude that has the same number of vertices in each .but is a -fission of , which is connected , so in fact has the same number of vertices in every cluster .[ thm : fissionconcor ] if is connected , then the number of vertices in any connected component of its -fission is a multiple of .note that plain fission can be viewed as graph lifting or covering .if and are two graphs , then is called a _ covering graph _ or a _ lift _ of if there is a surjection from the points of to the points of and the restriction of this map to any vertex of is a bijection with respect to the neighborhoods .there is a natural surjection of , that is a -fission of , onto : every point in the -cluster maps to .the -fission graph is a covering of , if the -fission is plain .the proof directly follows from lemma [ thm : covering ] .we now consider a special type of plain fission .suppose that we replace each point by the same set of points on a line , and that the orientation of this line does not belong to any -corridor .we call such a fission a _ parallel fission_. after properly rotating a parallel -fission graph we can assume that points in every cluster form a horizontal line .thus , we can identify the left and the right points in each cluster . to prevent our -fission graph from having collinear points, we can perturb the points very slightly to almost lie on a line within each cluster . in this manner, any two points in a cluster will be connected by a line almost in the same direction , and all the angular directions achieved by connecting two such points form a set of directions that does not overlap with any of the -corridors of edges of graph . a parallel -fission of the halving edges graph can be divided into connected components , where each component contains only -th left and -th right points in each cluster . if a traversing edge from to cluster starts with the -th point from the left , it has to connect to the -th point from the right .the lemma allows to reduce the description of the structure of the parallel -fission graph to 2-fission graphs .but first we add a standard notation : is a complete graph with vertices .the parallel 2-fission of the graph is the tensor product of and .the proof of the lemma is straightforward after we define tensor products of two graphs that was introduced by a. n. whitehead and b. russell in .the _ tensor product _ of graphs and is a graph such that the vertex set of is the cartesian product of the vertex sets of and ; and any two vertices and are adjacent in if and only if is adjacent with and is adjacent with . in particularif is , then any two vertices and are adjacent if is adjacent with and and are two different vertices of .now we present a theorem which immediately follows from the two lemmas above .the parallel -fission of the graph is the union of tensor products of and if is even . if is odd , then parallel -fission of the graph is the union of tensor products of and and one copy of .the following corollaries follow from properties of tensor products .the parallel -fission of the graph is bipartite . if is bipartite , then the parallel -fission of is copies of .in previous sections we considered -fissions where there are no halving edges within one cluster , namely plain fissions . now we discuss examples where there are a lot of halving edges within the cluster . but first , we introduce some definitions . 
a graph is called a _ 1-tree _ or _ unicyclic _ if it is connected and has exactly one cycle . a graph in which each connected component is a 1-tree is called a _ 1-forest_. it is easy to check that if graph is a 1-forest , then we can direct its edges in such a way that every vertex has outdegree 1 .this directed graph is called the _ functional graph_.we define a new type of fission which we call _ forest _ fission . as a prerequesite ,our halving edges graph must be a 1-forest .each cluster will be a configuration of points that match some given halving lines configuration .we define the forest fission as follows .replace each vertex of by a segmenterized copy of that fits in an neighborhood of the vertex .in addition , let the segmenterized vertices be aligned with the edge coming out of the original vertex in the functional graph of and the vertex divide the point in in half .figure [ fig : forestfission ] gives an example of forest fission .we start with a smallest halving line configuration that is a forest . in this casethe configuration has 6 vertices .the left picture shows the directions of each edge so that the outdegree is 1 . in the middle picturewe replace each vertex with a segment of two vertices , so that the segment is oriented in the direction of the outedge and the middle of the segment is located at the former vertex .the last picture shows the resulting forest fission configuration .the non - traversing halving lines of the forest fission within any given cluster are exactly the halving lines of that cluster when all other clusters are removed .consider two points in the cluster of .suppose that in , the line from was directed towards .every line passing through two points in the same cluster divides all clusters other than and in half .also , it divides the points in the -cluster in half .therefore , it divides all points not in -cluster in half .thus , a line through two points in the -cluster is the halving line of the fission graph if and only if it is a halving line in , or equivalently , an non - traversing halving line in .if graph has edges and graph has edges , then the forest fission gives a set with vertices and edges .as is 1-forest , we know that . the smallest known unicyclic halving edges graph has 6 vertices and is depicted on figure [ fig : graph6 ] .if we use this graph as in the forest fission construction we can get from a graph with vertices and edges .suppose we start with the same as and create their forest fission graph .then we fission the graph with the resulting graph from the previous step and continue this recursively .we will get a sequence of graphs that have vertices and edges .that gives us another construction of an -point configuration with at least halving edges .thanks to multiplication , we have a notion of divisibility .if is a fission of , we will say that _ divides _ .the most natural follow - up questions to ask are those related to divisibility of numbers .for instance , given a geograph , we can trivially represent it as the fission of another smaller graph : every halving line graph divides the 2-path graph .we can segmentarize , divide vertices into left and right halves and move these halves away from each other , so that each half is a cluster . 
as we mentionedbefore if divides , then divides .the following lemma follows : if has vertices , where is prime , then it only has two divisors itself and the two - path .as we mentioned before , fission is a transitive operation , so divisibility is transitive as well .are there prime " halving geographs ? if fission is similar to multiplication , what are prime building blocks of halving edges graphs ?let us call a halving edges graph _ primitive _ if it only divides itself and 2-path . clearly if the geograph in question has vertices , where is prime , then it is primitive . are there other non - trivially primitive graphs ?the graph that is a cross of a 6-star and a 2-path is primitive .the only possibility for this to be a non - trivial fission is for it to be a 2-fission of a connected graph with 4 vertices .then by corollary [ thm : fissionconcor ] each connected component has to have at least 4 vertices .contradiction .we can imitate the proof above to easily produce other primitive underlying geographs which do not have vertices , with several connected components .however , it is unclear whether there exists a primitive connected halving edges graph ; this poses an interesting object of study for future exploration into fission and divisibility .we are grateful to professor jacob fox for helpful discussions .the second author was supported by urop .b. m. brego , s. fernndez - merchant , j. leaos and g. salazar , the maximum number of halving lines and the rectilinear crossing number of for , _ electronic notes in discrete mathematics _, * 30 * ( 2008 ) , 261266 .g. nivasch , an improved , simple construction of many halving edges , _ surveys on discrete and computational geometry : twenty years later _ ( j. e. goodman et al . , editors ) , contemporary mathematics * 453 * , ams , ( 2008 ) , 299305 .
in this paper we discuss an operation on halving edges graphs that we call fission . fission replaces each point in a given configuration with a small cluster of points . the operation interacts nicely with halving edges , so we examine its properties in detail .
statistical approaches to complex problems have been successful in a wide range of areas , from complex nuclei to the statistical program for complex systems ( see for instance ref . ) , including the fruitful analogies with statistical mechanics . the main goal is to find universal properties , i.e. , properties that do not depend on the specifics of the system treated but on very few symmetry or general considerations .an example of such an approach is represented by the random matrix theory ( rmt ) which has been applied successfully to wave systems in a range of typical lengths from one femtometre to one metre .attempts to apply a rmt approach to a non - polynomial problem like the euclidean traveling salesman problem ( tsp ) is not the exception . for an _ ensemble _ of cities ,randomly distributed , general statistical properties appear and they are well described for a daisy model of rank . however , in realistic situations the specific system seems to be important , as we shall see later , and the initial conditions rule the statistical properties of the solutions . in the present work we deal with a similar analysis both for actual tsp maps for several countries through the planet and for some toy models that could help to understand the role of initial conditions in the transition to universal statistical properties .an interesting link appears with the distribution of corporate vote when we map the lengths in the tsp problem to the number of votes , after a proper normalization .the paper is organized as follows : in section ii we define the tsp and the statistical measures we shall use .special attention will be paid to the separation of secular and fluctuation properties . here , we analyze maps of actual countries . in section iiiwe discuss the transition from a map defined on a rectangular grid to a randomly distributed one using two random perturbations to the city positions : i ) a uniform random distribution with a width and , ii ) a gaussian distribution .in the same section , we discuss the relation of the latter toy model to the distributions of corporate vote .the conclusions appear in section iv .in the traveling salesman problem a seller visits cities with given positions , returning to her or his city of origin .each city is visited only once and the task is to make the circuit as short as possible .this problem pertains to the category of so called _ np - complete _ problems , whose computational time for an exact solution increases as an exponential function of .it is , also , a minimization problem and it has the property that the objective function has many local minima .several algorithms exist in order to solve it and the development of much more efficient ones is matter of current research . since we are interested in statistical properties of the quasi - optimal paths , small differences between the different algorithms are not of relevance and we shall consider all of them as equivalent .the computational time is irrelevant as well . in this paperwe used the results of concorde for the actual - country tsp , and simulated annealing and 2-optimal for the analysis of specific models . the first step in the analysis consist in separating the fluctuating properties from the secular ones in the data .this process could be nontrivial .the idea behind this procedure is that all the peculiarities of the system resides in the secular part and the information carried out by the fluctuations have an universal character . 
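since , as noted above , the statistics of the quasi - optimal paths do not depend on the particular heuristic used , a minimal 2 - opt sketch is enough to produce such paths for experimentation . this is not the concorde or simulated annealing code used for the results in this paper ; the random city coordinates , the stopping rule and the parameter values are illustrative .

```python
import math, random

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(cities, max_passes=50):
    """Repeatedly reverse tour segments while doing so shortens the tour."""
    n = len(cities)
    tour = list(range(n))
    for _ in range(max_passes):
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if (j + 1) % n == i:      # the two candidate edges share a city
                    continue
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                old = math.dist(cities[a], cities[b]) + math.dist(cities[c], cities[d])
                new = math.dist(cities[a], cities[c]) + math.dist(cities[b], cities[d])
                if new < old - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
        if not improved:
            break
    return tour

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(60)]
tour = two_opt(cities)
print(round(tour_length(tour, cities), 3))
```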
in the energy spectrum of many quantum systems ,when almost all the symmetries are broken , this kind of analysis has shown that the fluctuations are universal and regarding only to the existence or not of a global symmetry like time reversal invariance .the peculiarities of the system , wheather it is a many particle nucleus , an atom in the presence of a strong electromagnetic field , or a billiard with a chaotic classical dynamics , all these characteristics are in the secular part . in the present case , we assume that the dynamics that rules where the cities are located is sufficiently complex in order to admit this kind of analysis . if not , as we shall see later , the next step in our analysis is to search for the reason of such a lack of universality . in order to perform this separationwe consider the density of cumulative lengths , , named with , the dirac s delta function .the cumulative lengths are ordered as they appear in the quasi optimal path and are defined as with being the length between city and .the cities are located at in the plane .the corresponding cumulative density is with , the heaviside function .the task is to separate as .the secular part is calculated using a polynomial fitting of degree .after this , we consider as the variable to analyze the one mapped as we shall study the distributions of the set of numbers .notice that this transformation makes that .the analysis is performed on windows of different size , this kind of analysis is always of local character . for historical reasons this spreading procedureis named unfolding .from all the statistics we shall focus on the nearest neighbor distribution , , with ( which is the normalized length ) , and the number variance , , for the short and large range correlations , respectively .the is the variance in the number of levels in a box of size , see for a larger explanation .the actual maps considered in this work were those which are reported in concorde s web page , and we selected those that have a number of cities larger than and present no duplications .the countries selected are reported in table [ tab:1 ] ..[tab:1 ] countries considered for the study .the quasi - optimal path was obtained from the web page in ref .maximum and average length ( in _ km _ ) are reported too .[ cols="<,<,>,>,>",options="header " , ] the results in the are full of variations , as expected . the secular part was calculated with several windows size and polynomial degrees looking for those parameter values that stabilize the statistics .however , a no universal behavior appears .the graphs presented in fig .[ fig:0 ] for the nearest neighbor distribution were calculated using a polynomial fitting of fourth degree and windows of lengths , the histograms have a bin of size .there we show the distribution for all the countries in table [ tab:1 ] .there is no a single distribution even when some of the countries show a exponential decay as occurs with finland ( see fig .[ fig:1 ] ) .we analyzed the data with several windows size and polynomial degrees with similar results .no regularities were found in our analysis but all the histograms show almost a maximum and some of them present a polynomial grow , , at the beginning of the distribution . 
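the unfolding procedure just described can be sketched in a few lines . for simplicity the sketch fits a single global polynomial to the cumulative density instead of working locally in windows as done in the paper , so it is only an illustration of the idea ; the toy input and the function names are assumptions .

```python
import numpy as np

def unfold(lengths, degree=4):
    """Map cumulative path lengths l_i to unfolded variables s_i = N_av(l_i),
    where N_av is a polynomial fit (the secular part) to the staircase N(l)."""
    l = np.sort(np.asarray(lengths, dtype=float))
    staircase = np.arange(1, len(l) + 1)          # N(l_i) = i
    coeffs = np.polyfit(l, staircase, degree)     # smooth, secular part
    return np.polyval(coeffs, l)

def spacing_distribution(lengths, bins=40):
    """Histogram of nearest-neighbour spacings of the unfolded sequence."""
    s = unfold(lengths)
    spacings = np.diff(s)
    spacings /= spacings.mean()                   # enforce mean spacing 1
    return np.histogram(spacings, bins=bins, density=True)

# toy input: cumulative lengths of a fake quasi-optimal path
rng = np.random.default_rng(0)
cumulative = np.cumsum(rng.exponential(scale=1.0, size=2000))
hist, edges = spacing_distribution(cumulative)
print(hist[:5])
```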
this could be seen in burma ( 1 ) , japan ( 2 ) and sweden ( 13 ) .notice that the distributions present the maximum at meanwhile the average is at .hence , all the distributions present a long tail .however the type of decay is diverse as well ( see fig .[ fig:1 ] ) .some distributions present a clear exponential decay , like finland . some others present a mixed decay as is the case of china . for sake of claritywe do not show all the cases in fig .[ fig:1 ] .notice that the bin size is larger in fig .[ fig:1 ] compared to that used in fig .[ fig:0 ] , for this reason , the polynomial grow does not appear .we try to show countries of several sizes and urban configurations .there are some countries with an urban density almost constant , like sweden , and some others with long tails and a maximum for very short distances as canada .changes in the parameters of analysis , windows size and fitting polynomial degree , do not give us an universal behavior .recall that this is not the case for an _ ensemble _ of randomly distributed cities as seen in ref. .from all these results it is clear that the initial distribution of cities plays a crucial role in the final quasi - optimal result .this is the subject of the next section .the statistical properties of quasi - optimal paths for an ensemble of randomly distributed cities in the euclidean plane is well described by the so called daisy model of rank .these models are the result of retaining each number from a sequence of random numbers which follow a poisson distribution , i.e , its -nearest neighbor distribution has the form with and corresponds to first nearest neighbors . the rarefied sequence must be re - scaled in order to recover the norm and the proper average of the -nearest neighbor distributions . for the general daisy model of rank have the following expression for the nearest neighbor distribution : and , , \end{split } \label{s2daisy}\ ] ] for the number variance . here are the roots of unity and stands for . in the case of , both ,the nearest neighbor distribution and the statistics have the theoretical results , namely and . \label{s2daisy2}\ ] ] as mentioned above , quasi - optimal paths of an _ ensemble _ of maps with cities randomly distributed nearly follow equations ( [ pdsdaisy2 ] ) and ( [ s2daisy2 ] ) .again , the final distribution of lengths in the quasi - optimal paths depends on the initial distribution of cities . in order to understand the role of initial distribution of cities ,we depart from a master map of cities on a square grid of side , where the initial position of each city is in the intersections . in this case , the distribution of lengths for the quasi - optimal path is close to a delta function with a small tail .now , we take an ensemble of maps , each one is built up relocating the cities from their original positions using a probability distribution of width , .that is where the numbers and are taken from which have zero mean and and are integers .two distributions were selected , i ) a uniform one with width , that we call model i , and ii ) a gaussian one with the same width , that we named model ii . 
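the construction of models i and ii just described is easy to sketch . how the width parameter is mapped onto the half - width of the uniform distribution and onto the standard deviation of the gaussian one is an assumed convention here , as are the grid size , the lattice constant and the function names .

```python
import numpy as np

def perturbed_grid(L=32, delta=0.3, kind="uniform", lattice_const=1.0, seed=None):
    """Cities on an L x L square grid, each displaced by noise of width
    delta (in units of the lattice constant).  kind='uniform' plays the role
    of model I, kind='gaussian' the role of model II (assumed conventions)."""
    rng = np.random.default_rng(seed)
    ii, jj = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    grid = np.stack([ii, jj], axis=-1).reshape(-1, 2) * lattice_const
    if kind == "uniform":
        noise = rng.uniform(-delta, delta, size=grid.shape) * lattice_const
    else:
        noise = rng.normal(0.0, delta, size=grid.shape) * lattice_const
    return grid + noise

# an ensemble of independently perturbed maps
ensemble = [perturbed_grid(L=16, delta=0.5, kind="gaussian", seed=k) for k in range(10)]
print(ensemble[0].shape)   # (256, 2)
```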
in figure [ fig:2a ]several examples are shown of the quasi - optimal paths .the starting grid or master rectangle defining the cities is similar to that of figure [ fig:2a](a ) .the distribution of lengths shows a transition from the original one to the limit case given by equations ( [ pdsdaisy2 ] ) and ( [ s2daisy2 ] ) as can be seen in fig .[ fig:2 ] for a uniform distribution and in fig .[ fig:3 ] for the gaussian one . in these figureswe plot in both ( a ) linear and ( b ) logarithmic scale .the analysis was performed on an ensemble of maps of cities each .the variable was considered in the interval $ ] .only relevant values are reported . for model i, we plotted the histograms for and ( see fig.[fig:2 ] ) .for the first case the distribution departs barely from the initial one but , the quasi optimal solution presents a revival for slightly below (fig .[ fig:2](a ) the black histogram with circles ) , and for values slightly larger than representing the existence of diagonal lengths in the grid and lengths of order two s ( in this uncorrelated variable , see fig . [fig:2a](a ) ) .the histograms show a continuous transition to the distribution given by eq .( [ pdsdaisy2 ] ) when ( blue histogram with stars ) .the tail , in this case , follows very closely the daisy model of rank 2 to it and the start of is consistent with it ( see fig .[ fig:2](b ) ) .the histogram in red with crosses corresponds to and follows very closely the daisy model of rank .meanwhile , the histogram in green with diamonds , corresponding to , follows the rank model .the second case corresponds to the value of when the distribution of the cities start to admit overlapping , i.e .when the histogram coincides with the daisy model of rank ( not shown ) . for model iithere exists a transition and the fitting to a daisy model is , as well , defined as in model i. in fig .[ fig:3 ] we plot the cases and which are close to daisy models of rank and , respectively . in fig .[ fig:3](b ) we re - plotted in semi - log scale in order to see the tail decay .the interpretation is the same as the previous one .fluctuations in the tails for large values of are observed in the gaussian case .the reason for them is that the map admits cities positioned far away from the master rectangle ( not shown ) .certainly , a fit using weibull or brody distributions is possible for both models , however it does not exist a link with any physical model whereas the daisy models are related to the 1-dimensional coulomb problem .an interesting link exists in this context between the distribution of lengths in the quasi - optimal path and the vote distribution trough daisy models . in ref . it has been established that the distribution of corporate vote in mexico , during elections from to follows a daisy model with ranks from to . in the present case ,exponential decays that are compatible with occur at for model i and for model ii , i.e. , for the former and for the latter . in both cases ,distributions of the random perturbation overlap is significant .other links look possible and are under consideration .this behavior remains poorly understood , but it opens new questions about the relationship between statistical mechanics problems and social behavior . 
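the daisy - model curves used for comparison above can be sampled directly from their definition : retain every ( r + 1)-th level of a poisson sequence and rescale to unit mean spacing , so that each retained spacing is a sum of r + 1 exponential spacings . the sample size , the binning and the numerical check against the rank - 2 form below are illustrative choices .

```python
import numpy as np
from math import factorial

def daisy_spacings(rank, n_samples=200000, seed=0):
    """Nearest-neighbour spacings of a rank-r daisy model: every (r+1)-th
    level of a Poisson sequence is kept, so each retained spacing is a sum
    of r+1 independent exponential spacings, rescaled to mean 1."""
    rng = np.random.default_rng(seed)
    e = rng.exponential(1.0, size=n_samples)
    m = (len(e) // (rank + 1)) * (rank + 1)       # trim so the array reshapes evenly
    s = e[:m].reshape(-1, rank + 1).sum(axis=1)
    return s / s.mean()

def p_daisy(s, rank):
    """p_r(s) = (r+1)^(r+1)/r! * s^r * exp(-(r+1)s); rank 2 gives (27/2) s^2 e^(-3s)."""
    return (rank + 1) ** (rank + 1) / factorial(rank) * s ** rank * np.exp(-(rank + 1) * s)

s = daisy_spacings(rank=2)
hist, edges = np.histogram(s, bins=60, range=(0.0, 4.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - p_daisy(centres, 2))))   # small for large samples
```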
in the case of long range behavior , the analysis with the statistics looks promising , but the asymptotic behavior does not coincide with daisy models .this statistics is highly sensitive to the unfolding procedure described before .wider studies are currently in progress , however we give in advance that the statistics for model i at is close to that of the daisy model of rank 2 .the slope is which is close to the value obtained for the daisy model . for both modelsthe starts following the behavior of eq .( [ s2daisy2 ] ) . for small values of numerical results show an oscillatory behavior compatible with the presented in the daisy model , eq .( [ s2daisy ] ) , even when the asymptotic slope is not the correct one .we presented a statistical approach to the traveling salesman problem ( tsp ) .no universal behavior appears in the case of actual distribution of cities for several countries world wide as it appears in the case of the euclidean tsp with a uniform random distribution of cities . as a first step to understand the role of the initial distributions of cities , we study the nearest neighbor distribution for the lengths of quasi - optimal paths for a model which start with a periodic distribution of cities on a grid and it is perturbed by a random fluctuation of width .we use two models for the fluctuation : model i , a uniform distribution and , model ii , a gaussian one .both models evolve , as a function of , from a delta like initial distribution to one well described by a daisy model of rank ( see eq . ( [ pdsdaisy2 ] ) ) . as the perturbative distribution widthis increased the evolution of the models present a nearest neighbor distributions compatible with several ranks of the daisy model .two values of the width are important , the first one is when the random perturbation admits an overlap of the cities originally at the periodic sites . for model i that occurs when ( the total width of the distribution is ) , being the distance of the periodic lattice .for model ii that happens when , i.e. two standard deviations of the gaussian distribution . in these casesthe histogram of lengths fits a rank daisy model .an interesting link appears when we notice that such a daisy model fits the tail of the distribution of votes ( for the chambers ) for a corporate party in mexico during election of 2006 .the reason of this coincidence remains open and requires further analysis .an attempt in this direction is presented in ref. .another open question concerns about if an _ ensemble _ of world wide countries have universal properties or not .this topic is in current research .hhs was supported by promep 2115/35621 and partially by dgapa / papiit in-111308 .wigner . _ proc .* 47 * ( 1951 ) 790 .j. palis jr ._ dynamical systems and chaos _ , vol.*1 * , ( singapore , world scientific , 1995 ) .p 217 - 225 .m. l. mehta ._ random matrices_. 3rd . ed .( amsterdam , elsevier , 2004 ) .t. guhr , a .mller - groeling , and h.a .weidenmller . _ phys .* 299 * ( 1998 ) 190 .mndez , a. valladares , j. flores , t.h .seligman , and o. bohigas ._ physica a _ * 232*(1996 ) 554 .h. hernndez - saldaa , j. flores and t.h .e. _ * 60 * ( 1999 ) 449 .w.h . press .teukolsky , w.t .vetterling , b.p ._ numerical recipes .the art for scientific computing_. 3rd . ed .( n.y . , cambridge univ . press , 2007 ) .d. l. applegate , r. e. bixby , v. chvatal , w. cook _ the traveling salesman problem : a computational study_. 
( princeton university press , 2006 ) .chapter 16 .http://www.tsp.gatech.edu/world/countries.html g.a .croes . _ op .* 6*(6 ) ( 1958 ) 791 - 812 .h. hernndez - saldaa._physica a _ * 388 * ( 2009 ) 2699 - 2704 . h. hernandez saldaa ._ traveling salesman problem .theory and applications_. d. davendra , ed .( india , intech , 2010 ) pp .283 - 298 .
solutions to the traveling salesman problem have been obtained with several algorithms . however , few studies have discussed the statistical distribution of lengths along the quasi - optimal path obtained . for a random set of cities such a distribution follows a rank 2 daisy model , but our analysis of actual distributions of cities does not show the characteristic quadratic growth of this daisy model . the role played by the initial city distribution is explored in this work . _ keywords : _ traveling salesman problem , city distribution , statistical properties .
_ planck _ ( ( * ? ? ?* tauber et al . 2010 ) , ( * ? ? ?* planck collaboration i. 2011 ) ) provides 8 full sky surveys at the frequencies of its lfi instrument ( 30 , 44 , and 70 ghz ) , and 5 full surveys at the 6 hfi frequencies in the range 100857 ghz .the satellite rotates with 1rpm around an axis kept fixed for a pointing period ( pid ) of about 50 minutes , then shifts along the ecliptic by 2. this keeps a point source near the ecliptic in the main beam ( 2 fwhm ) for about pids ( depending on frequency ) for each survey ; for sources near the ecliptic poles the coverage can be much larger . analysing the time information in planck data for a particular sky directionthus allows to search for variability in the sky signal .variability mapping is based on four - dimenional healpix ( ( * ? ? ? * grski etal . 2005 ) ) constructs called 4d - maps , which record for every sky pixel all contributions of a given detector at times and beam orientation , where the index refers to planck pids . to construct an average sky signal free from beam orientation effects, we use the _ artdeco _ beam deconvolution code ( ( * ? ? ?* keihnen & reinecke 2012 ) ) .a variability map is then a two - dimensional healpix map of a quantity \ ] ] where + is the number of entries for pixel in the 4d signal map , is the beam - reconvolved average 4d - map based on the _ artdeco _ map , is the detector white noise , and combines all instrumental fluctuations which factor on the signal ( e.g. , calibration , inaccuracies of the beam model ) .the definition of is motivated by the chernoff bound on the cdf , with for , and for .the distribution of x - values over a sky map is indicateive not only of variability , but also to incorrect estimations of noise and instrumental variations ( see fig .[ fig1 ] ) . , and detector noise as given in ( * ? ? ?* planck collaboration ii .( 2014 ) ) , thin dotted lines show the distribution expected for pure gaussian noise .the tail to large values of indicates the presence of true sky variability .a tail to large negative values of would indicate an overestimation of , an offset of the peak from an incorrect estimation of .,scaledwidth=80.0% ]as the analysis of time variations in the sky is essentially background - free , our method provides a way to extract time resolved planck fluxes down to a time resolution of a few hours .if the position of a variable point source is known , the residual 4d - map , , can be beam - deconvolved to the source position by a simple division and thus provide an estimate of the flux variation , , where is a _ beamfactor map _ expressing the measured flux of a unit emitter at the source position in a beam centered at pixel and time .this method is used , e.g. , to extract time - resolved planck fluxes for co - eval monitoring of blazars with the f - gamma program ( ( * ? ? ?* fuhrmann et al . 2007 ) , ( * ? ? ?* rachen et al . 2015 ) ) .our analysis tools work on unified data structures , which ensures that all methods for variability analysis and flux extraction are applicable to lfi and hfi data in the same way .interfaces for intitial 4d - mapping are in place at both the lfi and hfi dpc .we expect that all planck frequencies will be analysed for variability , and science results prepared for publication well before the planck legacy release .
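the per - pixel statistic is only partially legible in the text above , so the following sketch merely illustrates the general idea of a variability measure of this kind : a noise - weighted sum of squared residuals between the time samples and the beam - reconvolved average , recentred and rescaled so that pure gaussian noise gives values of order unity . the precise planck definition is not reproduced here ; the functional form , the noise model and all names are assumptions .

```python
import numpy as np

def variability_statistic(samples, model, sigma_noise, sigma_instr=0.0):
    """Toy per-pixel statistic: squared residuals between time samples d_it
    and the reconvolved average s_it, weighted by the total variance,
    recentred by N/2 and scaled by sqrt(N) so that pure Gaussian noise gives
    values of order unity (an assumed form, not the Planck pipeline)."""
    d = np.asarray(samples, dtype=float)
    s = np.asarray(model, dtype=float)
    var = sigma_noise ** 2 + sigma_instr ** 2
    n = d.size
    chi2 = np.sum((d - s) ** 2) / (2.0 * var)
    return (chi2 - n / 2.0) / np.sqrt(n)

rng = np.random.default_rng(1)
n, sigma = 200, 1.0
model = np.zeros(n)
quiet = rng.normal(0.0, sigma, n)                           # pure noise pixel
flaring = quiet + np.where(np.arange(n) > 150, 5.0, 0.0)    # transient near the end
print(variability_statistic(quiet, model, sigma),
      variability_statistic(flaring, model, sigma))
```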
the sky is full of variable and transient sources on all time scales , from milliseconds to decades . _ planck _ s regular scanning strategy makes it an ideal instrument to search for variable sky signals in the millimetre and submillimetre regime , on time scales from hours to several years . a precondition is that instrumental noise and systematic effects , caused in particular by non - symmetric beam shapes , are properly removed . we present a method to perform a full sky blind search for variable and transient objects at all planck frequencies .
simulations of nano scale liquid systems can provide insight into many naturally - occurring phenomena , such as the action of proteins that mediate water transport across biological cell membranes .they may also facilitate the design of future nano devices and materials ( e.g. high - throughput , highly selective filters or lab - on - a - chip components ) .the dynamics of these very small systems are dominated by surface interactions , due to their large surface area to volume ratios. however , these surface effects are often too complex and material - dependent to be treated by simple phenomenological parameters or by adding ` equivalent ' fluxes at the boundary .direct simulation of the fluid using molecular dynamics ( md ) presents an opportunity to model these phenomena with minimal simplifying assumptions .some md fluid dynamics simulations have been reported , but md is prohibitively computationally costly for simulations of systems beyond a few tens of nanometers in size .fortunately , the molecular detail of the full flow - field that md simulations provide is often unnecessary ; in liquids , beyond 510 molecular diameters ( for water ) from a solid surface the continuum - fluid approximation is valid and the navier - stokes equations with bulk fluid properties may be used .hybrid simulations have been proposed to simultaneously take advantage of the accuracy and detail provided by md in the regions that require it , and the computational speed of continuum mechanics in the regions where it is applicable .an example application of this technique is shown schematically in figure [ figure_postsandnanochannel ] , where a complex molecule is being electrokinetically transported into a nanochannel for separation and identification . only the complex molecule ,its immediately surrounding solvent molecules , and selected near - wall regions require an md treatment ; the remainder of the fluid ( comprising the vast majority of the volume ) may be simulated by continuum mechanics .a hybrid simulation would allow the effect of different complex molecules , solvent electrolyte composition , channel geometry , surface coatings and electric field strengths to be analysed , at a realistic computational cost . in order to produce a useful , general simulation tool for hybrid simulations ,the md component must be able to model complex geometrical domains .this capability does not exist in currently available md codes : domains are simple shapes , usually with periodic boundaries .the most important , computationally demanding , and difficult aspect of any md simulation is the calculation of intermolecular forces .this paper describes an algorithm that is capable of calculating pair force interactions in arbitrary , unstructured mesh geometries that have been parallelised for distributed computing by spatial decomposition .the conventional method of md force evaluation in distributed parallel computation is to use the cells algorithm to build ` neighbour lists ' for interacting pairs with the ` replicated molecule ' method of providing interactions across periodic boundaries and interprocessor boundaries , where the system has been spatially partitioned . 
the spatial location of molecules in md is dynamic , and hence not deducible from the data structure that contains them .a neighbour list defines which pairs of molecules are within a certain distance of each other , and as such need to interact via intermolecular forces ._ neighbour lists are unsuitable _ when considering systems of arbitrary geometry , that may have been divided into irregular and complex mesh segments using standard mesh partitioning techniques ( see , for example , figure [ figure_testmolconfig_complex_bw ] ) for two reasons : 1 .* interprocessor molecule transfers : * a molecule may cross an interprocessor boundary at any point in time ( even part of the way through a timestep ) , at which point it should be deleted from the processor it was on and an equivalent molecule created on the processor on the other side of the boundary . given that neighbour lists are constructed as lists of array indices , references or pointers to the molecule s location in a data structure , deleting a molecule would invalidate this location and require searching to remove all mentions of it .likewise , creating a molecule would require the appropriate new pair interactions to be identified .clearly this is not practical .it is conventional to allow molecules to ` stray ' outside of the domain controlled by a processor and carry out interprocessor transfers ( deletions and creations ) during the next neighbour list rebuild .this is only possible when the spatial region associated with a processor can be simply defined by a function relating a position in space to a particular cell on a particular processor ( i.e. a uniform , structured mesh , representing a simple domain ) . in a geometry where the space in question is defined by a collection of individual cells of arbitrary shape ,this is not possible .for example the location the molecule has ` strayed ' to may be on the other side of a solid wall on the neighbour processor , or across another interprocessor boundary . for the molecule to end up unambiguously in the correct place, interprocessor transfers must happen as molecules cross a face .* spatially resolved flow properties : * md simulations used for flow studies must be able to spatially resolve fluid mechanic and thermodynamic fields .this is achieved by accumulating and averaging measurements of the properties of molecules in individual cells of the same mesh that defines the geometry .if a molecule is allowed to stray outside of the domain controlled by a processor , as above , then it would not be unambiguous and automatic which cell s measurement the molecule should contribute to .while both of these problems could be mitigated by , for example , working out which cell a molecule outside the domain should be in on another processor and sending its information , doing so would result in an inelegant and inflexible arrangement with each additional simulation feature ( i.e. 
a different measured variable or class of intermolecular potential ) requiring special treatment .we have , therefore , developed a new algorithm which is of comparable computational cost to neighbour lists , but designed to be powerful and generic for simulation in arbitrary geometries .as will be shown , neighbour lists also have some unfavourable features that could be improved upon .when parallelising an md calculation , the spatial domain of the simulation is decomposed and each processor is given responsibility for a single region .molecules are , of course , able to cross the boundaries between these regions and therefore need to be communicated from one processor to the next. processors also communicate when carrying out intermolecular force calculations , in which molecules close to processor boundaries need to be replicated on their neighbours to provide interactions .this process is illustrated in figure [ figure_referredmolecules ] .periodic boundaries also require information about molecules that are not adjacent physically in the domain ( see figure [ figure_referredmoleculesperiodic ] ) , these required interactions can also be constructed by creating copies of molecules outside the boundary .it is possible to handle processor and periodic boundaries in exactly the same way , since they have the same underlying objective : molecules near to the edge of a region need to be copied either between processors or to other locations on the same processor at every timestep to provide interactions .this is a useful feature because decomposing a mesh for parallelisation will often turn a periodic boundary into a processor boundary .the issue is how to efficiently identify which molecules need to be copied , and to which location , because this set continually changes as the molecules move .the new arbitrary interacting cells algorithm ( aica ) we propose here is an extension of the conventional cells algorithm ( cca ) . in the cca ,a simple ( usually cuboid ) simulation domain is subdivided into equally sized cells . for computational and theoretical reasons, intermolecular potentials do not extend to infinity ; they are assigned a cut - off radius , , beyond which they are set to zero .the minimum dimension of the cca cells must be greater than , so that all molecules in a particular cell interact with all other molecules in their own cell and with those in their nearest neighbour cells ( i.e. those they share a face , edge or vertex with 26 in 3d ) .aica uses the same type of mesh as would be used in computational fluid dynamics ( cfd ) to define the geometry of a region ._ there are no restrictions on cell size , shape or connectivity ._ each cell has a unique list of other cells that it is to interact with , this list is known as the direct interaction list ( dil ) for the cell in question ( ciq ) .it is constructed by searching the mesh to create a set of cells that have at least one vertex within a distance of from any of the vertices of the ciq , see figure [ figure_interactingcellidentification ] .the dils are established prior to the start of simulation and are valid throughout because the spatial relationship of the cells is fixed , whereas the set of molecules they contain is dynamic . 
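the construction of the direct interaction lists described above reduces to a vertex - to - vertex distance test between cells . the sketch below assumes each cell is given simply as a list of vertex coordinates ; the all - pairs o(c^2 ) search and the names are simplifications ( an actual implementation would use the mesh connectivity to restrict the search ) .

```python
import itertools

def build_direct_interaction_lists(cell_vertices, r_cut, guard=0.0):
    """cell_vertices: list indexed by cell id, each entry a list of (x, y, z)
    vertex coordinates.  Cell j is added to the DIL of cell i (i < j only, so
    pair interactions are not double counted) when any vertex of i lies within
    r_cut + guard of any vertex of j."""
    rcut2 = (r_cut + guard) ** 2
    dil = [[] for _ in cell_vertices]
    for i, j in itertools.combinations(range(len(cell_vertices)), 2):
        in_range = any(
            (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2 <= rcut2
            for ax, ay, az in cell_vertices[i]
            for bx, by, bz in cell_vertices[j]
        )
        if in_range:
            dil[i].append(j)     # j goes on i's list, i does not go on j's
    return dil

# two unit cubes sharing a face, plus one cube far away
cube = [(x, y, z) for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]
cells = [cube,
         [(x + 1.0, y, z) for x, y, z in cube],
         [(x + 10.0, y, z) for x, y, z in cube]]
print(build_direct_interaction_lists(cells, r_cut=1.2))   # [[1], [], []]
```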
in a similar way to the cca , at every timestepa molecule in a particular cell calculates its interactions with the other molecules in that cell and consults the cell s dil to find which other cells contain molecules it should interact with .information is required to be maintained stating which cell a molecule is in this is straightforward and computationally cheap .the construction of the dils and accessing molecules is done in such a way as to not double - count interactions , similarly to the cca .for example , if cells a and b need to interact , cell b will be on cell a s dil , but cell a will not be on cell b s dil .when a molecule in cell a interacts with one in cell b the molecule in cell b will receive the inverse of the force vector calculated to be added to the molecule in cell a , because the pair forces are reciprocal , i.e. .it is possible in rare cases for small errors to be caused by this algorithm : small slices of volume in cells may not be identified for interaction ( see figure [ figure_errorsintwistedmeshes ] ) .the errors introduced will be slight because the relative volume not accounted for will be very small : in figure [ figure_errorsintwistedmeshes ] , only when molecules are in both the indicated regions of each cell will either miss out on an interaction , and the intermolecular potential will be small at this distance . a small guard distance could be added to when constructing the dil to reduce this error . this is less of a problem in good quality meshes , e.g. hexahedral rather than tetrahedral cells . replicated molecule parallelisation and periodic boundaries are handled in the same way using _ referred cells _ ( see figure [ figure_interactingcellidentification ] ) and _ referred molecules_. referred molecule : : : a copy of a real molecule that has been placed in a region outside a periodic or processor boundary in order to provide the correct intermolecular interaction with molecules inside the domain .a referred molecule holds only its own position and i d ( i.e. identification of which type of molecule it is for heterogenous simulations ) .referred molecules are created and discarded at each timestep , and do not report any information back to their source molecules . 
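the per - timestep force loop then follows from the dils : molecules interact with the other molecules in their own cell and with the molecules in the cells on that cell 's dil , and each pair force is applied with opposite sign to the partner molecule . the lennard - jones potential , the data layout and the absence of referred cells in this sketch are simplifying assumptions .

```python
import itertools
import numpy as np

def lj_force(rij, epsilon=1.0, sigma=1.0):
    """Lennard-Jones force on molecule i due to j, for separation vector rij = ri - rj."""
    r2 = np.dot(rij, rij)
    sr6 = (sigma ** 2 / r2) ** 3
    return 24.0 * epsilon * (2.0 * sr6 ** 2 - sr6) / r2 * rij

def accumulate_forces(positions, cell_of, dil, n_cells, r_cut):
    """positions: (N, 3) array; cell_of[m] is the cell index of molecule m."""
    members = [[] for _ in range(n_cells)]
    for m, c in enumerate(cell_of):
        members[c].append(m)
    forces = np.zeros_like(positions)
    rcut2 = r_cut ** 2

    def add_pair(a, b):
        rij = positions[a] - positions[b]
        if np.dot(rij, rij) < rcut2:
            f = lj_force(rij)
            forces[a] += f          # force on a ...
            forces[b] -= f          # ... and its reciprocal on b

    for c in range(n_cells):
        for a, b in itertools.combinations(members[c], 2):   # same-cell pairs
            add_pair(a, b)
        for d in dil[c]:                                     # pairs with DIL cells
            for a in members[c]:
                for b in members[d]:
                    add_pair(a, b)
    return forces

positions = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [1.3, 0.9, 0.0]])
cell_of = [0, 1, 1]
dil = [[1], []]          # cell 1 is on cell 0's list, not vice versa
print(accumulate_forces(positions, cell_of, dil, n_cells=2, r_cut=2.5))
```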
therefore if molecule _j _ on processor 1 needs to interact with molecule _k _ on processor 2 , a separate referred molecule will be created on each processor .referred cell : : : referred cells define a region of space and hold a collection of referred molecules .each referred cell knows + * which real cell in the mesh ( on which processor ) is its source ; * the required transformation to refer the positions and orientations of the real molecules in the source cell to the referred location ( see below for the details of how cells and molecules are referred ) ; * the positions of all of its own vertices .these are the positions of the vertices of the source cell which have been transformed by the same referring transform as the referred molecules it contains ; * which real cells are in range of this particular referred cell and hence require intermolecular interactions to be calculated .this is constructed once at the start of the simulation in the same way as the dil for real - real cell interactions the vertices of the real cells in the portion of mesh on the same processor as the referred cell are searched , those with at least one vertex in range of any vertex of the referred cell are found .a spatial transformation is required to refer a cell across a periodic or processor boundary .figure [ figure_cellreferringtransform ] shows the most general case of a cell being referred across a separated , non - parallel boundary , where , [ cols= " > , < " , ] all cell combinations have been created once , i.e. when , would produce the 21 combination , which is already identified as 12 .this non double counting procedure means that some cells will have few or no other cells in their dil , however , as long as the correct cell pair is created in one of the cell s dil , this is not an issue .it is also designed so that a cell does not end up on its own dil , the interactions between molecules in the same cell are handled separately . _ coupled patches _ are the basis of periodic and interprocessor communication for creating referred cells .patches , in general , are a collection of cell faces representing a mesh boundary of some description they may provide solid surfaces , inlets , outlets , symmetry planes , periodic planes , or interprocessor connections . coupled patches provide two surfaces ; whatever exits one enters the other and vice versa .two types are used in aica : * _ periodic patches _ on a single processor are arranged into two halves , each half representing one of the coupled periodic surfaces . when a molecule crosses a face on one surface , it is wrapped around to the corresponding position on the corresponding face on the other surface .* _ processor patches _ provide links between portions of the mesh on different processors , one half of the patch is on each processor . when a molecule crosses a face on one surface , it is moved to the corresponding position on the corresponding face on the other surface , on the other processor . decomposing a mesh for parallelisation will often require a periodic patch to be changed to a processor patchthe surfaces of coupled patches may have any relative orientation , and may be spatially separated as long as the face pairs on each surface correspond to each other .this allows them to also be used as a ` recycling ' surface in a flow simulation whatever exits from an outlet is reintroduced at an inlet , with the option of scaling the molecule s properties , e.g. temperature and pressure . 
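a referring transformation , as described , has a tensor ( rotation ) part and a vector ( offset ) part that are applied to the molecule positions and to the vertices of the source cell . the sketch below assumes the transform acts as p ' = r p + v ; the composition convention , the class layout and the names are assumptions .

```python
import numpy as np

class ReferredCell:
    """A copy of a source cell placed across a periodic or processor boundary.
    The transform is assumed to act as p' = R @ p + v on positions and vertices."""
    def __init__(self, source_vertices, rotation, offset):
        self.R = np.asarray(rotation, dtype=float)
        self.v = np.asarray(offset, dtype=float)
        self.vertices = self.refer(np.asarray(source_vertices, dtype=float))

    def refer(self, points):
        # apply the same tensor/vector pair to any set of points from the source cell
        return points @ self.R.T + self.v

# a plain periodic boundary: identity rotation, shift by one box length in x
box_shift = np.array([10.0, 0.0, 0.0])
source_vertices = np.array([[9.0, 0.0, 0.0], [9.5, 1.0, 0.0], [9.2, 0.5, 1.0]])
cell = ReferredCell(source_vertices, np.eye(3), -box_shift)   # referred to x near -1
source_molecules = np.array([[9.1, 0.2, 0.3], [9.8, 0.9, 0.1]])
print(cell.refer(source_molecules))
```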
in the decomposed portion of the mesh on each processor, each processor and periodic patch should be split into segments , such that : * faces on a processor that were internal to the mesh prior to decomposition end up on a segment ; * faces on a processor patch that were on separate periodic patches on the undecomposed mesh end up on different segments .these segments are further split such that faces that were on different halves of the periodic patch on the undecomposed mesh end up on different segments ; * faces on different halves of a periodic patch end up on different segments .each segment must produce a single vector / tensor transformation pair ( see section [ text_cellpositionreferringtransformations ] ) which will be applied to all cells referred across it . for each patch segment: 1 . find all real and existing referred cells in range ( with at least one vertex within a distance of ) of at least one vertex of the faces comprising the patch segment .2 . refer or re - refer this set of cells across the boundary defined by the patch segment using its transformation , see section [ text_cellpositionreferringtransformations ] . in the case of a segment of periodic boundary , this creates new referred cells on the same processor . for a segment of a processor patch ,these cells are communicated to , and created on the neighbouring processor . before creating any new referred cell a checkis carried out to ensure that it is not * a duplicate referred cell , one that has been created already by being referred across a different segment ; * a referred cell trying to be duplicated on top of a real cell , i.e. a cell being referred back on top of itself . + if the proposed cell for referral would create a duplicate of an existing referred cell , or end up on top of a real cell , then it is not created . to be a duplicate , the source processor , source cell and the vector part of the transformation ( see section [ text_cellpositionreferringtransformations ] )must be the same for the two cells ( note , the vector part of transformation for a real cell is zero by definition ) .a single run of these steps will usually not produce all of the required referred cells .they are repeated until no processor adds a referred cell in a complete evaluation of all segments , meaning all possible interactions are accounted for . in iterations after the first , in step 1 it is enough to only search for referred cells in range of the faces on the patch segment , because the real cells will not have changed , and would all be duplicates .the final configuration of referred cells does not depend on the order of patch segment evaluation .appendix [ text_exampleconstructionofreferredcells ] contains an example of the results of the cell referring process .figures [ figure_parallelgridpolyhedra1step_bw ] and [ figure_parallelgridpolyhedra2step_bw ] show the start and end points of the cell referring operation on a mesh that has been decomposed to run in parallel .the example is in 2d for clarity but the process is exactly the same in 3d . in this examplethe number of referred cells created far exceeds the number of real cells , which would lead to much costly interprocessor communication .this is because the mesh has been made deliberately small relative to to demonstrate as many features of the algorithm as possible and to be practical to understand . 
in realistic systems , the mesh portions would be significantly bigger than and the referred cells would form a relatively thin ` skin ' around each portion . decomposition of the mesh should preferably be carried out to minimise the number of cells that need to be referred , and to ensure that the vast majority of the intermolecular interactions happen between real - real molecule pairs ; in this way the communication cost is minimised . ( figure captions : ) some of those molecules may be on a periodic image of the system , and as such , in implementation terms , will reside on the other side of the domain ; serial calculations in simple geometries typically use the minimum image convention , but this is not suitable for parallelisation . the cells that interact with the ciq ( dark ) are shaded in grey ; the required referred cells are hatched in alternate directions according to which boundary they have been referred across ; realistic systems would be significantly larger compared to than shown here . an arc of radius drawn from the indicated vertex on cell a intersects a face of cell b , therefore molecules near this vertex should interact with molecules in the shaded region ; similarly , an arc of radius drawn from the indicated face on cell b encompasses the small shaded region of cell a ; the cell searching algorithm will not , however , identify cells a and b as needing to interact because none of their vertices lie within of each other . referred cells on each processor : a circle of radius drawn from any vertex of a real cell would be fully encompassed by other real or referred cells , thereby providing all molecules in that cell with the appropriate intermolecular interactions , either across a periodic boundary or from another processor ; note that many cells are referred several times ; for example , the source cell marked with on processor 2 is referred to processors 0 and 1 twice on each .
a new algorithm for calculating intermolecular pair forces in molecular dynamics ( md ) simulations on a distributed parallel computer is presented . the arbitrary interacting cells algorithm ( aica ) is designed to operate on geometrical domains defined by an unstructured , arbitrary polyhedral mesh , which has been spatially decomposed into irregular portions for parallelisation . it is intended for nano scale fluid mechanics simulation by md in complex geometries , and to provide the md component of a hybrid md / continuum simulation . aica has been implemented in the open - source computational toolbox openfoam , and verified against a published md code . _ keywords : _ molecular dynamics , nano fluidics , hybrid simulation , intermolecular force calculation , parallel computing . _ pacs : _ 31.15.qg , 47.11.mn
long - term numerical integrations play an important role in our understanding of the dynamical evolution of many astrophysical systems ( see , e.g. , duncan , these proceedings , for a review of solar - system integrations ) .an essential tool for long - term integrations is a fast and accurate integration algorithm .symplectic integration algorithms ( sias ) have become popular in recent years because the newtonian gravitational -body problem is a hamiltonian problem and sias enforce certain conservation laws that are intrinsic to hamiltonian systems ( see sanz - serna & calvo 1994 for a general introduction to sias ) . for an autonomous hamiltonian system ,the equations of motion are where is the explicitly time - independent hamiltonian , are the canonical phase - space coordinates , is the poisson bracket , and ( ) is the number of degrees of freedom .the formal solution of eq.([heq ] ) is if the hamiltonian has the form , where and are separately integrable , we can devise a sia of constant timestep by approximating as a composition of terms like and .for example , a second - order sia is for the gravitational -body problem , we can write , where and are the kinetic and potential energies . then the second - order sia eq.([sia2 ] ) becomes where and ; this is the familiar leapfrog integrator . for solar - system type integrations ,a central body ( the sun ) is much more massive than the other bodies in the system , and it is better to write , where is the part of the hamiltonian that describes the keplerian motion of the bodies around the central body and is the part that describes the perturbation of the bodies on one another .symplectic integrators using this decomposition of the hamiltonian were introduced by wisdom & holman ( 1991 ) , and they are commonly called mixed variable symplectic ( mvs ) integrators .the constant timestep sias have the following desirable properties : \(1 ) as their names imply , sias are symplectic , i.e. , they preserve .\(2 ) for sufficiently small , sias solve almost exactly a nearby `` surrogate '' autonomous hamiltonian problem with .for example , the second - order sia in eq.([sia2 ] ) has consequently , we expect that the energy error is bounded and the position ( or phase ) error grows linearly .\(3 ) many sias ( e.g. , eq.[[sia2 ] ] ) are time reversible .note , however , that there are algorithms that are symplectic but not reversible .1 shows the energy error of an integration of the kepler orbit using the constant timestep leapfrog integrator eq.([lf ] ) .( in this and all subsequent figures , the orbits are initially at the apocenter , where and are the semi - major axis and eccentricity . )1 illustrates that there is no secular drift in .for problems with large variations in timescale ( due to close encounters or high eccentricities ) , it is desirable to use a variable timestep . a common practice is to set the timestep using the phase - space coordinates at the beginning of the timestep : . if the sias discussed in 1are implemented with this simple variable timestep scheme , they are still symplectic if we assume that the sequence of timesteps determined for a particular initial condition are also used to integrate neighboring initial conditions ( see skeel & gear 1992 for another point of view ) .however , tests have shown that this and similar simple variable timestep schemes [ and also destroy the desirable properties of the integrators ( e.g. , gladman , duncan , & candy 1991 ; calvo & sanz - serna 1993 ) . 
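the bounded energy error of the constant timestep leapfrog is easy to reproduce for a kepler orbit . the sketch below uses gm = 1 and an orbit started at apocenter with arbitrarily chosen semi - major axis , eccentricity and timestep ; these choices and the names are illustrative .

```python
import numpy as np

def kepler_accel(x):
    r = np.linalg.norm(x)
    return -x / r ** 3                     # GM = 1

def leapfrog(x, v, h, n_steps):
    """Kick-drift-kick leapfrog for the Kepler problem, recording the energy."""
    energies = []
    for _ in range(n_steps):
        v = v + 0.5 * h * kepler_accel(x)  # half kick
        x = x + h * v                      # drift
        v = v + 0.5 * h * kepler_accel(x)  # half kick
        energies.append(0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(x))
    return x, v, np.array(energies)

# orbit with a = 1, e = 0.5, started at apocentre
a, e = 1.0, 0.5
x0 = np.array([a * (1 + e), 0.0])
v0 = np.array([0.0, np.sqrt((1 - e) / (a * (1 + e)))])
E0 = 0.5 * np.dot(v0, v0) - 1.0 / np.linalg.norm(x0)
x, v, E = leapfrog(x0, v0, h=0.01, n_steps=20000)
print(np.max(np.abs((E - E0) / E0)))       # stays bounded, no secular drift
```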
in fig .2_a _ we show the energy error of an integration with and .although the error is initially smaller than that in fig . 1 ( the integrations shown in figs . 1 and 2 use nearly the same number of timesteps per orbit ) , it shows a linear drift . in fig.2_a_ we also show the errors for two neighboring initial conditions ( ) .they were integrated using the timesteps determined for the orbit .note that these errors grow even faster .the degradation in performance is due to the fact that the properties ( 2 ) and ( 3 ) listed in 1 are no longer true .the algorithm is not time reversible because the timestep depends only on the coordinates at the beginning of the timestep . since depends explicitly on the timestep ( see , e.g. , eq.[[herr ] ] ) , a variable timestep changes the surrogate hamiltonian problem that the integrator is solving from step to step .thus the solution from to after steps is not in general the solution of a nearby autonomous hamiltonian problem .hut , makino , & mcmillan ( 1995 ; see also funato et al .1996 ; hut , these proceedings ) pointed out that the performance of a variable timestep integrator can be improved if time reversibility is restored by choosing the timestep in a time symmetric manner : with .for example , /2 r < r_i$. } \end{array}\right.\ ] ] an alternative decomposition uses where . unlike the forces suggested by skeel & biesiadecki ( 1994 ) , these forces have continuous first derivatives and decrease rapidly ( or exactly ) to zero at ( see fig .we shall not provide the details here , but we can understand why these properties are important from an analysis of . in fig .4 we show the energy error of two integrations using the multiple timescale symplectic integrator with . as expected , there is no secular drift in .note also that , with the chosen integration parameters ( ) , the maximum error is almost independent of the pericentric distance ( which changes by in the two cases shown ) .this again agrees with the expectation from an analysis of .one of the goals of this study is to develop a variable timestep integrator for solar system integrations .we have developed a second - order multiple timescale mvs integrator based on the algorithm described in this section ( see levison & duncan 1994 for another approach ) .we are currently testing this integrator in detail .initial results indicate that the integrator is fast and accurate and has all the desirable properties of the constant timestep symplectic integrators .calvo , m. p. , & sanz - serna , j. m. 1993 , siam j. sci ., 14 , 936 funato , y. , hut , p. , mcmillan , s. , & makino , j. 1996 , , 112 , 1697 gladman , b. , duncan , m. , & candy , j. 1991 , celest .astr . , 52 , 221 hut , p. , makino , j. , & mcmillan , s. 1995 , , 443 , l93 levison , h. f. , & duncan , m. j. 1994 , icarus , 108 , 18 macevoy , w. , & scovel , j. c. 1994 , preprint saha , p. , & tremaine , s. 1994 , , 108 , 1962 sanz - serna , j. m. , & calvo , m. p. 1994, numerical hamiltonian problems ( london : chapman & hall ) skeel , r. d. , & biesiadecki , j. j. 1994 , ann . numer . math . , 1 , 191 skeel , r. d. , & gear , c. w. 1992 , physica d , 60 , 311 wisdom , j. , & holman , m. 1991 , , 102 , 1528
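the time symmetric timestep choice mentioned above , in which the step actually taken is the average of the timestep function evaluated at the beginning and at the end of the step , can be sketched as a small fixed - point iteration wrapped around one leapfrog step . the timestep function proportional to r^{3/2 } , the number of iterations and the test orbit are assumptions , and this is not the multiple timescale mvs integrator developed in the paper .

```python
import numpy as np

def kepler_accel(x):
    r = np.linalg.norm(x)
    return -x / r ** 3                                    # GM = 1

def leapfrog_step(x, v, h):
    v = v + 0.5 * h * kepler_accel(x)
    x = x + h * v
    v = v + 0.5 * h * kepler_accel(x)
    return x, v

def tau(x, eta=0.02):
    return eta * np.linalg.norm(x) ** 1.5                 # a local dynamical time

def symmetric_step(x, v, n_iter=5):
    """Solve h = (tau(x_i) + tau(x_{i+1})) / 2 by fixed-point iteration:
    guess h from the current state, take a trial step, update h, repeat."""
    h = tau(x)
    for _ in range(n_iter):
        x_trial, _ = leapfrog_step(x, v, h)
        h = 0.5 * (tau(x) + tau(x_trial))
    return leapfrog_step(x, v, h)

# eccentric Kepler orbit started at apocentre (a = 1, e = 0.9)
a, e = 1.0, 0.9
x = np.array([a * (1 + e), 0.0])
v = np.array([0.0, np.sqrt((1 - e) / (a * (1 + e)))])
E0 = 0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(x)
for _ in range(5000):
    x, v = symmetric_step(x, v)
E = 0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(x)
print(abs((E - E0) / E0))
```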
symplectic integration algorithms have become popular in recent years in long - term orbital integrations because these algorithms enforce certain conservation laws that are intrinsic to hamiltonian systems . for problems with large variations in timescale , it is desirable to use a variable timestep . however , naively varying the timestep destroys the desirable properties of symplectic integrators . we discuss briefly the idea that choosing the timestep in a time symmetric manner can improve the performance of variable timestep integrators . then we present a symplectic integrator which is based on decomposing the force into components and applying the component forces with different timesteps . this multiple timescale symplectic integrator has all the desirable properties of the constant timestep symplectic integrators .
various phenomenological models developed over the years have raised the possibility that nuggets of strange quark matter ( sqm ) , called strangelets , are present in cosmic rays .such strangelets will have nearly equal numbers of up , down and strange quarks and so will have an anomalous ratio ( ) , where is the charge and is the baryon number , compared to ordinary nuclei .one particular model for strangelet propagation through the earth s atmosphere has strongly hinted at the possible presence of low energy ( few mev/ n ) strangelets at high mountain altitudes , although with a very low flux .also , over the years , various experimental groups have reported the presence of particles with anomalous ratios in cosmic rays . but none of those groups have been able to make any definitive claim because of lack of statistics .so the search for strangelets in cosmic rays remains an active area of research .one of the best ways to look for low energy strangelets with very low fluxes at high mountain altitudes is through the deployment of very large area arrays of nuclear track detectors ( ntds ) .such passive detector arrays offer several advantages over many other detector types .they are relatively inexpensive to deploy , easy to maintain and do not require any power for their operation , a fact which offers particular advantages when it comes to the deployment of large arrays at very remote high altitude locations . also because of their high intrinsic thresholds of registration , some ntds offer a natural way to suppress the huge low- background ( neutron recoil tracks , atmospheric radon alpha tracks etc . )expected in any such experiment .nuclear track detectors ( ntds ) like cr-39 , makrofol etc . have been used for charged particle detection for many years .we plan to use a particular brand of commercially available polymer , identified as polyethylene terephthalate ( pet ) , as ntd in the search for exotic particles in cosmic rays , through the deployment of large area arrays of pet films at high mountain altitudes .pet was found to have a much higher detection threshold ( , where is the charge and the measure of the velocity of the impinging particle ) compared to other commercially available ntds like cr-39 , makrofol etc .( lying in the range 6 - 60 ) .this makes pet particularly suitable for low energy rare event search in cosmic rays .before any new material can be employed as a detector , it needs to be properly characterized and calibrated . with that aim ,systematic studies were carried out on pet to determine its ideal etching condition and also to ascertain its charge response to various ions using accelerators as well as natural radioactive sources . a calibration curve for pet ( vs. , where is the specific energy loss , and are the track and the bulk etch rates respectivelywhile their ratio is the reduced etch rate or charge response ) utilizing , , , ions was obtained .it was then updated with additional data points corresponding to , and ions obtained from the rex - isolde facility at cern .also , studies were carried out to determine the charge and energy resolution that could be achieved with pet .all these studies have firmly established pet as a very efficient detector of heavily ionizing particles with a detection threshold much higher than the other commercially available detector material cr-39 which is in widespread use today . 
in addition to such calibration experiments ,pilot studies were carried out at different high altitude locations where pet , as well as standard cr-39 detectors , were given open air exposures for durations ranging from a few months to two years .the goal was to study how the detector behaviour changes with exposure to harsh environmental conditions and also to survey the local radiation background .the results of such studies are presented in this paper .the sites chosen for these studies are darjeeling in eastern himalayas , hanle in northern himalayas and ooty in the nilgiri hills .table [ table : parameters ] lists the altitudes ( above mean sea level ) and some other parameters associated with those sites .one reason for the choice of these sites is the existence of scientific research facilities there , which is going to make the eventual deployment and maintenance of any such large area array easier .also these facilities have records of climatic conditions at those sites going back many years and they continue to collect and maintain such records .stacks containing three pet films of a4 size ( ) and thickness as well as cr-39 films of size and thickness were mounted on perspex stands and given open air exposures for durations ranging from three months to two years at those sites .fig [ detectors ] shows pet films as well as smaller pieces of cr-39 mounted on perspex stands .the stands in turn are fitted inside a box for convenience of transport . a4 size pet films as well as smaller pieces of cr-39 mounted on perspex stands . ].[table : parameters]some parameters of the sites where ntds were given open air exposures [ cols="^,^,^,^,^ " , ] fig. [ angle ] , fig .[ diameter ] , fig .[ tracklength ] , fig .[ vtvb ] give distribution of the angle of incidence , minor axis diameter , track length and values for the tracks recorded on cr-39 at darjeeling , ooty and hanle .distribution of the angle of incidence of tracks recorded on cr-39 at darjeeling , ooty and hanle . ]distribution of the minor axis diameter of tracks recorded on cr-39 at darjeeling , ooty and hanle ( samples etched for 4 h ) . ] distribution of the length of tracks recorded on cr-39 at darjeeling , ooty and hanle ( samples etched for 4 h ) . ]distribution of the of the values of tracks recorded on cr-39 at darjeeling , ooty and hanle . ]from the studies conducted so far , high threshold pet seem to be a very good choice as a detector material for the planned rare event search .also hanle appears to be the place most suitable as a site for the setup of the large area array .the site is basically a cold desert with robust existing scientific research facilities for astronomical research. we will continue to conduct further studies along these lines as well as begin the process of large scale search for strangelets using large area ntd arrays .the authors sincerely thank staff members of iia at hanle and of tifr at ooty for their help in setting up the detectors at those places .the authors also thank mr .sujit k. basu and mr .deokumar rai for technical assistance .the work is funded by irhpa ( intensification of research in high priority areas ) project ( ir / s2/pf-01/2011 dated 26.06.2012 ) of the science and engineering research council ( serc ) , dst , government of india , new delhi .99 j. madsen , phys .d 71 ( 2005 ) 014026 .s. biswas , j.n . de , p. s. joarder , s. raha , d. syam , physb 715 ( 2012 ) 30 .sayan biswas , j.n . de , partha s. joarder , sibaji raha , debapriyo syam , proc.indian natl.sci.acad . 
81 (2015) 1, 277.
S. Banerjee, S.K. Ghosh, S. Raha, D. Syam, Phys. Rev. Lett. 85 (2000) 1384.
T. Saito et al., Phys. Lett. 85 (2000) 1384.
B. Basu, S. Raha, S.K. Saha, S. Biswas, S. Dey, A. Maulik, A. Mazumdar, S. Saha, D. Syam, Astropart. Phys. 61 (2015) 88.
O. Adriani et al., Phys. Rev. Lett. (2015) 111101.
R.L. Fleischer, P.B. Price, R.M. Walker, Nuclear Tracks in Solids, University of California Press, 1975.
S.A. Durrani, R.K. Bull, Solid State Nuclear Track Detection: Principles, Methods and Applications, Pergamon Press, 1987.
B. Basu, S. Dey, B. Fischer, A. Maulik, A. Mazumdar, S. Raha, S. Saha, S.K. Saha, D. Syam, Radiat. Meas. 43 (2008) S95.
B. Basu, A. Mazumdar, S. Raha, S. Saha, S.K. Saha, D. Syam, Ind. J. Phys. 79 (2005) 279.
R. Bhattacharyya, S. Dey, S.K. Ghosh, A. Maulik, S. Raha, D. Syam, Nucl. Instr. and Meth. B 370 (2016) 63.
D. Bhowmik, S. Dey, A. Maulik, S. Raha, S. Saha, S.K. Saha, D. Syam, Nucl. Instr. and Meth. B 269 (2011) 197.
S. Dey, D. Gupta, A. Maulik, S. Raha, S.K. Saha, D. Syam, J. Pakarinen, D. Voulot, F. Wenander, Astropart. Phys. 34 (2011) 805.
S. Dey, A. Maulik, S. Raha, S.K. Saha, D. Syam, Nucl. Instr. and Meth. B 336 (2014) 163.
J.F. Ziegler, J.P. Biersack, The Stopping and Range of Ions in Matter (SRIM computer code), version 2003.26, 2003.
Various phenomenological models presented over the years have hinted at the possible presence of strangelets, nuggets of strange quark matter (SQM), in cosmic rays. One way to search for such rare events is to deploy large-area nuclear track detector (NTD) arrays at high mountain altitudes. Before the deployment of any such array can begin, a detailed study of the radiation background is essential, as is a proper understanding of the response of detectors exposed to extreme weather conditions. With that aim, pilot studies were carried out at several high-altitude locations in India: Darjeeling (2200 m a.m.s.l.), Ooty (2200 m a.m.s.l.) and Hanle (4500 m a.m.s.l.). Small arrays of CR-39 as well as high-threshold polyethylene terephthalate (PET) detectors were given open-air exposures for periods ranging from three months to two years. The findings of these studies are reported in this paper.
during the last years there has been great interest in applications of statistical physics to financial market dynamics .a variety of agent - based models have been proposed over the last few years to study financial market dynamics . in particular, spin models as the most popular models of statistical mechanics have been applied to describe the dynamics of traders in financial markets by several researchers .a particularly simple model of a stock market in the form of a spin model motivated by the ising model has been proposed recently , in order to simulate the dynamics of expectations in systems of many agents .the model introduces a new coupling that relates each spin ( agent ) to the global magnetization of the spin model , in addition to the ferromagnetic ( ising ) couplings connecting each spin to its local neighborhood .the global coupling effectively destabilizes local spin orientation , depending on the size of magnetization . the resulting frustration between seeking ferromagnetic order locally ,but escaping ferromagnetic order globally , causes a metastable dynamics with intermittency and phases of chaotic dynamics .in particular , this occurs at temperatures below the critical temperature of the ising model . while the model exhibits dynamical properties which are similar to the stylized facts observed in financial markets , a careful interpretation in terms of financial markets is still lacking .this is the main goal of this paper . in particular , treat the magnetization of the model as price signal which , however , is unnatural when deriving a logarithmic return of this quantity as these authors do . as a result ,small magnetization values cause large signals in the returns with an exponent of the size distribution different from the underlying model s exponent. let us here consider bornholdt s spin model in the context of a stock market with heterogeneous traders .the aim of this paper is ( i ) to interpret the magnetization of the spin model in terms of financial markets and to study the mechanisms that create bubbles and crashes , and ( ii ) to investigate the statistical properties of market price and trading volume .the new elements in the model studied in this paper are the explicit introduction of two groups of traders with different investment strategies , fundamentalists and interacting traders , as well as of a market clearing system that executes trading at matched book . given these conditions ,the market price is related to the sum of fundamental price and magnetization , and the trading volume is simply given by the magnetization .we also show that the model is able to explain the empirically observed positive cross - correlation between volatility and trading volume .finally , we observe that the model reproduces the well - known stylized facts of the return distribution such as fat tails and volatility clustering , and study volatilities at different time - scales .let us consider a stock market where a large stock is traded at price .two groups of traders with different trading strategies , _ fundamentalists _ and _ interacting traders _ , participate in the trading . the number of fundamentalists and the number of interacting traders are assumed to be constant .the model is designed to describe the stock price movements over short periods , such as one day . 
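For the sketches accompanying the following passages, a minimal state for such a market can be set up as below. All numerical values are placeholders chosen for illustration; the parameter values actually used in the study did not survive the extraction and should not be read off from this snippet.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 101                                  # the text later mentions a 101 x 101 lattice
S = rng.choice([-1, 1], size=(L, L))     # buy/sell attitudes of the interacting traders
n_int = S.size                           # number of interacting traders N
n_fund = 10_000                          # number of fundamentalists N_F (placeholder)
beta, J, alpha = 0.7, 1.0, 8.0           # inverse temperature, local and global couplings
a_coef, b_coef = 1.0, 1.0                # reaction strengths of the two trader groups
log_pf = np.log(100.0)                   # log of the (constant) fundamental price
```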
in the following, a more precise account of the decision making of each trader type is given .fundamentalists are assumed to have a reasonable knowledge of the fundamental value of the stock .if the price is below the fundamental value , a fundamentalist tends to buy the stock ( as he estimates the stock to be undervalued ) , and if the price is above the fundamental value , a fundamentalist tends to sell the stock ( then considered as an overvalued and risky asset ) .hence we assume that fundamentalists buying or selling order is given by : where is the number of fundamentalists , and parametrizes the strength of the reaction on the discrepancy between the fundamental price and the market price . during each time period , an interacting trader may choose to either buy or sell the stock , and is assumed to trade a fixed amount of the stock in a period of trading .interacting traders are labeled by an integer .the investment attitude of interacting trader is represented by the random variable and is defined as follows : if interacting trader is a buyer of the stock during a given period , then , otherwise he sells the stock and .now let us formulate the dynamics of the investment attitude of interacting traders in terms of the spin model .let us consider that the investment attitude of interacting trader is updated with a heat - bath dynamics according to where is the local field of the spin model , governing the strategic choice of the trader .let us consider the simplest possible scenario for local strategy changes of an interacting trader .we assume that the decision which an interacting trader makes is influenced by two factors , local information , as well as global information .local information is provided by the nearby interacting traders behavior . to be definite ,let us assume that each interacting trader may only be influenced by its nearest neighbors in a suitably defined neighborhood .global information , on the other hand , is provided by the information whether the trader belongs to the majority group or to the minority group of sellers or buyers at a given time period , and how large these groups are . the asymmetry in size of majority versus minority groupscan be measured by the absolute value of the magnetization , where the goal of the interacting traders is to obtain capital gain through trading .they know that it is necessary to be in the majority group in order to gain capital , however , this is not sufficient as , in addition , the majority group has to expand over the next trading period .on the other hand , an interacting trader in the majority group would expect that the larger the value of is , the more difficult a further increase in size of the majority group would be .therefore , interacting traders in the majority group tend to switch to the minority group in order to avert capital loss , e.g. to escape a large crash , as the size of the majority group increases .in other words , the interacting trader who is in the majority group tends to be a risk averter as the majority group increases . on the other hand , an interacting agent who is in the minority group tends to switch to the majority group in order to gain capital .an interacting agent in the minority group tends to be a risk taker as the majority group increases . to sum up ,the larger is , the larger the probability with which interacting traders in the majority group ( interacting traders in the minority group , respectively ) withdraw from their coalition . 
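The heat-bath rule itself, whose explicit formula did not survive the extraction, is commonly written for ±1 variables as in the sketch below; this standard form is assumed here rather than taken from the text.

```python
import numpy as np

def heat_bath_update(h_i, beta, rng):
    # Standard heat-bath rule for a +/-1 variable in a local field h_i:
    # the new value is +1 with probability 1 / (1 + exp(-2*beta*h_i)),
    # independently of its current value.
    return 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * beta * h_i)) else -1
```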
following , the local field containing the interactions discussed above is specified by with a global coupling constant .the first term is chosen as a local ising hamiltonian with nearest neighbor interactions and for all other pairs .we assume that the interacting - traders excess demand for the stock is approximated as let us leave the traders decision - making processes and turn to the determination of the market price .we assume the existence of a market clearing system . in the systema _ market maker _ mediates the trading and adjusts the market price to the market clearing values .the market transaction is executed when the buying orders are equal to the selling orders .the balance of demand and supply is written as + b\ ; n\ ; m(t ) = 0 .\label{eqn : a7}\ ] ] hence the market price and the trading volume are calculated as and using the price equation ( 8) we can categorize the market situations as follows : if , the market price is equal to the fundamental price .if , the market price exceeds the fundamental price ( _ bull _ market regime ) .if , the market price is less than the fundamental price ( _ bear _ market regime ) . using ( 8) , the logarithmic relative change of price , the so - called log - return , is defined as let us consider for a moment that only fundamentalists participate in trading .then in principle the market price is always equal to the fundamental price .this implies that the so - called _ efficient market hypothesis _ holds .following the efficient market model by then the fundamental price follows a random walk .since the continuous limit of a random walk is a gaussian process , the probability density of the log - return , defined as , is normal .for real financial data , however , there are strong deviations from normality . as we discuss here , including both fundamentalists and interacting traders to coexist in the market , offers a possible mechanism for excessive fluctuations such as bull markets and bear markets . to investigate the statistical properties of the price and the trading volume in the spin model of stock markets , we will assume for simplicity that the fundamental price is constant over time .in the new framework developed so far we see that the dynamics of the log - return corresponds to the linear change in absolute magnetization of the spin model .typical behavior of the such defined return as well as the magnetization is shown in figures 1(a ) and 1(b ) . here , a 101 * 101 lattice of the general version of the model as defined in eq .( 8) and with the spins updated according to ( 5 ) is shown .it is simulated at temperature with couplings and , using random serial and asynchronous heat bath updates of single sites . in figure 1(a )the intermittent phases of ordered and turbulent dynamics of the log - return are nicely visible .qualitatively , this dynamical behavior is similar to the dynamics of daily changes of real financial indices , as for example the dow jones index shown in figure 1(c ) . to some degree, these transitions of the return can be related to the magnetization in the spin model .figure 1 ( b ) illustrates that the bull ( bear ) market ( ) becomes unstable , and the transition from a metastable phase to a phase of high volatility occurs , when the absolute magnetization approaches some critical value . 
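To make the update and the market clearing concrete, the sketch below performs one sweep of the interacting traders and then computes the market price. Since the explicit equations were lost in extraction, it assumes a commonly quoted Bornholdt-type local field, a fundamentalist order proportional to the log-price deviation from the fundamental value, and a trading volume proportional to |m|; these functional forms and prefactors are illustrative assumptions, not necessarily those of the paper.

```python
import numpy as np

def market_step(S, beta, J, alpha, log_pf, a_coef, b_coef, n_fund, rng):
    # One sweep of the interacting traders followed by market clearing.
    # Assumed ingredients: local field h_i = J*sum_nn(S_j) - alpha*S_i*|m|,
    # fundamentalist order a*N_F*(ln p_f - ln p), interacting-trader order
    # b*N*m, so that  ln p = ln p_f + (b*N / (a*N_F)) * m.
    L = S.shape[0]
    n_int = S.size
    total = S.sum()
    for i in range(L):
        for j in range(L):
            m = total / n_int
            nn = (S[(i + 1) % L, j] + S[(i - 1) % L, j] +
                  S[i, (j + 1) % L] + S[i, (j - 1) % L])
            h = J * nn - alpha * S[i, j] * abs(m)
            new = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * beta * h)) else -1
            total += new - S[i, j]
            S[i, j] = new
    m = total / n_int
    log_p = log_pf + (b_coef * n_int / (a_coef * n_fund)) * m
    volume = abs(m)                      # trading volume taken ~ |m| here
    return m, log_p, volume
```

Collecting the log-price over successive sweeps, the log-return is the difference of consecutive values and is therefore proportional to the change in magnetization, consistent with the correspondence stated above.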
noting that trading volume is defined as , this suggests that also some critical trading volume exists near the onset of turbulent phases .this is in agreement with the empirical study by who found the empirical regularities : ( i ) positive correlation between the volatility and the trading volume ; ( ii ) large price movements are followed by high trading volume .the origin of the intermittency can be seen in the local field eq .( 5 ) representing the external influences on the decision - making of interacting - trader . in particular , the second term of tends to encourage a spin flip when magnetization gets large .thus each interacting - trader frequently switches his strategy to the opposite value if the trading volume gets large . as a consequence ,the bull ( bear ) market is unstable and the phase of the high volatility appears .the metastable phases are the analogue of speculative bubbles as , for example , the bull market is defined as a large deviation of the market price from the fundamental price .in fact it is a common saying that `` it takes trading volume to move stock price '' in the real stock market .a typical example is the crash of oct .1987 , when the dow jones industrial average dropped 22.6% accompanied by an estimated shares that changed hands at the new york stock exchange alone .fama has argued that the crash of oct .1987 at the us and other stock markets worldwide could be seen as the signature of an efficient reassessment of and convergence to the correct fundamental price after the long speculative bubble proceeding it .it is interesting to investigate how long the bull or bear markets last from the point of view of practical use .figure 2 shows the distribution of the bull ( bear ) market durations that is defined as the period from the beginning of a bull ( bear ) market to the end of the bull ( bear ) market . in other words ,the bull ( bear ) market duration means the period from a point of time that the market price exceeds the fundamental price to the next point of time that the market price falls short of the fundamental price . in the model oneobserves that the bull ( or bear ) market durations are power law distributed with an exponent of approximately . as shown in the previous works and the simple spin model considered here reproduces major stylized facts of real financial data .the actual distribution of log - return has _ fat tails _ in sharp contrast to a gaussian distribution ( figure 3 ) .that is , there is a higher probability for extreme values to occur as compared to the case of a gaussian distribution .recent studies of the distribution for the absolute returns report power law asymptotic behavior , with an exponent between about 2 and 4 for stock returns .figure 4 shows the cumulative probability distribution of the absolute return that is generated from the model .the observed model exponent of lies in the range of empirical data .let us consider a time - scale at which we observe price fluctuations .the log - return for duration is then defined as .volatility clustering as described in the previous section is this observable defined for an interval ranging from several minutes to more than a month or even longer . 
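Two quantities discussed in this passage can be read off directly from a simulated series: the durations of bull and bear phases (how long the log-price stays on one side of the fundamental value) and the moments of returns coarse-grained over a time-scale, which feed into the scaling analysis that follows. The helper functions below are a minimal sketch.

```python
import numpy as np

def bull_bear_durations(log_price, log_pf):
    # Lengths of consecutive periods in which the market stays on one
    # side of the fundamental price (bull: p > p_f, bear: p < p_f).
    side = np.sign(np.asarray(log_price) - log_pf)
    change = np.flatnonzero(np.diff(side) != 0)
    edges = np.concatenate(([0], change + 1, [len(side)]))
    return np.diff(edges)

def moment_scaling(returns, scales, qs):
    # q-th moments E|r_tau|^q of returns aggregated over tau elementary
    # steps; power-law behaviour in tau signals self-similar volatilities.
    returns = np.asarray(returns)
    out = np.empty((len(qs), len(scales)))
    for j, tau in enumerate(scales):
        n = (len(returns) // tau) * tau
        r_tau = returns[:n].reshape(-1, tau).sum(axis=1)
        for i, q in enumerate(qs):
            out[i, j] = np.mean(np.abs(r_tau) ** q)
    return out
```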
in this intermediate and strongly correlated regime of time - scales ,volatilities at different time - scales may show self - similarity .self - similarity in this context states that volatilities at different time - scales are related in such a way that that the ratio of volatilities at two different scales does not statistically depend on the coarse - graining level .thus daily volatility is related to weekly , monthly volatilities by stochastic multiplicative factors .the self - similarity has been shown to be equivalent to scaling of moments under some conditions .scaling occurs when , where is the scaling function which is related to the statistical property of the multiplicative factors .figure 6 ( a ) depicts the presence of self - similarity in actual data of the nyse stock index .though it is not straightforward to relate time - scales between the simulation and real data , it is interesting to look at how the volatilities at different time - scales behave in the regime where the volatility clustering is valid . figure 6 ( b ) shows the scaling of moments from the data of log - returns calculated for different scales in time - steps of the simulation .we observe a range where self - similarity dominates and that this property is broken at some time - scale , which corresponds to the scale where volatility clustering as seen in figure 5 deviates from a power - law of the autocorrelation function .this observation is encouraging as it might help relate the time - scales of simulations and real markets .in this paper we have considered the spin model presented in in the context of a stock market with heterogeneous traders , that is , fundamentalists and interacting traders .we have demonstrated that magnetization in the spin model closely corresponds to trading volume in the stock market , and the market price is determined by magnetization under natural assumptions . as a consequencewe have been able to give a reasonable interpretation to an aperiodic switching between bull markets and bear markets observed in bornholdt s spin model . as a result, the model reproduces main observations of real financial markets as power - law distributed returns , clustered volatility , positive cross - correlation between volatility and trading volume , as well as self - similarity in volatilities at different time - scales .we also have found that scaling is observed in the distribution of transition times between bull and bear markets .although the power law scaling of the distribution has never been examined empirically on short time scales , the power law statistics showed here is not only an interesting finding theoretically but presumably also useful to measure the risk of security investments in practice .the empirical study will be left for future work .k. sznajd - weron and r. weron , a simple model of price formation , int . j. mod .c * 13 * 115 ( 2002 ) .t. yamano , bornholdt s spin model of a market dynamics in high dimensions , int .j. mod .c * 13 * 89 ( 2002 ) .g. iori , a microsimulation of traders activity in the stock market : the role of heterogeneity , agents interactions and trade frictions , www.arxiv.org/abs/adap-org/9905005 ( 2001 ) ; journal of economic behavior and organization , vol .49 , no . 1 to appear in 2002 .fama , perspectives on october 1987 , or , what did we learn from the crash ? , in r. barro , et .al . , black monday and the future of financial markets , dow jones - irwin , inc . , homewood , illinois . 71 ( 1989 ) . 
[Fig. 1: (a) the log-return, defined as the change in magnetization of the spin model; (b) the magnetization of the spin model; (c) for comparison with (a), the log-return of the Dow Jones daily changes, 1896-1996.]
[Fig. 6: scaling of moments (q-th order) of volatilities at different time-scales; (a) for a stock on the NYSE, with the time-scale in minutes; (b) for the simulation result, with the time-scale in simulation time-steps. In both plots, q ranges from 0.5 (crosses) to 4.0 (downward triangles) in increments of 0.5.]
The dynamics of a stock market with heterogeneous agents is discussed in the framework of a recently proposed spin model for the emergence of bubbles and crashes. We relate the log-returns of stock prices to the magnetization of the model and find that it is closely related to trading volume, as observed in real markets. The cumulative distribution of log-returns exhibits power-law scaling, and scaling is also observed in the distribution of transition times between bull and bear markets.
Keywords: econophysics, stock market, spin model, volatility. PACS: 89.90.+n, 02.50.-r, 05.40.+j.
numerical techniques and , in particular , computer simulations are by now firmly established as a third pillar of the scientific method , complementing the pillars of experiment ( or observation ) and mathematical theory , both of which were erected already at the birth of modern science in the scientific revolution . while in the early times simulation studies were not quite competitive with analytical calculations in condensed matter physics and quantum field theory ( usually based on perturbative and variational approaches ) nor were their outcomes adequate for comparison with experimental results ( usually due to limited length or time scales ) , this situation has dramatically changed over the past decades .this very successful race to catch up was fueled by a combination of two factors : a continuous , sometimes revolutionary refinement of the numerical toolbox , for instance through the invention of cluster algorithms , reweighting or generalized - ensemble techniques in the field of monte carlo simulations , and the impressive increase in generally available computational power , which has followed an exponential form known as moore s law for the past forty years . at any time, however , there has been no shortage of fascinating scientific problems whose computational study required resources at or beyond the cutting edge of the available technology .this has led scientists to regularly use the latest commodity hardware as soon as it became available , but has also motivated the design and construction of a number of special purpose machines such as , e.g. , the cluster processor and janus for the simulation of spin systems or a series of initiatives for qcd calculations with its recent addition of the qpace architecture .it has been true for the last couple of generations of graphics processing units ( gpus ) that their theoretical peak performance , in particular for single - precision floating point operations ( flop / s ) , significantly exceeds that of the corresponding x86 based cpus available at the same time ( as of this writing up to around 100 gflop / s in cpus vs. up to 5 tflop / s in gpus ) .it is therefore natural that scientists and , increasingly , also programmers of commercial applications other than computer games , have recently started to investigate the possible value of gpu based computing for their purposes ; for scientific applications see , e.g. , refs .apart from their mere peak performance , systems based on gpus or other highly parallel architectures such as the cell processor might outperform current cpu based systems also in terms of their efficiency , i.e. , in terms of flop / s per watt or flop / s per euro and thus might also contribute to the advancement of `` green computing . ''the low prices and convenient over - the - counter availability of gpu devices clearly make for advantages as compared to custom - built special - purpose machines , for which many man - months or years need to be invested for the design , realization and programming of devices . 
the relative increase in peak performance of gpus versus cpusis achieved at the expense of flexibility , however : while today s cpus will perform pretty well on most of a large variety of computer codes and , in particular , in a multi - tasking environment where on - the - fly optimizations such as branch prediction are essential , gpus are optimized for the highly vectorized and parallelized floating - point computations typical in computer graphics applications .a one - to - one translation of a typical code written for cpus to gpus will therefore , most often , _ not _ run faster and , instead , algorithms and parallelization and vectorization schemes must be chosen very carefully to tailor for the gpu architecture in order to harvest any performance increases .the efficiency of such calculations in terms of flop / s per _ human _ time crucially depends on the availability of easily accessible programming environments for the devices at hand .while in view of a lack of such supporting schemes early attempts of general purpose gpu ( gpgpu ) calculations still needed to encapsulate the computational entities of interest in opengl primitives , the situation changed dramatically with the advent of device - specific intermediate gpgpu language extensions such as ati stream and nvidia cuda . in the future ,one hopes to become independent of specific devices with general parallel programming environments such as opencl .classical spin systems have turned out to be extremely versatile models for a host of phenomena in statistical , condensed matter and high - energy physics , with applications ranging from critical phenomena over surface physics to qcd . a rather significant effort ,therefore , has been invested over the years into optimized implementations of monte carlo simulations of spin models .they hence appear to be particularly suited for an attempt to fathom the potential gain from switching to the gpu architecture .additionally , there are a plethora of questions relating to classical spin models which , decades of research notwithstanding , are still awaiting a combination of an increase in available computational resources , better algorithms and clever techniques of data analysis to find generally satisfactory answers .this applies , in particular , to disordered systems such as random - field models and spin glasses .as will be discussed below , due to their short - ranged interactions and the typically simple lattice geometries such models appear to be near ideal problems for gpu calculations .this applies , in particular , to the inherently local single spin - flip algorithms discussed here . for studying the critical behavior of models without disorder, cluster algorithms will outperform any optimized implementation of a local spin - flip dynamics already for intermediate lattice sizes ; the possibilities for porting such approaches to gpu will be discussed elsewhere . for disordered systems , on the other hand , efficient cluster algorithms are ( with few exceptions ) not known . for them , instead , local spin - flip simulations combined with parallel tempering moves are the state of the art . 
when comparing gpu and cpu performance for the case of general purpose computational applications , it has become customary to benchmark different implementations in terms of the relative speed - up ( or slow - down ) of the gpu code versus the cpu implementation .while such numbers make for good selling points for the company producing the `` better '' type of computational device , it should be clear that speed - ups , being a relative measure , will vary to a large degree depending on how much effort is invested in the optimization of the codes for the different architectures . to avoid this problem ,the main focus is put here on the _ absolute _ performance of different implementations , measured for the present application mostly in the wall - clock time for performing a single spin flip , citing speed - up factors only as an additional illustration of the relative performance .if relative measures of performance are given , the question arises of what type of cpu code to compare to , in particular , since with the advent of multi - core processors and vector extensions such as sse , cpus also offer a significant potential for optimizations .i decided here to use serial cpu code , reasonably optimized on the level of a high - level programming language and the use of good compilers , as i feel that this comes closest to what is most often being used in actual simulation studies .as regards the possible effects of further cpu optimizations , see also ref . which , however , unfortunately does not cite any measures of absolute performance .simulations of the ferromagnetic ising model using implementations on gpu have been discussed before . compared to these implementations , the current approach with the double checkerboard decomposition and multi - hit updates to be discussed below offers significant advantages .other applications , such as the simulation of ferromagnetic , short - range heisenberg models , the simulation of ising spin glasses with asynchronous multi - spin coding or parallel tempering for lattice spin systems are considered here for the first time .for some very recent discussions of further spin models see also refs . .the rest of this article is organized as follows . in sec .[ sec : architecture ] , i give some necessary background on the architecture of the nvidia gpus used in this study and its consequences for algorithm design .section [ sec : ising ] discusses , in some detail , the chosen implementation of a single - spin flip metropolis simulation of the two - dimensional ( 2d ) ising model and its performance as well as a generalization to the three - dimensional ( 3d ) model .section [ sec : heisen ] is devoted to generalizations of these considerations to continuous - spin systems , exemplified in the 2d heisenberg model .in secs . 
[sec : spinglass ] and [ sec : tempering ] , applications to simulations of spin - glass models and the use of parallel - tempering updates are discussed .finally , sec .[ sec : concl ] contains my conclusions .as indicated above , there are significant differences in the general design of cpu and gpu units .cpus have been optimized over the past decades to be `` smart '' devices for the rather unpredictable computational tasks encountered in general - purpose computing .current intel cpus feature about 800 million transistors on one die .a rather small fraction of them is used for the alus ( arithmetic logic units ) doing actual computations , whereas most of the available transistors are devoted to flow control ( such as out - of - order execution and branch prediction ) and cache memory .this structure is very successful in increasing the performance of _ single_-threaded applications , which still make up the vast majority of general computer codes .in contrast , about 80 % of the 1.4 billion transistors on a gpu die of the nvidia gt200 series ( now superseded by the fermi architecture ) are alus .gpus do not try to be `` clever '' , but they are extremely efficient at doing the same computational steps on different bits of a large data - set in parallel .this is what makes them interesting for applications in scientific computing .figure [ fig : hardware ] shows a schematic representation of the general architecture of a current gpu .the chip contains a number of multiprocessors each composed of a number of parallel processing units .the systems based on the gt200 architecture used in this study feature 30 multiprocessors of 8 processors each ( versus 15 multiprocessors 32 cores for the gtx 480 fermi card ) .the systems come with a hierarchy of memory layers with different characteristics .each processor is equipped with a number of registers which can be accessed with essentially no latency .the processors in a multiprocessor unit have read / write access to a small amount of shared memory ( kb in the gt200 architecture and kb for fermi cards ) which is on - chip and can be accessed with latencies around a hundred times smaller than those for global memory . the large device or global memory ( up to 6 gb in current tesla systems ) is connected with read / write access to all multiprocessors .latency for global memory accesses is of the order of 400600 clock cycles ( as compared to , e.g. , one clock cycle for a multiply or add operation ) .the additional constant and texture memory areas are cached and designed for read - only access from the processing units , in which case they operate very fast with small latencies .the memory of the host computer can not be accessed directly from within calculations on the gpu , such that all relevant data need to be copied into and out of the gpu device before and after the calculation , respectively . 
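The host-device workflow just described — copy the data to the device, launch a kernel over a grid of thread blocks, and copy the results back — can be illustrated with a minimal example. The sketch uses Python with Numba's CUDA interface rather than the CUDA C employed for the work discussed here, so it is an analogue of the programming model, not the actual code.

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale_kernel(x, a):
    # Each thread handles one array element.
    i = cuda.grid(1)                 # global thread index within the whole grid
    if i < x.shape[0]:               # guard the last, possibly partial, block
        x[i] *= a

x = np.arange(1 << 20, dtype=np.float32)
d_x = cuda.to_device(x)              # explicit host -> device copy
threads_per_block = 256
blocks_per_grid = (x.shape[0] + threads_per_block - 1) // threads_per_block
scale_kernel[blocks_per_grid, threads_per_block](d_x, np.float32(2.0))
x = d_x.copy_to_host()               # device -> host copy after the kernel completes
```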
since the processing units of each multiprocessor are designed to perform identical calculations on different parts of a data set ,flow control for this simd ( single instruction multiple data ) type of parallel computations is rather simple .it is clear that this type of architecture is near ideal for the type of calculations typical for computer graphics , namely rendering a large number of triangles in a 3d scene or the large number of pixels in a 2d projection in parallel .the organization of processing units and memory outlined in fig .[ fig : hardware ] translates into a combination of two types of parallelism : the processing units inside of each multiprocessor work synchronously on the same data set ( vectorization ) , whereas different multiprocessors work truly independent of each other ( parallelization ) .the corresponding programming model implemented in the cuda framework is outlined schematically in fig .[ fig : gridblock ] : computations on gpu are encapsulated in functions ( called `` kernels '' ) which are compiled to the gpu instruction set and downloaded to the device .they are executed in a two - level hierarchic set of parallel instances ( `` execution configuration '' ) called a `` grid '' of thread `` blocks '' .each block can be thought of as being executed on a single multiprocessor unit .its threads ( up to 512 for the gt200 architecture and 1024 for fermi cards ) access the same bank of shared memory concurrently .ideally , each thread should execute exactly the same instructions , that is , branching points in the code should be reduced to a minimum .the blocks of a grid ( up to ) are scheduled independently of each other and can only communicate via global memory accesses .the threads within a block can make use of cheap synchronization barriers and communicate via the use of shared ( or global ) memory , avoiding race conditions via atomic operations implemented directly in hardware . on the contrary, the independent blocks of a grid can not effectively communicate within a single kernel call .if synchronization between blocks is required , consecutive calls of the same kernel are required , since termination of a kernel call enforces all block computations and pending memory writes to complete .since the latency of global memory accesses is huge as compared to the expense of elementary arithmetic operations , many computational tasks on gpu will be limited by memory accesses , i.e. , the bandwidth of the memory subsystem .it is therefore crucial for achieving good performance to ( a ) increase the number of arithmetic operations per memory access , and ( b ) optimize memory accesses by using shared memory and choosing appropriate memory access patterns .the latter requirement in particular includes the adherence to the specific _ alignment _ conditions for the different types of memory and clever use of the _ coalescence _ phenomenon , which means that accesses to adjacent memory locations issued by the different threads in a block under certain conditions can be merged into a single request , thus greatly improving memory bandwidth . due to most typical computations being limited by memory accesses ,it is important to use an execution configuration with a total number of threads ( typically several ) much larger than the total number of processing units ( for the tesla c1060 and for the gtx 480 ) .if a thread block issues an access , e.g. 
, to global memory , the gpu s scheduler will suspend it for the number of cycles it takes to complete the memory accesses and , instead , execute another block of threads which has already finished reading or writing its data .the good performance of these devices thus rests on the idea of latency hiding and transparent scalability through flexible thread scheduling .the layout of the gpu architecture outlined above implies guidelines for the efficient implementation of computer simulation codes . along these lines, a code for single - spin flip metropolis simulations of the ferromagnetic ising model is developed .we consider a ferromagnetic , nearest - neighbor , zero - field ising model with hamiltonian on square and simple cubic lattices of edge length , using periodic boundary conditions . in the single - spin flip metropolis simulation scheme for this modeleach step consists of randomly selecting a spin and proposing a spin flip , which is accepted according to the metropolis criterion ,\ ] ] where corresponds to the energy change incurred by the flip and denotes the inverse temperature .it is straightforward to show that this update is ergodic , i.e. , all states of the system can be reached in a finite number of spin - flip attempts with non - zero probability , and satisfies the detailed balance condition ensuring convergence to the equilibrium boltzmann distribution . in practice , one usually walks through the lattice in a sequential fashion instead of picking spins at random , which requires less pseudo - random numbers and , generically , leads to somewhat faster convergence .this updating scheme , in fact , violates detailed balance , but is consistent with the global balance condition which is sufficient to ensure convergence .this point is of some importance for the gpu implementation developed here , and will be further discussed below in section [ sec : balance ] . to leverage the effect of the massively parallel architecture of gpu devices for simulations of spin models , in most casesdomain decompositions where the system is divided into a large number of largely independent sub - units are the only approach with satisfactory scaling properties as the number of processors or the size of the system is increased . for the case of lattice systems , the simplest scheme amounts to a coloring of lattice sites with the minimally feasible number of colors . here , i focus on bipartite graphs such as the square and ( hyper-)cubic lattices where two colors suffice , resulting in a ( generalized ) checkerboard decomposition .generalizations to other cases are trivial . for performing a single spin - flip metropolis simulation of the ising model on gpu .each of the big tiles is assigned as a thread block to a multiprocessor , whose individual processors work on one of the two sub - lattices of all sites of the tile in parallel.,scaledwidth=60.0% ] in such a scheme , each site of one sub - lattice can be updated independently of all others ( assuming nearest - neighbor interactions only ) , such that all of them can be treated in parallel followed by an analogous procedure for the second sub - lattice . 
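As a plain CPU reference for this update scheme (deliberately not GPU code), a vectorized checkerboard sweep can be written as below: every site of one colour has neighbours only of the other colour, so all sites of a colour may be updated simultaneously.

```python
import numpy as np

def checkerboard_sweep(S, beta, rng):
    # One Metropolis sweep of the ferromagnetic Ising model (J = 1, zero
    # field) on a periodic LxL lattice, updating the two checkerboard
    # sub-lattices in turn.
    L = S.shape[0]
    color = np.indices((L, L)).sum(axis=0) % 2
    for parity in (0, 1):
        nn = (np.roll(S, 1, 0) + np.roll(S, -1, 0) +
              np.roll(S, 1, 1) + np.roll(S, -1, 1))
        dE = 2.0 * S * nn                                   # cost of flipping each spin
        flip = (rng.random((L, L)) < np.exp(-beta * dE)) & (color == parity)
        S[flip] *= -1
    return S
```

The GPU implementation discussed next distributes exactly the same two-colour update over thread blocks and threads instead of vectorizing it on the CPU.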
for an implementation on gpu , the checkerboard decomposition needs to be mapped onto the structure of grid blocks and threads .although , conceptually , a single thread block suffices to update one sub - lattice in parallel , the limitation to 512 threads per block for the gt200 architecture ( resp .1024 threads for fermi ) enforces an additional block structure for systems of more than 1024 resp .2048 spins . for the square lattice , in ref . stripes with a height of two lattice spacings were assigned to independent thread blocks while performing all spin updates in global memory .this has several disadvantages , however : ( a ) shared memory with its much lower latency is not used at all , ( b ) determining the indices of the four neighboring spins of each site requires costly conditionals due to the periodic boundary conditions , and ( c ) the system size is limited to for gt200 ( for fermi ) . here , instead , i suggest a more versatile and efficient approach leveraging the inherent power of the memory hierarchy intrinsic to gpus . to this enda twofold hierarchy of checkerboard decompositions is used .figure [ fig : checker ] illustrates this for the case of a square lattice : on the first level , the lattice is divided into big tiles in a checkerboard fashion .these are then updated by individual thread blocks , where the individual threads exploit a second - level checkerboard decomposition of the lattice sites inside each tile . the size of big tiles is thereby chosen such that the spin configuration on the tile ( plus a boundary layer of spins ) fits into shared memory , such that the spin updates can then be performed entirely inside of this much faster memory area .on the block level , through the checkerboard arrangement all tiles of one ( `` even '' ) sub - lattice can be updated concurrently before updating the other ( `` odd '' ) half of tiles . inside of each block , againall sites of the finer sub - lattices are independent of each other and are therefore updated concurrently by the threads of a thread block . in summary, the updating procedure looks as follows : 1 .the updating kernel is launched with thread blocks assigned to treat the _ even _tiles of the coarse checkerboard .2 . the threads of each thread block cooperatively load the spin configuration of their tile plus a boundary layer into shared memory .3 . the threads of each block perform a metropolis update of each _ even _ lattice site in their tile in parallel . 4 .all threads of a block wait for the others to finish at a barrier synchronization point .the threads of each block perform a metropolis update of each _ odd _ lattice site in their tile in parallel .the threads of each block are again synchronized .a second kernel is launched working on the _ odd _ tiles of the coarse checkerboard in the same fashion as for the even tiles .the cooperative loading of each tile into shared memory is organized in a fashion to ensure _ coalescence _ of global memory accesses ,i.e. 
, subsequent threads in each block access consecutive global memory locations wherever possible .while it turns out to be beneficial to load tiles into shared memory already for a single update per spin before treating the second coarse sub - lattice due to an avoidance of bank conflicts in global memory as well as the suppressed need to check for periodic wrap - arounds in determining lattice neighbors , the ratio of arithmetic calculations to global memory accesses is still not very favorable .this changes , however , if a multi - hit technique is applied , where the spins on each of the two coarse sub - lattices are updated several times before returning to the other half of the tiles . as is discussed below in sec .[ sec : balance ] , this works well , in general , and only close to criticality somewhat increased autocorrelation times are incurred . by design ,monte carlo simulations of spin systems require a large amount of pseudo - random numbers .depending on implementation details , for a system as simple as the ising model , the time required for generating random numbers can dominate the total computational cost .hence , the efficient implementation of random - number generators ( rngs ) in a massively parallel environment is a crucial requirement for the efficient use of the gpu architecture .speed is not everything , however , and it is well known that the statistical deficiencies that no _ pseudo _ rng can entirely avoid can have rather dramatic consequences in terms of highly significant deviations of simulation results from the correct answers .hence , some effort must be invested in the choice and implementation of an appropriate rng . from the highly parallel setup described above, it is clear that each of the concurrent threads must be able to generate its stream of random numbers to a large degree independently of all others , since the bottleneck in any configuration with a centralized random - number production would severely impede scaling of the code to a large number of processing units .as each of these ( sub-)sequences of random numbers are used in the same simulation , one needs to ensure that ( a ) either each thread generates different members of the same global sequence of numbers or ( b ) the sequences generated by different threads are at least statistically uncorrelated .the simplest choice of rng is given by the linear congruential generator ( lcg ) of the form with .the authors of ref . used and , originally suggested in ref .the period of this generator is small with . in a simulation of a ising system , for instance ,this period is exhausted already after 256 sweeps . on theoretical grounds , it is argued that one actually should not use more than numbers out of such a sequence , which would render the sequence of available numbers very short indeed .additionally , simple lcgs are known to exhibit strong correlations which can be revealed by plotting -tuples of successive ( normalized ) numbers as points in , where it is found that , already for rather small , the points are confined to a sequence of hyperplanes instead of being uniformly distributed .the choice has the advantage that the whole calculation can be implemented entirely in 32-bit integer arithmetic since on most modern architectures ( including gpus ) integer overflows are equivalent to taking a modulo operation . 
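The skip-ahead needed for many threads to draw from one global LCG sequence can be obtained by composing the affine map x → (a·x + c) mod 2^32 with itself k times via repeated squaring. The constants below are common textbook choices introduced only for illustration, not necessarily those used in the study.

```python
A, C, MASK = 1664525, 1013904223, 0xFFFFFFFF   # illustrative 32-bit LCG constants

def lcg_skip(a, c, k, mask=MASK):
    # Coefficients (A_k, C_k) with x_{n+k} = (A_k * x_n + C_k) mod 2^32,
    # obtained by repeated squaring of the affine map x -> a*x + c.
    A_k, C_k = 1, 0
    while k > 0:
        if k & 1:
            A_k, C_k = (a * A_k) & mask, (a * C_k + c) & mask
        a, c = (a * a) & mask, (a * c + c) & mask
        k >>= 1
    return A_k, C_k

n_threads = 1024
A_stride, C_stride = lcg_skip(A, C, n_threads)   # per-draw stride for every thread

def thread_stream(seed, thread_id, n_draws):
    # Thread t starts t steps into the global sequence and then advances
    # by n_threads steps per draw, so all threads jointly reproduce the
    # single global LCG sequence without overlap.
    A_t, C_t = lcg_skip(A, C, thread_id)
    x = (A_t * seed + C_t) & MASK
    draws = []
    for _ in range(n_draws):
        draws.append(x / 2.0**32)                # uniform in [0, 1)
        x = (A_stride * x + C_stride) & MASK
    return draws
```

The same composition trick carries over to the 64-bit variant mentioned below, with the mask replaced by 2^64 - 1.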
for such power of two moduli , however , the period of the less significant bits is even shorter than that of the more significant bits , such that the period of the least significant bit is only .an advantage for the parallel calculations performed here is that one can easily skip ahead in the sequence , observing that where therefore , choosing equal to the number of threads , all threads can generate numbers out of the same global sequence ( [ eq : lcg ] ) concurrently .an alternative setup , that can not guarantee the desired independence of the sequences associated to individual rng instances , however , starts from randomized initial seeds for each generator , without using any skip - ahead . to improve on the drawback of a short period, one might consider moving to a generator with larger , for instance , where the modulo operation again can be implemented by overflows , this time for -bit integers , a data type which is also available in the cuda framework .as multiplier i chose with provably relatively good properties , where an odd offset , here , needs to be chosen to reach the maximal period of .as for the -bit case , this generator can be used in the parallel framework yielding numbers from a common sequence via skip - ahead , or as independent generators with initial randomized seeds , leading to overlapping sub - sequences . and , , while `` lcg32 '' and `` lcg64 '' correspond to the recursion with and , respectively .all thread blocks use threads . ] for high - precision calculations , one normally would not want to rely on a simple lcg. a significant number of generators with much better statistical properties has been developed in the past , for a review see ref . . for our purposes, however , most of them have severe drawbacks in that they are either quite slow ( such as , for instance , generators that combine a number of elementary rngs or , instead , drop a certain fraction of random numbers generated as in ranlux ) , that they use a large state to operate on ( for instance 32-bit words for the popular mersenne twister ) , or that it is rather hard to ensure that sequences generated independently are to a reasonable degree uncorrelated ( as for most high - quality generators ) .while a state larger than a few words is usually not a problem for serial implementations , in the framework of gpu computing where fast local ( i.e. , shared ) memory is very limited , it is impossible to use several hundred words per thread of a thread block only for random number generation .a reasonable compromise could be given by generators of the ( generalized ) lagged fibonacci type with recursion which operate on a buffer of length with and have good properties for sufficiently large lags and and choosing . if , the maximal period is .for an implementation in single precision arithmetic , i.e. , , and ( see below ) , this results in a rather large period .the recursion ( [ eq : lagged_fibonacci ] ) can be implemented directly in floating - point arithmetic yielding uniform random numbers ] with replicas with one exchange attempt per 100 sweeps of spin flips . 
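The parallel-tempering (replica exchange) scheme referred to here runs several replicas of the system at different temperatures and occasionally attempts to exchange neighbouring temperatures. The exchange step is a simple Metropolis test on the configurational energies, and only the temperature labels are swapped rather than the spin configurations; the host-side sketch below uses illustrative variable names.

```python
import numpy as np

def attempt_exchanges(energies, betas, perm, rng):
    # perm[k] is the index of the replica currently held at inverse
    # temperature betas[k]; only these temperature labels are swapped,
    # never the spin configurations themselves.
    for k in range(len(betas) - 1):
        i, j = perm[k], perm[k + 1]
        # Metropolis acceptance for exchanging the two temperatures
        delta = (betas[k] - betas[k + 1]) * (energies[i] - energies[j])
        if delta >= 0.0 or rng.random() < np.exp(delta):
            perm[k], perm[k + 1] = perm[k + 1], perm[k]
    return perm
```

The per-replica spin-flip sweeps remain on the device, while an exchange step of this kind is cheap enough to run on the host.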
] through its inherent parallelism of intra - replica updates , this scheme appears to be well suited for the massively parallel architecture of current gpus .it is tested here in a reference implementation for the ferromagnetic ising model along the lines of the previously discussed single spin - flip code .the additional copies of the system are mapped to additional thread blocks running in parallel .since the configurational energy is calculated online from the energy changes of single spin flips via a binary - tree reduction algorithm on tiles , it is automatically available after lattice sweeps and hence its determination does not incur any extra overhead . in the current implementation , replica exchange steps are performed on cpu since the computational effort is low and synchronization between thread blocks is required . as usual , instead of exchanging configurations between replicas only the corresponding temperatures are swapped .the boltzmann factors according to ( [ eq : metropolis ] ) can still be tabulated , and are now implemented on gpu as fetches from a two - dimensional texture .a successful replica exchange then requires an update of the texture which is easily possible in the current setup as the exchange moves are carried out on cpu .the benchmark results of the parallel tempering simulation of the 2d ising model on gpu and cpu is shown in fig .[ fig : parallel ] .i chose to use replicas at equally spaced inverse temperatures in the interval $ ] . in a real - world applicationone would probably want to use a more elaborate scheme for choosing these temperatures , but these questions do not have any bearing on the benchmark calculations performed here .as is clearly visible , the presence of additional copies of the system leads to a much better resource utilization for smaller system sizes than that observed for the single - spin flip simulations .hence , significant speed - ups are observed already for small systems .the cpu code performs at a constant ns per combined spin - flip and replica exchange move ( mixed at a ratio of one exchange move per one hundred lattice sweeps ) .the gpu code arrives at a maximum of around ns for the tesla c1060 and ns for the gtx 480 at .the speed - up reaches up to ( c1060 ) resp . ( gtx 480 ) at , but is already ( c1060 ) resp. ( gtx 480 ) for the system . as for disordered systems due to the severe slowing down usually only rather small systems can be studied , good performance of the code is crucial in this regime . increasing the number of exchange moves to one in ten lattice sweeps reduces the maximum performance of the gpu implementation somewhat to ns ( c1060 ) and ns ( gtx 480 ) , respectively .current gpus have a significant potential for speeding up scientific calculations as compared to the more traditional cpu architecture . 
in particular, this applies to the simulation of systems with local interactions and local updates , where the massive parallelism inherent to the gpu design can work efficiently thanks to appropriate domain decompositions .the simulation of lattice spin systems appears to be a paradigmatic example of this class as the decomposition remains static and thus no significant communication overhead is incurred .observing substantial speed - ups by factors often exceeding two orders of magnitude as for the case studies reported here requires a careful tailoring of implementations to the peculiarities of the considered architecture , however , in particular paying attention to the hierarchic organization of memories ( including more exotic ones such as texture memory ) , the avoidance of branch divergence and the choice of thread and block numbers commensurate with the capabilities of the cards employed . for achieving good performance ,it is crucial to understand how these devices hide the significant latency of accessing those memories that do not reside on die through the interleaved scheduling of a number of execution threads significantly exceeding the number of available computing cores .it is encouraging that employing such devices with the rather moderate coding effort mediated by high - level language extensions such as nvidia cuda or opencl updating times in the same ballpark as those of special purpose machines such as janus with a vastly higher development effort can be reached .a regularly uttered objection against the systematic use of gpus for scientific computing criticizes them as being a possibly too special and exotic architecture with unknown future availability and architectural stability as compared to the traditional and extremely versatile x86 cpu design .while there is certainly some truth to this argument , there can be no doubt about the fact that massive parallelism is not only the present state of the gpu architecture , but also the future of cpu based computing . as of this writing , intel cpus feature up to eight cores and amd chips up to twelve cores per die , the corresponding road - maps projecting even significantly more cores in the foreseeable future .supercomputers will soon count their number of cores in the millions . due to thisdevelopment , serial computing will not remain a viable option for serious computational science much longer .much effort will need to be invested in the years to come into solving scientific problems employing a _ massive _ number of parallel computational units . in this respect , gpu computing ,apart from currently being more efficient for many problems in terms of flop / s per watt and per euro than cpu based solutions , is rather less exotic than pioneering .an ideal application for gpu computing in the field of the simulation of spin systems appear to be disordered systems , where cluster algorithms are in general not efficient and a natural parallelism occurs from the quenched average over disorder , possibly combined with the parallel tempering algorithm . using asynchronous multi - spin coding for the ising spin glass , spin flip times down to ps can be achieved .systems with continuous spins are particularly well suited for gpu deployment , since one finds a relatively stronger overhead of arithmetic operations over memory fetches as compared to systems with discrete spins there . 
for the heisenberg model , speed - ups up to a factor of can be obtained when making use of the highly optimized special function implementations in single precision .while these examples of single - spin flip simulations look rather promising , it is clear that other well - established simulation algorithms for spin systems are less well suited for the parallelism of gpus , including the cluster algorithms in the vicinity of critical points , where spin clusters spanning the whole system need to be identified , or multi - canonical and wang - landau simulations , which require access to the current values of a global reaction coordinate ( such as , e.g. , the energy or magnetization ) for each single spin flip , thus effectively serializing all spin updates .it is a challenge for future research to find implementations or modifications of such algorithms suitable for massively parallel computers .i am indebted to t. yavorskii for a careful reading of the manuscript . support by the dfg through the emmy noether programme under contract no . we4425/1 - 1 and by the schwerpunkt fr rechnergesttzte forschungsmethoden in den naturwissenschaften ( srfn mainz ) is gratefully acknowledged .52 natexlab#1#1[2]#2 , , , , , , ( ) . , , , ( ) ., , , ( ) . , , , ( ) ., , , ( ) . , , , , ( ) ., , , , , , , , , , , , , , , , , , , , , ( ) . , , , , , , , , , , , , , , , , , , , , , , , , , ( ) . , , , , , , ( ) . , , , , , , ( ) . , , , , , ( ) . , , , , ( ) . , . ,, , , ( ) . , , ( ) . , , , ( ) .( ed . ) , , , , ., , , , ( ) . , , , , ( ) . , , , , ( ) ., , , ( ) . , , , , , , , , in : , , , , ( eds . ) , , volume of _ _ , , , , pp . . , , , , , , , ( ) . , , , , , . , , , , . , , , ( ) . , , , , ( ) . , , , ( ) . , , , , , , , edition , . , , , , edition , ., , , , edition , ., , , ( ) . , , in : , , pp . ., , , ( ) . , , , , ., , in : ( ed . ) , , , , , p. ., , , ( ) . , , ( ) . ., , , , ( ) . , , , , ( ) ., , in : , , , , p. ., , , , , ( ) ., , , , ( ) . , , , ( ) .
Graphics processing units (GPUs) are increasingly being used for general computational purposes. This development is motivated by their theoretical peak performance, which significantly exceeds that of broadly available CPUs. For practical purposes, however, it is far from clear how much of this theoretical performance can be realized in actual scientific applications. As discussed here for the case of studying classical spin models of statistical mechanics by Monte Carlo simulations, only an explicit tailoring of the algorithms to the specific architecture under consideration allows one to harvest the computational power of GPU systems. A number of examples are discussed, ranging from Metropolis simulations of ferromagnetic Ising models over continuous Heisenberg and disordered spin-glass systems to parallel-tempering simulations. Significant speed-ups compared to serial CPU code as well as to previous GPU implementations are observed.
Keywords: Monte Carlo simulations, graphics processing units, Ising model, Heisenberg model, spin glasses, parallel tempering.
the study of the hydrodynamics of colloidal suspensions of passive particles is an old yet still active subject in soft condensed matter physics and chemical engineering . in recent yearsthere has been a growing interest in suspensions of active colloids , which exhibit rich collective behaviors quite distinct from those of passive suspensions .there is a growing number of computational methods for modeling active suspensions , which are typically built upon well - developed techniques for passive suspensions in steady stokes flow , i.e. , at zero reynolds number .since active particles typically have metallic subcomponents , they are often significantly denser than the solvent and sediment toward the bottom wall , making it necessary to address confinement and implement non - periodic boundary conditions in any method aimed at simulating experimentally - relevant configurations .furthermore , since collective motions seen in active suspensions involve large numbers of particles , and since hydrodynamic interactions among particles decay slowly like the inverse of the distance , it is crucial to develop methods that can capture long - ranged hydrodynamic effects , yet still scale to tens or hundreds of thousands of particles . for suspensions of passive particles the methods of brownian and stokesian dynamics have dominated in chemical engineering , and related techniques have been used in biochemical engineering .these methods simulate the overdamped ( diffusive ) dynamics of the particles by using green s functions for steady stokes flow to capture the effect of the fluid .while this sort of implicit solvent approach works very well in many situations , it has several notable technical difficulties : achieving near linear scaling for many - particle systems is technically challenging , handling non - trivial boundary conditions ( bounded systems ) is complicated and has to be done on a case - by - case basis , generalizations to non - spherical ( and in particular complex ) particle shapes is difficult , and including thermal fluctuations is non - trivial due to the need to obtain stochastic increments with the desired covariance . in this workwe develop relatively low - accuracy but flexible and simple rigid multiblob methods that address these difficulties .our approach builds heavily on a number of existing techniques , combining elements from several distinct but related methods .we extensively test the proposed methods and study their accuracy and performance on a number of test problems .the continuum formulation of the stokes equations with suitable boundary conditions on the surfaces of a collection of rigid particles is well - known and summarized in more detail in appendix [ sec : continuumformulation ] . 
due to the linearity of the stokes equations, there is an affine mapping from the applied forces andtorques and any specified _ apparent slip _ velocity due to active boundary layers to the resulting particle motion given by the linear velocities and the angular velocities .specifically , =\m{\mathcal{n}}\left[\begin{array}{c } \v f\\ \v{\tau } \end{array}\right]-\breve{\m{\mathcal{n}}}\breve{\v u},\label{eq : n_def_cont}\ ] ] where is the _ mobility matrix _ , and is an _ active mobility _ linear operator .the _ mobility problem _ consists of computing the rigid - body motion given the applied forces and torques and apparent slip .the inverse of this problem is the _ resistance problem _ , which computes the forces and torques on the body given a specified motion of the body and active slip .solving the mobility problem is a key component of any numerical method for modeling the deterministic or fluctuating ( brownian ) dynamics of the particles . in this paperwe develop a _ mobility solver _ for suspensions of rigid particles immersed in viscous fluid , specifically , we develop novel preconditioners for iterative solvers for the unknown motions of the rigid bodies , given the applied external forces and torques as well as active apparent slip on the surface of the particles . as we discuss in more detail in the body of the paper , our formulation can readily solve the resistance problem ; however , our iterative solvers will prove to be more scalable for mobility computations ( which are of primary interest ) than for resistance computations .key to the success of our iterative solvers is the idea that instead of eliminating variables using _ exact _ schur complements and solving a _ reduced _ system iteratively , as done in the majority of existing methods , one should instead iteratively solve an _ extended _ system that includes all of the variables .this has the key advantage that the matrix - vector product becomes an efficient direct calculation , and the schur complement can be computed only _ approximately _ and used to construct an effective preconditioner .like many other authors , we construct rigid bodies of essentially arbitrary shape as a collection of rigidly - connected collection of `` blobs '' or `` beads '' forming a composite object that we will refer to as a _ rigid multiblob_. the hydrodynamic interactions between blobs are represented using a rotne - prager tensor generalized to the desired domain geometry ( boundary conditions ) , specifically , we use the the rotne - prager - yamakawa ( rpy ) tensor for an unbounded domain , and the rotne - prager - blake ( rpb ) tensor for a half - space domain . in section[ sec : rigidmultiblobs ] we describe how to obtain the hydrodynamic coupling between a large collection of rigid multiblobs by solving a large linear system for lagrange multipliers enforcing the rigidity .a key contribution of our work is to develop an indefinite saddle - point preconditioner for iterative solution of the resulting linear system .this preconditioner is based on a block - diagonal approximation of the blob - blob mobility matrix , in which all hydrodynamic interactions among distinct bodies ( more precisely , among blobs on distinct bodies ) are neglected .the only system - specific component is the implementation of a fast matrix - vector multiplication routine , which in turn requires a scalable method for multiplying the rpy mobility matrix by a vector . 
for simple geometries such as an unbounded domain or particles sedimented next to a no - slip boundary , simple analytical formulas for the rpy tensorare well - known , and can be used to construct an efficient matrix - vector multiplication routine , for example , using fast multipole methods ( fmms ) , or even direct summation on a gpu .we numerically study the performance and accuracy of the rigid multiblob methods for suspensions in an unbounded domain in section [ sec : resultsunbounded ] , and study particles sedimented near a no - slip boundary in section [ sec : resultswall ] .we find that resolving spherical particles with twelve blobs placed on the vertices of an icosahedron is notably more accurate than the fts ( force - torque - stresslet plus degenerate quadrupole ) truncation typically employed in stokesian dynamics simulations , provided that the effective hydrodynamic radius of the rigid multiblob is adjusted to account for the finite size of the blobs .we also find that a small number of iterations of a krylov method are required to solve the required linear system , and importantly , the number of iterations is constant _ independent _ of the the number of rigid bodies , making it possible to develop a linear or near - linear scaling algorithm . for _ resistance problems _, however , we observe a number of iterations growing at least as fast as the linear dimensions of the system .this is consistent with similar studies of iterative solvers for stokesian dynamics by ichiki . for confined systems , however , even in the simplest case of a periodic system ,the green s function for stokes flow and the associated rpy tensor is difficult to obtain in closed form , and when it is possible to write an analytical result , the resulting formulas are typically based on infinite series that are expensive to evaluate . for periodic systemsthis is commonly addressed by using ewald summation based on the fast fourier transform ( fft ) ; the present state - of - the - art for stokes flow is the spectral ewald method , which has recently been used for stokesian dynamics simulations of periodic suspensions .a key deficiency of most existing methods is that they rely critically on having triply periodic domains and the use of the fft .generalizing these methods to non - periodic domains while keeping their linear scaling requires a large development effort and typically a new implementation for every different geometry .furthermore , in a number of applications involving active particles , there is a surface slip ( e.g. 
, electrohydrodynamic or osmophoretic flow ) induced on the bottom boundary due to the gradients created by the particles , and this slip drives or at least strongly affects the motion of the particles .accounting for this slip requires solving an additional equation such as a poisson or laplace equation for the electric potential or concentration of chemical fuel with nontrivial boundary conditions on the particle and wall surfaces .the solution of this additional equation provides the slip boundary condition for the stokes equations , which must be solved to find the resulting fluid flow and active particle motion .such nontrivial multi - physics coupling is quite hard to accomplish in existing methods .to address these difficulties , in section [ sec : rigidibamr ] we develop a method for general cuboidal confined domains which does not require analytical green s functions .this relies on an immersed boundary ( ib ) method for obtaining an approximation to the rpy tensor in confined geometries , as recently developed by some of us .this technique has been combined with the concept of multiblob representation of rigid bodies in a follow - up work , but in this work stiff elastic springs were used to enforce the rigidity . by contrast , we ensure the rigidity of the multiblobs via lagrange multipliers which are solved concurrently with solving for the fluid pressure and velocity .our key novel contribution is an effective preconditioner for the rigidly - constrained stokes problem in periodic and non - periodic domains , obtained by combining our recently - developed preconditioner for a rigid - body ib method with a block - diagonal preconditioner for the mobility subproblem . in the ib method developed in section [sec : rigidibamr ] and studied numerically in section [ sec : resultsconfined ] , analytical green s functions are replaced by an `` on the fly '' computation carried out by a standard finite - volume fluid solver .this stokes solver can readily handle nontrivial boundary conditions , for example , slip along the walls can easily be accounted for .furthermore , suspensions at small but nonzero reynolds numbers can be handled with little extra work .additionally , we avoid uncontrolled approximations relying on truncations of multipole expansions to a fixed order , and we can seamlessly handle arbitrary body shapes and deformation kinematics .lastly , and importantly , in the spirit of fluctuating hydrodynamics , it is straightforward to generate the stochastic increments required to simulate the brownian motion of small rigid particles suspended in a fluid by including a fluctuating stress in the fluid equations , as we will discuss in more detail in future work ; here we focus on the deterministic mobility and resistance problems . 
at the same time, our method also has some disadvantages compared to methods such as boundary integral or boundary element methods .notably , it requires filling the domain with a dense uniform fluid grid , which is expensive at low densities .it is also a low - order method that can not compute solutions as accurately as spectral boundary integral formulations .we do believe , nevertheless , that the method developed here offers a good compromise between accuracy , efficiency , scalabilty , flexibility and extensibility , compared to other more specialized formulations .we apply our methods to a number of test problems for which analytical solutions are known , but also study a few nontrivial problems that have not been properly addressed in the literature . in section [ sub : cylinderwall ]we study the mobility of a cylinder of finite aspect ratio that is parallel to a no - slip boundary and compare to experimental measurements and asymptotic theory based on a slender - body approximation . in section[ sub : activepair ] we study the formation of a stable rotating pair of active `` extensor '' or `` pusher '' nanorods next to a no - slip boundary , and confirm the direction of rotation observed in recent experiments . in section [ sub : boomerang ] we compute the effective diffusion coefficient of a boomerang - shaped colloid in a slit channel , and compare to recent experimental measurements .in section [ sub : binarysedimentation ] we study the mean and variance of the sedimentation velocity in a binary suspension of spheres of size ratio two , and compare to recent stokesian dynamics simulations .in this section we develop the rigid multiblob model of colloidal particles at zero reynolds number .the kind of models we use here are not new , but we present the method in detail instead of relying on previous presentations , the most relevant of which are those of swan _ et al . _ .this is in part to present the formulation in our notation , and in part to explain the differences with other closely - related methods .our key novel contribution in this section is the preconditioned iterative solver described in section [ sub : preconditioner ] ; the performance and scaling of our mobility solver is studied numerically for unbounded domains in section [ sub : convergencefmm ] , and for particles confined near a single wall in section [ sub : convergencewall ] .the modeling of suspensions of rigid spheres at small reynolds numbers is a well - developed field with a long history . a powerful class of methodsare related to brownian dynamics with hydrodynamic interactions ( bdhi ) and stokesian dynamics ( sd ) ( note that these terms are used differently in different communities ) .the difference between these two ( as we define them here ) is that bdhi uses what we call a minimally - resolved model in which each colloid ( for colloidal suspensions ) or polymer bead ( for polymeric suspensions ) is only resolved at the monopole level , more precisely , at the rotne - prager level . 
by contrast , in sd the next level in a multipole expansion is taken into account and torques and stresslets are also accounted for .it has been shown recently that yet one more order needs to be kept in the multipole expansion to model suspensions of active spheres , and a suitable galerkin truncation of the multipole hierarchy has been developed for active spheres in unbounded domains , as well as for active spheres confined near a no - slip boundary .it is also possible to account for higher - order multipoles , leading to more complicated ( and computationally expensive ) but also more accurate models .it has also been shown that multipole expansions converge very poorly for nearly touching spheres due to the divergence of the lubrication forces , and in most methods for dense colloidal suspensions of hard spheres pairwise lubrication corrections are added in a somewhat _ ad hoc _ manner ; we will refer to this approach as sd with lubrication . given the well - developed tools for modeling sphere suspensions , it is natural to leverage them when modeling suspensions of particles of more complex shapes .here we describe a technique capable of , in principle , modeling passive rigid particles of arbitrary shape .the method can also be used to model , without any extra effort , active particles with active slip layers , i.e. , particles which are phoretic ( e.g. , osmo - phoretic , electro - phoretic , chemo - phoretic , etc . ) due to an apparent slip at their surface . for the purposes of hydrodynamic calculations ,we discretize rigid bodies by constructing them out of multiple rigidly - connected spherical `` blobs '' or beads of hydrodynamic radius .these blobs can be thought of as hydrodynamically minimally - resolved spheres forming a rigid conglomerate that approximates the hydrodynamics of the actual rigid object being studied .we prefer the word `` blob '' over `` sphere '' or `` point '' or `` monopole '' because blobs are not spheres as they do not have a well - defined surface like spheres do , they have a finite size associated with them ( the hydrodynamic blob radius ) unlike points , and they account for a degenerate quadrupole associated to the faxen corrections in addition to a force monopole . the word `` bead '' is also appropriate , but we prefer to reserve that for polymer models ( bead - spring or bead - link models ) .examples of `` multiblob '' models of two types of colloidal particles are illustrated in fig .[ fig : blobmodels ] . in the leftmost panel, we show a minimally - resolved model of a rigid rod , with dimensions similar to active metallic `` nanorods '' used in recent experiments . in this minimally - resolved model the blobs , shown as spheres with radius equal to ,are placed in a row along the axes of the cylinder .such minimally - resolved models are particularly suited for cylinders of large but finite aspect ratio ; for very thin rods such as actin filaments boundary integral methods based on slender - body theory will be more effective . in the more resolved model illustrated in the second panel from the left , a hexagon of blobs is placed around the circumference of the cylinder to better resolve it . a yet more resolved model with a dodecagon of blobs around the cylinder circumference is shown in the third panel from the left . in the rightmost panel of fig .[ fig : blobmodels ] we show a blob model of a colloidal boomerang with a square cross - section , as manufactured using lithography and studied in . 
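to make the multiblob construction concrete, the following sketch generates blob coordinates for two of the models shown in fig. [fig:blobmodels]: a minimally resolved rod with blobs placed in a row along its axis, and the 12-blob icosahedral shell used as a minimally resolved sphere. the function names, blob spacing and geometric radius are illustrative choices of ours; in particular, the effective hydrodynamic radius of such a multiblob differs from its geometric radius and has to be calibrated separately, as discussed in the results sections.

```python
import numpy as np

def rod_blobs(n_blobs, length):
    """Minimally resolved rod: n_blobs blobs placed in a row along the z axis.
    Spacing and blob radius are illustrative, not the calibrated values."""
    z = np.linspace(-0.5 * length, 0.5 * length, n_blobs)
    return np.column_stack([np.zeros(n_blobs), np.zeros(n_blobs), z])

def icosahedron_blobs(geometric_radius):
    """12 blobs on the vertices of a regular icosahedron, a common
    minimally resolved model of a sphere."""
    phi = (1.0 + np.sqrt(5.0)) / 2.0
    verts = []
    for s1 in (-1.0, 1.0):
        for s2 in (-phi, phi):
            verts += [(0.0, s1, s2), (s1, s2, 0.0), (s2, 0.0, s1)]
    verts = np.array(verts)
    return geometric_radius * verts / np.linalg.norm(verts[0])

rod = rod_blobs(n_blobs=14, length=2.0)        # illustrative aspect ratio
sphere = icosahedron_blobs(geometric_radius=1.0)
print(rod.shape, sphere.shape)                 # (14, 3) (12, 3)
```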
similar `` bead '' or `` raspberry '' models appear in a number of studies of hydrodynamics of particle suspensions . in many studies ,stiff elastic springs between the blobs are used to keep the structure rigid ; in some models the fluid or particle inertia is included also . here, we keep the structures _ strictly rigid _ and refer to the resulting structures as _ rigid multiblob _ models .such rigid multiblob models have been used in a number of prior studies , but we refer to for a detailed exposition . our primary focus in this section will be to develop algorithmic techniques that allow suspensions of tens or even hundreds of thousands of rigid multiblob particles to be simulated efficiently .this is in many ways primarily an exercise in numerical linear algebra , but one that is _ necessary _ to make the rigid multiblob approach useful for simulating moderately dense suspensions .a second goal , which will be realized in the results sections of this paper , will be to carefully assess the accuracy of rigid multiblob models as a function of their resolution ( number of blobs per body ) .we now summarize the main equations used to solve the mobility and resistance problems for a collection of rigid multiblobs immersed in a viscous fluid .we first discuss the hydrodynamic interaction between blobs , and then discuss the hydrodynamic interactions between rigid bodies . in the notationused below , we will use the latin indices for individual blobs , and reserve latin indices for bodies .we will denote with the set of blobs comprising body .we will consider a suspension of rigid bodies with a chosen reference _tracking point _ on body having position , and the orientation of body relative to a _ reference configuration _ represented by the quaternion . the linear velocity of ( the chosen tracking point on ) body will be denoted with , and its angular velocity will be denoted with .the total force applied on body is , and the total torque is .the composite configuration vector of position and orientation of body will be denoted with , the composite vector of linear and angular velocity will be denoted with , and the composite vector of forces and torques with .the position of blob will be denoted with , and its velocity will be denoted with .when not subscripted , vectors will refer to the composite vector formed by all bodies or all blobs on all bodies .for example , will denote the linear and angular velocities of all bodies , and will denote the positions of all of the blobs .we will use a superscript to denote portions of composite vectors for all blobs belonging to one body , for example , will denote the vector of positions of all blobs belonging to body .the fact that the multiblob is rigid is expressed by the `` no - slip '' kinematic condition , this no - slip condition can be written for all bodies succinctly as where is a simple geometric matrix .we will denote the apparent velocity of the fluid at point with .for a _ passive blob _ , i.e. 
, a blob that represents a passive part of the rigid particle , the _ no - slip _ boundary condition requires that .however , for _ active blobs _ an additional apparent slip of the fluid relative to the surface of the body can be imposed , resulting in a nonzero _ slip _ .this kind of active propulsion is termed `` implicit swimming gait '' by swan and brady .an `` explicit swimming gait '' can be taken into account without any modifications to the formulation or algorithm by simply replacing ( [ eq : noslip_rigid ] ) with that is , the only difference between `` slip '' and `` deformation '' is whether the blobs move relative to the rigid body frame dragging the fluid along , or stay fixed in the body frame while the fluid passes by them .one can of course even combine the two and have the blobs move relative to the rigid body while also pushing flow , for example , this can be used to model an active filament where there is slip along the filament but the filament itself is moving . in the end ,the only thing that matters to the formulation is the velocity difference in appendix [ app : permeable ] we explain how to model permeable ( porous ) bodies by making the apparent slip proportional to the fluid - blob force .the fundamental problem tackled in this paper is the solution of the _ mobility problem _ , that is , the computation of the motion of the bodies given the applied forces and torques on the bodies and the slip velocity .because of the linearity of the stokes equations and the boundary conditions , there exists an affine linear mapping where the _ body mobility matrix _ depends on the configuration and is the central object of the computation . the _ active mobility matrix _ is a discretization of the active mobility operator , and gives the active motion of force- and torque - free particles .note that is related to , but different from , the propulsion matrix introduced in .the propulsion matrix is essentially a finite - dimensional projection of the operator that only depends on the choice of basis functions used to express the surface slip velocity , and does not depend on the specific discretization of the body or quadrature rules , as does . in the remainder of this sectionwe develop a method for computing given and , i.e. , a method for computing the combined action of and , for large collections of non - overlapping rigid particles .we will also briefly discuss the _ resistance problem _ , in which we are given the motion of the bodies as a specified kinematics , and seek the resulting drag forces and torques , which have the form where the _ body resistance matrix _ and is the _ active resistance matrix_. the blob - blob translational mobility matrix describes the hydrodynamic interactions between the blobs , accounting for the influence of the boundaries . specifically ,if the blobs are free to move ( i.e. , not constrained rigidly ) with the fluid under the action of set of translational forces , the translational velocities of the blobs will be the mobility matrix is a block matrix of dimension , where is the dimensionality .the block computes the velocity of blob given the force on blob , neglecting the presence of the other blobs in a _ pairwise _ approximation . to construct a suitable , we can think of blobs as spheres of hydrodynamic radius . 
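before specifying the blob-blob mobility, the rigidity constraint itself is easy to make explicit. the sketch below assembles the geometric matrix that maps the rigid-body velocity (u, omega) of one body to the velocities of its blobs, blob velocity = u + omega x (r_i - q); the function names are ours, and the construction is simply the one implied by the no-slip condition above. the blob-blob mobility that couples the constraint forces on these blobs is specified next.

```python
import numpy as np

def skew(d):
    """Cross-product (skew-symmetric) matrix: skew(d) @ w == np.cross(d, w)."""
    return np.array([[0.0, -d[2], d[1]],
                     [d[2], 0.0, -d[0]],
                     [-d[1], d[0], 0.0]])

def kinematic_matrix(blob_positions, tracking_point):
    """Geometric matrix K for a single rigid body: maps the body's rigid
    velocity [U, omega] (6 numbers) to the velocities of its n blobs
    (3n numbers), blob velocity = U + omega x (r_i - q)."""
    n = blob_positions.shape[0]
    K = np.zeros((3 * n, 6))
    for i, r in enumerate(blob_positions):
        d = r - tracking_point
        K[3 * i:3 * i + 3, 0:3] = np.eye(3)
        K[3 * i:3 * i + 3, 3:6] = -skew(d)   # omega x d == -skew(d) @ omega
    return K

# quick consistency check against an explicit cross product
r = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
q = np.zeros(3)
U, w = np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
v = kinematic_matrix(r, q) @ np.concatenate([U, w])
assert np.allclose(v[:3], U + np.cross(w, r[0] - q))
```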
for two well - separated spheres and of radius we have the far - field approximation * * where is the fluid viscosity and is the green s function for the steady stokes problem with unit viscosity , with the appropriate boundary conditions such as no - slip on the boundaries of the domain .the differential operator is called the faxen operator .note that the form of ( [ eq : mobilityfaxen ] ) guarantees that the mobility matrix is symmetric positive semidefinite ( spd ) by construction since is an spd kernel .for a three dimensional unbounded domain with fluid at rest at infinity , the green s function is isotropic and given by the oseen tensor , using this expression in ( [ eq : mobilityfaxen ] ) yields the far - field component of the rotne - prager - yamakawa ( rpy ) tensor , commonly used in bdhi .a correction needs to be introduced when particles are close to each other to ensure an spd mobility matrix , which can be derived by using an integral form of the rpy tensor valid even for overlapping particles , to give where , and the diagonal blocks of the mobility matrix , i.e. , the self - mobility can be obtained by setting to obtain , which matches the stokes solution for the drag on a translating sphere ; this is an important continuity property of the rpy tensor .we will use the rpy tensor ( [ eq : rpytensor ] ) for simulations of rigid - particle suspensions in unbounded domains in section [ sec : resultsunbounded ] . in principle, it is possible to generalize the rpy tensor to any flow geometry , i.e. , to any boundary conditions ( and imposed external flow ) , including periodic domains , as well as confined domains .however , we are not aware of any tractable analytical expressions for the complete rpy tensor ( including near - field corrections ) even for the simplest confined geometry of particles near a single no - slip boundary . in the presence of a single no - slip wall , an analytic approximation to given by swan and brady ( and re - derived later in ) as a generalization of the rotne - prager ( rp ) tensor to account for the no - slip boundary using blake s image construction .as shown in ref . , the corrections to the rotne - prager tensor ( [ eq : mobilityfaxen ] ) for particles that overlap each other but not the wall are independent of the boundary conditions , and are thus given by the standard rpy expressions ( [ eq : rpytensor ] ) for unbounded domains .therefore , in section [ sec : resultswall ] we compute by adding to the rpy tensor ( [ eq : rpytensor ] ) wall corrections corresponding to the translation - translation part of the rotne - prager - blake mobility given by eqs . ( b1 ) and ( c2 ) in , ignoring the higher order torque and stresslet terms in the spirit of the minimally - resolved blob model .the expressions derived by swan and brady assume that neither particle overlaps the wall and the resulting expressions are not guaranteed to lead to an spd if one or more blobs overlap the wall , as we discuss in more detail in the conclusions .for more complicated geometries , such as a slit or a square ( duct ) channel , analytical computations of the green s function become quite complicated and tedious , and numerical computations typically require pre - tabulations . 
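the unbounded-domain formulas above translate directly into code. the sketch below evaluates the regularized rpy blocks (including the overlap correction and the self-mobility) and assembles a dense blob-blob mobility matrix; it covers only the translation-translation coupling, omits the wall corrections of the rotne-prager-blake tensor, and is meant for small test systems, since for large suspensions the mobility is applied matrix-free through an fmm or gpu summation.

```python
import numpy as np

def rpy_block(r_vec, a, eta):
    """3x3 Rotne-Prager-Yamakawa translational mobility block for two blobs
    of hydrodynamic radius a separated by r_vec, in an unbounded fluid of
    viscosity eta (regularised form, valid also for overlapping blobs)."""
    r = np.linalg.norm(r_vec)
    if r < 1e-12:                                    # self-mobility, 1/(6 pi eta a)
        return np.eye(3) / (6.0 * np.pi * eta * a)
    rr = np.outer(r_vec, r_vec) / r**2               # projector r_hat r_hat
    if r >= 2.0 * a:                                 # far-field form
        c1 = 1.0 + 2.0 * a**2 / (3.0 * r**2)
        c2 = 1.0 - 2.0 * a**2 / r**2
        return (c1 * np.eye(3) + c2 * rr) / (8.0 * np.pi * eta * r)
    # overlapping blobs: RPY regularisation keeps the mobility positive semidefinite
    c1 = 1.0 - 9.0 * r / (32.0 * a)
    c2 = 3.0 * r / (32.0 * a)
    return (c1 * np.eye(3) + c2 * rr) / (6.0 * np.pi * eta * a)

def dense_blob_mobility(positions, a, eta):
    """Assemble the dense 3N x 3N blob-blob mobility matrix M.  Only practical
    for small N; large systems require a matrix-free product (FMM, GPU)."""
    n = positions.shape[0]
    M = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(n):
            M[3*i:3*i+3, 3*j:3*j+3] = rpy_block(positions[i] - positions[j], a, eta)
    return M
```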
in section [ sec : resultsconfined ] we explain how a grid - based finite volume stokes solver can be used to obtain the action of the green s function and thus compute the action of the mobility matrix for confined domains , for essentially arbitrary combinations of periodic , free - slip , no - slip , or stress boundary conditions .after discretizing the rigid bodies as rigid multiblobs , we can write down a system of equations that constrain the blobs to move rigidly in a straightforward manner . letting be a vector of forces ( lagrange multipliers ) that acts on each blob to enforce the rigidity of the body , we have the following linear system for , , and for all bodies , the first equation is the no - slip condition obtained by combining ( [ eq : w_mlambda ] ) and ( [ eq : noslip_rigid ] ) .the second and third equations are the force and torque balance conditions for body .note that the physical interpretation of is that of a total force on the portion of the surface of the body associated with a given blob .if one wants to think of ( [ eq : rigidsystem ] ) as a regularized discretization of the first - kind integral equation ( [ eq : first_kind ] ) and obtain a pointwise value of the traction force _ density _ , one should divide by the surface area associated with blob , which plays the role of a quadrature weight ; we will discuss more sophisticated quadrature rules in the conclusions .we can write the _ mobility problem _ ( [ eq : rigidsystem ] ) in compact matrix notation as a _ saddle - point _ linear system of equations for the rigidity forces and unknown motion , \left[\begin{array}{c } \v{\lambda}\\ \v u \end{array}\right]=\left[\begin{array}{c } \slip\\ -\v f \end{array}\right].\label{eq : saddle_m}\ ] ] forming the schur complement by eliminating we get ( see also eq . ( 1 ) in or eq .( 32 ) in ) where the body mobility matrix is and is evidently spd since is .although written in this form using the inverse of , unlike in a number of prior works , we obtain by solving ( [ eq : saddle_m ] ) directly using an iterative solver , as we explain in more detail in section [ sub : preconditioner ] .we note that one can compute a fluid velocity field from using a procedure we describe in appendix [ app : renderflow ] .the _ resistance problem _ , on the other hand , consists of solving for in and then computing , giving at first glance , it appears that solving the resistance system ( [ eq : resistance_problem ] ) is easier than solving the saddle - point problem ( [ eq : saddle_m ] ) ; however , as we explain in more detail in section [ sub : convergencefmm ] , the mobility problem is significantly easier to solve using iterative methods than the resistance problem , consistent with similar observations in the context of stokesian dynamics . observe that the saddle - point formulation ( [ eq : saddle_m ] ) applies more broadly to _ mixed _mobility / resistance problems , where some of the rigid body degrees of freedom are constrained but some are free .an example is a suspension of spheres being rotated by a magnetic field at a specified angular velocity but free to move translationally , or a suspension of colloids fixed in space by strong laser tweezers but otherwise free to rotate , or even a hinged body that can only move in a partially - constrained manner . 
in cases such as thesewe simply redefine to contain the free kinematic degrees of freedom and modify the definition of the kinematic matrix .much of what we say below continues to apply , but with the caveat that the expected speed of convergence of iterative methods is expected to depend on the nature of the imposed constraints , as we discuss in section [ sub : convergencefmm ] .note that the formula ( [ eq : bodymob ] ) is somewhat formal , and in practice all inverses should be replaced by pseudo - inverses .for instance , in the limit when infinitely many blobs cover the surface of a body , the mobility matrix is not invertible since making perpendicular to the surface will not yield any flow because it will try to compress the ( fictitious ) incompressible fluid inside the body .note that this nontrivial null space of the mobility poses no problem when using an iterative method to solve ( [ eq : saddle_m ] ) because the right hand side is in the proper range due to the imposition of the volume - preservation constraint ( [ eq : slip_solvability ] ) .it is also possible that the matrix is not invertible .a typical example for this is the minimally - resolved cylinder shown in the left - most panel of fig .[ fig : blobmodels ] . because all of the forces are applied exactly on the semi - axes of the cylinder, they can not exert a torque around the symmetry axes of the rod .again , there is no problem with iterative solvers for ( [ eq : saddle_m ] ) if the applied force is in the appropriate range ( e.g. , one should not apply a torque around the semi - axes of a minimally - resolved cylinder ) . for a small number of blobs , the equation ( [ eq : saddle_m ] )can be solved by direct inversion of , as done in most prior works . for large systems , which is the focus of our work ,iterative methods are required . a standard approach used in the literatureis to eliminate one of the variables or . eliminating leads to the equation which requires the action of , which must itself be obtained inside a nested iterative solver , increasing both the complexity and the cost of the method .swan and wang have recently used the conjugate gradient method to solve ( [ eq : symmetric_1 ] ) , preconditioning using the block - diagonal matrix . an alternative is to write an equivalent system to ( [ eq : saddle_m ] ) , for an arbitrary constant , \left[\begin{array}{c } \v{\lambda}\\ \v u \end{array}\right]=\left[\begin{array}{c } \slip\\ -\left(\v f+c\m{\mathcal{k}}^{t}\slip\right ) \end{array}\right],\label{eq : saddle_m_eq}\ ] ] from which we can easily eliminate to obtain an equation for only , in the form * \v{\lambda}=\mathrm{rhs},\label{eq : symmetric_2}\ ] ] * where we omit the full expression for the right hand side for brevity .the system ( [ eq : symmetric_2 ] ) can now be solved using ( preconditioned ) conjugate gradients , and only requires the inverse of the simpler matrix .note that , although not presented in this way , this is the essence of the approach that is followed and recommended by swan _ et al_. ( see appendix and note that is denoted by in that paper ) ; they recommend computing the action of by an iterative method preconditioned by an incomplete cholesky factorization .a similar approach is followed in boundary integral formulations ( which are usually formulated using a double layer density ) , where a continuum operator related to is computed and then discretized using a quadrature rule .in contrast to the approaches taken by swan _et al_. 
, we have found that numerically the best approach to solving for the unknown rigid - body motions of the particles is to solve the extended saddle - point problem ( [ eq : saddle_m ] ) for _ both _ and _ directly _ , using a preconditioned iterative krylov method .in fact , as we will demonstrate in the results section of this paper , such an approach has computational complexity that is essentially linear in the number of blobs because the number of iterations required to solve ( [ eq : saddle_m ] ) is quite modest when an appropriate preconditioner , described below , is used .this approach does not require computing ( the action of ) and leads to a very simple implementation .a krylov solver for ( [ eq : saddle_m ] ) requires two components : 1 .an efficient algorithm for performing the matrix - vector product , which in our case amounts to a fast method to multiply the dense but low - rank mobility matrix by a vector of blob forces .2 . a suitable preconditioner , which is an approximate solver for ( [ eq : saddle_m ] ) . how to efficiently compute depends very much on the boundary conditions and thus the form of the green s function used to construct . for unbounded domains , in this work we use the fast multipole method ( fmm ) developed specifically for the rpy tensor in ; alternative kernel - independent fmmscould also be used , and have also been generalized to periodic domains .the fmm method has an essentially linear computational cost of for a single matrix - vector multiplication . in the simulations presented here we use a fixed and rather tight relative tolerance for the fmm throughout the iterative solution process .krylov methods , however , allow one to _ lower _ the accuracy of the matrix - vector product as the residual is reduced ; this has recently been used to lower the cost of fmm - based boundary integral methods . we will explore such optimizations in future work . for rigid particles sedimented near a single no - slip wall, we have implemented a graphics processing unit ( gpu ) based direct summation matrix - vector product based on the rotne - prager - blake tensor derived by swan and brady .this has , asymptotically , a quadratic computational cost of ; however , the computation is trivially parallel so the multiplication is remarkably fast even for one million blobs because of the very large number of threads available on modern gpus. gimbutas _ et al_. have recently developed an fmm method for the blake tensor by using a simple image construction ( image stokeslet plus a harmonic scalar correction ) and applying an infinite - space fmm method to the extended system of singularities .however , this construction has not yet been generalized to the rotne - prager - blake tensor , and , furthermore , the fmm will not be more efficient than the direct product on gpus in practice unless a large number of blobs is considered . for fully confined domains , we will adopt an extended saddle - point formulation that will be described in section [ sec : resultsconfined ] . 
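as noted above, for a small number of blobs the saddle-point mobility problem can be solved by direct inversion. the following sketch does exactly that and checks the result against the schur-complement expressions quoted in the text; the blob mobility and kinematic matrix are random stand-ins (spd in the case of the mobility), so the example demonstrates only the linear-algebra structure and sign conventions, not a physical suspension.

```python
import numpy as np

def solve_mobility_dense(M, K, F, slip):
    """Direct dense solve of the saddle-point mobility problem
        [  M   -K ] [ lambda ]   [ slip ]
        [ -K^T   0] [   U    ] = [  -F  ],
    feasible only for modest numbers of blobs; large suspensions use the
    preconditioned iterative approach described in the text."""
    nb3, nu = K.shape
    A = np.block([[M, -K], [-K.T, np.zeros((nu, nu))]])
    sol = np.linalg.solve(A, np.concatenate([slip, -F]))
    return sol[nb3:], sol[:nb3]          # rigid-body motion U, constraint forces lambda

# consistency check against the Schur-complement expressions
# N = (K^T M^{-1} K)^{-1},  U = N F - N K^T M^{-1} slip
rng = np.random.default_rng(2)
n_blobs, n_bodies = 24, 2
A0 = rng.standard_normal((3 * n_blobs, 3 * n_blobs))
M = A0 @ A0.T + 3 * n_blobs * np.eye(3 * n_blobs)     # SPD stand-in for the blob mobility
K = rng.standard_normal((3 * n_blobs, 6 * n_bodies))  # stand-in kinematic matrix
F = rng.standard_normal(6 * n_bodies)
slip = rng.standard_normal(3 * n_blobs)

U, lam = solve_mobility_dense(M, K, F, slip)
Minv = np.linalg.inv(M)
N = np.linalg.inv(K.T @ Minv @ K)
assert np.allclose(U, N @ F - N @ K.T @ Minv @ slip)
```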
in this workwe demonstrate that a very efficient yet simple preconditioner for ( [ eq : saddle_m ] ) is obtained by neglecting hydrodynamic interactions between different bodies , that is , setting the elements of corresponding to pairs of blobs on _ distinct _ bodies to zero in the preconditioner .this amounts to making a block - diagonal approximation of the mobility defined by only keeping the diagonal blocks corresponding to a single body interacting _ only _ with the boundaries of the domain , we will demonstrate here that the _ indefinite block - diagonal _ preconditioner , ,\label{eq : indef_block_p}\ ] ] is a very effective preconditioner for solving ( [ eq : saddle_m ] ) . applying the preconditioner ( [ eq : indef_block_p ] ) amounts to solving the linear system \left[\begin{array}{c } \v{\lambda}\\ \v u \end{array}\right]=\left[\begin{array}{c } \slip\\ -\v f \end{array}\right],\label{eq : saddle_m_precon}\ ] ] which is quite easy to do since the approximate body mobility matrix ( schur complement ) , is itself a block - diagonal matrix where each block on the diagonal refers to a single body neglecting all hydrodynamic interactions with other bodies , computing requires a dense matrix inversion ( e.g. , cholesky factorization ) of the much smaller mobility matrix , whose size is , where is the number of blobs on body . in the case of an infinite domain, the factorization of can be precomputed once at the beginning of a dynamic simulation and reused during the simulation due to the rotational and translational invariance of the rpy tensor ; one only needs to apply rotation matrices to the right - hand side and the result to convert between the original reference configuration of the body and the current configuration .furthermore , particles of the same shape and size discretized with the same number of blobs as body can share a single factorization of and . in cases where depends in a nontrivial way on the position of the body , as for ( partially ) confined domains, one needs to factorize for all bodies at every time step ; this factorization can still be reused during the iterative solve in each application of the preconditioner . because our preconditioner is indefinite , one can not use the preconditioned conjugate gradient ( pcg ) krylov method to solve ( [ eq : saddle_m ] ) without modification .one of the most robust iterative methods , which we use in this work , is the generalized minimum residual method ( gmres ) .the key advantage of gmres is that it is guaranteed to reduce the residual from iteration to iteration .its main downside is that it requires storing a large number of intermediate vectors ( i.e. , the history of the iterates ) .gmres also can stall , although this can be corrected to some extent by restarts .an alternative to gmres is the ( stabilized ) bi - conjugate gradient ( bicg(stab ) ) method , which works for non - symmetric matrices as well . in our implementationwe have relied on the petsc library for iterative solvers ; this library makes it very easy to experiment with different iterative solvers .the rigid multiblob method described in section [ sec : rigidmultiblobs ] requires a technique for multiplying the blob - blob mobility matrix with a vector .therefore , this approach , like all other green s function based methods , is very geometry - specific and does not generalize easily to more complicated boundary conditions . 
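before moving on to confined geometries, the indefinite block-diagonal preconditioner described above can be illustrated with a matrix-free krylov solve. in the sketch below the saddle-point operator and the preconditioner are wrapped as scipy linear operators and passed to gmres; the per-body saddle-point blocks are factorized once and reused at every iteration. the blob mobilities and kinematic matrices are again random stand-ins, and a weak coupling between the two bodies is added only so that the preconditioner is approximate rather than exact.

```python
import numpy as np
import scipy.linalg as sla
import scipy.sparse.linalg as spla

def saddle_operator(M, K):
    """Matrix-free action of [[M, -K], [-K^T, 0]].  Here M is a dense test
    matrix; in production the product M @ lambda would come from an FMM or
    a GPU summation instead."""
    nb3, nu = K.shape
    def mv(x):
        lam, U = x[:nb3], x[nb3:]
        return np.concatenate([M @ lam - K @ U, -K.T @ lam])
    return spla.LinearOperator((nb3 + nu, nb3 + nu), matvec=mv)

def block_diag_preconditioner(M_blocks, K_blocks):
    """Indefinite block-diagonal preconditioner: blob-blob couplings between
    distinct bodies are dropped, so each body contributes an independent small
    saddle-point system, factorised once and reused at every iteration."""
    factors, blob_sizes, body_dofs = [], [], []
    for Mb, Kb in zip(M_blocks, K_blocks):
        nb3, nu = Kb.shape
        A = np.block([[Mb, -Kb], [-Kb.T, np.zeros((nu, nu))]])
        factors.append(sla.lu_factor(A))
        blob_sizes.append(nb3)
        body_dofs.append(nu)
    n_lam, n_u = sum(blob_sizes), sum(body_dofs)

    def apply(x):
        lam_rhs, u_rhs = x[:n_lam], x[n_lam:]
        out_lam, out_u = np.zeros(n_lam), np.zeros(n_u)
        il = iu = 0
        for f, nb3, nu in zip(factors, blob_sizes, body_dofs):
            sol = sla.lu_solve(f, np.concatenate([lam_rhs[il:il + nb3],
                                                  u_rhs[iu:iu + nu]]))
            out_lam[il:il + nb3] = sol[:nb3]
            out_u[iu:iu + nu] = sol[nb3:]
            il += nb3
            iu += nu
        return np.concatenate([out_lam, out_u])

    return spla.LinearOperator((n_lam + n_u, n_lam + n_u), matvec=apply)

# tiny two-body test: 12 blobs per body, random SPD per-body mobilities
rng = np.random.default_rng(3)
blocks_M, blocks_K = [], []
for _ in range(2):
    A0 = rng.standard_normal((36, 36))
    blocks_M.append(A0 @ A0.T + 36 * np.eye(36))
    blocks_K.append(rng.standard_normal((36, 6)))
M = sla.block_diag(*blocks_M)
C = 0.05 * rng.standard_normal((36, 36))   # weak symmetric inter-body coupling
M[:36, 36:], M[36:, :36] = C, C.T
K = np.zeros((72, 12))
K[:36, :6], K[36:, 6:] = blocks_K[0], blocks_K[1]
F, slip = rng.standard_normal(12), np.zeros(72)

A_op = saddle_operator(M, K)
P_op = block_diag_preconditioner(blocks_M, blocks_K)
sol, info = spla.gmres(A_op, np.concatenate([slip, -F]), M=P_op)
print("gmres converged:", info == 0, " body velocities:", sol[72:])
```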
to handle geometries for which there is no simple analytical expression for the green s function , such as slit or square channels , pre - tabulation of the green s function is necessary , and ensuring a positive semi - definite mobility matrix is in general difficult .another difficulty with green s function based methods is that including a `` background '' flow is only simple when this flow can be computed easily analytically , such as simple shear flows .but for more complicated geometries , such as poiseuille flow through a square channel , computing the base flow is itself not trivial or requires evaluating expensive infinite - series solutions .an alternative approach is to use a traditional stokes solver to solve the fluid equations numerically .this requires filling the domain with a grid , which can increase the number of degrees of freedom considerably over just discretizing the surface of the immersed bodies .however , the number of fluid degrees of freedom can be held approximately constant as more bodies are included , so that the methods typically scale very well with the number of particles and are well - suited to dense particle suspensions .previous work has shown how to use an immersed boundary ( ib ) method to obtain the action of the green s function in complex geometries . in this approach , spherical particlesare minimally resolved using only a single blob per particle . in subsequent workthis approach was extended to multiblob models , but the rigidity constraint was imposed only approximately using stiff springs , leading to numerical stiffness .a class of related minimally - resolved methods based on the force coupling method ( fcm ) can include also torques and stresslets , as well as particle activity , but a number of these methods have relied strongly on periodic boundaries since they use the fast fourier transform ( fft ) to solve the ( fluctuating ) stokes equations . in recent work , some of us have developed an ib method for rigid bodies .this method applies to a broad range of reynolds numbers . in the case of zero reynolds numberit becomes equivalent to the rigid multiblob method presented in section [ sec : rigidmultiblobs ] , but with a blob - blob mobility that is computed by the fluid solver . in ref . only rigid bodies with specified motion ( kinematics ) were considered ; here we extend the method to handle freely - moving rigid bodies in stokes flow .we will present here the key ideas and focus on the new components necessary to solve for the unknown motion of the particles ; we refer the reader interested in more technical details to refs .the key novel contribution of our work is the preconditioner described in section [ sub : preconditionining - algorithm ] ; the performance and scalability of our preconditioned iterative solvers is studied numerically in section [ sub : convergenceibamr ] .to begin , we present a semi - continuum formulation where the relation to section [ sec : rigidmultiblobs ] is most obvious , and then we discuss the fully discrete formulation used in the actual implementation . 
in appendix[ app : permeable ] we demonstrate how to handle permeable bodies using a small modification of the formulation .numerical results obtained using the method described here are given in section [ sec : resultsconfined ] .we consider here a semi - discrete model in which the rigid body has already been discretized using blobs but a continuum description is used for the fluid , that is , we consider a rigid multiblob model immersed in a continuum stokesian fluid . in the ib literature blobs are referred to as markers , and are often thought of as `` points '' or `` discrete delta functions '' .we use the term `` blob , '' however , to connect to section [ sec : rigidmultiblobs ] and to emphasize that the blobs have a finite physical and hydrodynamic extent . in the ib method ( and also the force coupling method ) , the shape of the blob and its effective interaction with the fluidis captured through a smooth kernel function that integrates to unity and whose support is localized in a region of size comparable to the blob radius . in our rigid multiblobib method , to obtain the fluid - blob interaction forces that constrain the unknown rigid motion of the blobs , we need to solve a constrained stokes problem for the fluid velocity field , the fluid pressure field , the blob constraint forces , and the unknown rigid - body motions and , note that here the velocity and pressure fields contain both the `` background '' and the `` perturbational '' contributions to the flow . in the first equation in ( [ eq : semi_continuum ] ) ,the kernel function is used to transfer ( spread ) the force exerted on the blob to the fluid , and in the third equation the same kernel is used to average the fluid velocity in the region covered by the blob and constrain it to follow the imposed rigid body motion plus additional slip or body deformation .the handling of the spreading of constraint forces and averaging of the fluid velocity near physical boundaries is discussed in appendix d in .we have implicitly assumed that appropriate boundary conditions are specified for the fluid velocity and pressure .notably , we will apply the above formulation to cases where periodic or no - slip boundary conditions are applied along the boundaries of a cubic prism ( recall that periodic boundaries are not actual physical boundaries ) .this includes , for example , a slit channel , a square channel , or a cubical container .it is also relatively straightforward to handle stress - based boundary conditions such as free - slip or pressure valves .it is not difficult to show that ( [ eq : semi_continuum ] ) is equivalent to the system ( [ eq : rigidsystem ] ) with the mobility matrix between two blobs and identified with where we recall that is the green s function for the stokes problem with unit viscosity and the specified boundary conditions .this expression can directly be compared to ( [ eq : mobilityfaxen ] ) after realizing that for a smooth velocity field , \v v\left(\v r\right)\big|_{\v r=\v r_{i}}=\left(\m i+\frac{a_{f}^{2}}{6}\grad^{2}\right)\v v\left(\v r\right)\big|_{\v r=\v r_{i}},\ ] ] where we assumed a spherical blob , .we have defined here the `` faxen '' radius of the blob through the second moment of the kernel function . 
in multipole expansion based methods , the self - mobility of a body is treated separately by solving the single - body problem exactly (this is only possible for simple particle shapes ) .however , in the type of approach followed here the self - mobility is also given by the same formula ( [ eq : greensmobility ] ) with and does not need to be treated separately .in fact , the self - mobility of a particle in an unbounded three - dimensional domain _ defines _ the effective hydrodynamic radius of a blob , where the oseen tensor is given in ( [ eq : oseentensor ] ) . in general , , but for a suitable choice of the kernel one can accomplish ( for example , for a gaussian ) and thus accurately obtain the faxen correction for a rigid sphere . for an isotropic or tensor product kernel and an unbounded domain , the pairwise blob - blob mobility ( [ eq : greensmobility ] ) will take the form , and hat denotes a unit vector .the functions of distance and depend on the specific kernel ( and in the fully discrete setting on the spatial discretization of the stokes equations ) and will be different from those appearing in the rpy tensor ( [ eq : rpytensor ] ) . nevertheless , as we will show numerically in section [ sub : transinv ] , the functions and for our ib method are quite close in form to those appearing in the rpy tensor .we note that the rpy tensor itself can be seen as a realization of ( [ eq : greensmobility ] ) with the kernel being a surface delta function over a sphere of radius .we have demonstrated above that solving ( [ eq : semi_continuum ] ) is a way to apply the blob - blob mobility for a confined domain . in the method of regularized stokeslets the mobilityis obtained _ analytically _ by averaging the analytical green s function with a kernel or envelope function specifically chosen to make the resulting integrals analytical .note however that in that method the kernel appears only once inside the integral in ( [ eq : greensmobility ] ) because only the force spreading is regularized but not the interpolation of the velocity ; this leads to non - symmetric mobility matrix inconsistent with the faxen formula ( [ eq : mobilityfaxen ] ) .by contrast , our approach is guaranteed to lead to a symmetric positive semidefinite ( spd ) mobility matrix , which is crucial when including thermal fluctuations . to obtain a fully discrete formulation of the linear system ( [ eq : semi_continuum ] ) we need to spatially discretize the stokes equations on a grid .the spatial discretization of the fluid equation used in this work uses a uniform cartesian grid with grid spacing , and is based on a second - order accurate staggered - grid finite volume ( equivalently , finite difference ) discretization , in which vector - valued quantities such as velocity , are represented on the faces of the cartesian grid cells , while scalar - valued quantities such as pressure are represented at the centers of the grid cells .the viscous terms are discretized using a standard -point laplacian ( in three dimensions ) , accounting for boundary conditions using ghost cell extrapolation . in the fully discrete formulation of the fluid - body coupling , we replace spatial integrals in the semi - continuum formulation ( [ eq : semi_continuum ] ) by sums over fluid grid points .the regularized delta function kernel is discretized using a tensor product of one - dimensional immersed boundary kernels of compact support , following peskin .to maximize translational and rotational invariance ( i.e. 
, improve grid - invariance ) we use the smooth ( three - times differentiable ) six - point kernel recently described by bao _this kernel is more expensive than the traditional four - point kernel because it increases the support of the kernel to grid points in three dimensions ; however , this cost is justified because the new six - point kernel improves the translational invariance by orders of magnitude compared to other standard ib kernel functions .the interaction between the fluid and the rigid body is mediated through two crucial operations .the discrete velocity - interpolation operator averages velocities on the staggered grid in the neighborhood of blob via where the sum is taken over faces of the grid , indexes coordinate directions ( ) as a superscript , and is the position of the center of the grid face in the direction .the discrete force - spreading operator spreads forces from the blobs to the faces of the staggered grid via where now the sum is over the blobs and is the volume of a grid cell .these operators are adjoint with respect to a suitably - defined inner product , and the discrete matrices satisfy , which ensures conservation of energy .extensions of the basic interpolation and spreading operators to account for the presence of physical boundary conditions are described in appendix d in .we note that it is possible to change the effective hydrodynamic and faxen radii of a blob by changing the kernel .such flexibility in the kernel can be accomplished without compromising the required kernel properties postulated by peskin by using shifted or _ split kernels _ , +\phi_{a}\left[q_{\alpha}-\left(r_{k}\right)_{\alpha}+\frac{s}{2}\right]\right\ } , \ ] ] where denotes a shift that parametrizes the kernel . by varying in a certain range , for example , , one can smoothly increase the support of the kernel and thus increase the hydrodynamic radius of the blob by as much as a factor of two .we do not use split kernels in this work but have found them to work as well as the unshifted kernels , while allowing increased flexibility in varying the grid spacing relative to the hydrodynamic radius of the particles . following spatial discretization, we obtain a finite - dimensional linear system of equations for the discrete velocities and pressures and the blob and body degrees of freedom . for the resistance problem, we obtain the following rigidly constrained discrete stokes system , \left[\begin{array}{c } \v v\\ \pi\\ \v{\lambda } \end{array}\right]=\left[\begin{array}{c } \v g=\v 0\\ \v h=\v 0\\ \v w=-\slip \end{array}\right],\label{eq : constrained_stokes}\ ] ] where is the discrete ( vector ) gradient operator , is the discrete ( vector ) divergence operator , and where is a discrete ( vector ) laplacian ; these finite - difference operators take into account the specified boundary conditions . 
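the interpolation and spreading operators introduced above are simple to realize once a kernel is chosen. the sketch below uses the classical four-point peskin kernel in one dimension (the paper uses the smoother six-point kernel of bao et al., and tensor products of the one-dimensional kernel in three dimensions) and checks the two properties that matter for the method: interpolation and spreading are adjoint with respect to the grid inner product, and interpolating a constant field returns that constant.

```python
import numpy as np

def peskin_4pt(r):
    """Classical 4-point Peskin kernel phi(r); illustrative stand-in for the
    smoother 6-point kernel used in the paper, with the same structure."""
    r = np.abs(r)
    out = np.zeros_like(r)
    m1 = r < 1.0
    m2 = (r >= 1.0) & (r < 2.0)
    out[m1] = (3.0 - 2.0 * r[m1] + np.sqrt(1.0 + 4.0 * r[m1] - 4.0 * r[m1]**2)) / 8.0
    out[m2] = (5.0 - 2.0 * r[m2] - np.sqrt(-7.0 + 12.0 * r[m2] - 4.0 * r[m2]**2)) / 8.0
    return out

def interpolate(v_grid, X, h):
    """J operator (1D): average the grid velocity around a blob at X."""
    x = np.arange(v_grid.size) * h
    return np.sum(v_grid * peskin_4pt((x - X) / h))

def spread(f, X, n, h):
    """S operator (1D): spread the blob force f to a grid force density."""
    x = np.arange(n) * h
    return f * peskin_4pt((x - X) / h) / h

# adjointness check:  <S f, v> h  ==  f * (J v)
n, h = 64, 0.1
rng = np.random.default_rng(4)
v = rng.standard_normal(n)
X, f = 3.217, 1.3                        # illustrative blob position and force
lhs = np.sum(spread(f, X, n, h) * v) * h
rhs = f * interpolate(v, X, h)
assert np.isclose(lhs, rhs)
# interpolating a constant field returns that constant (kernel weights sum to 1)
assert np.isclose(interpolate(np.ones(n), X, h), 1.0)
```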
for impermeable bodies , which makes the linear system ( [ eq : constrained_stokes ] ) a nested saddle - point problem in both lagrange multipliers and .as explained in appendix [ app : permeable ] , for permeable bodies is a diagonal matrix with for blob , where is the permeability of body and is a volume associated with blob .the right - hand side could include any external fluid forcing terms , slip , inhomogeneous boundary conditions , etc .the system ( [ eq : constrained_stokes ] ) can be made symmetric by excluding the volume weighting in the spreading operator ( [ eq : s_unweighted ] ) ; this makes have units of force density rather than total force .this nested saddle - point structure continues if one considers impermeable rigid bodies that are free to move , leading to the _ _ discrete mobility problem __ \left[\begin{array}{c } \v v\\ \pi\\ \v{\lambda}\\ \v u \end{array}\right]=\left[\begin{array}{c } \v g\\ \v h=\v 0\\ \v w=-\slip\\ \v z=\v f \end{array}\right].\label{eq : free_kinematics_stokes}\ ] ] after eliminating the velocity and pressure from this system , we obtain the saddle - point system ( [ eq : saddle_m ] ) with the identification of the mobility with its discrete approximation which is spd . here is a discrete stokes solution operator , where we have assumed for now that is invertible ; see for the handling of periodic systems , for which the laplacian is not invertible . unlike for green s function based methods , we never explicitly compute or form or ; rather , we solve the stokes velocity - pressure subsystems iteratively using the preconditioners described in . in this sectionwe describe how to solve the system ( [ eq : free_kinematics_stokes ] ) using an iterative solver , as we have implemented in the immersed boundary adaptive mesh refinement software framework ( ibamr ) .our codes are integrated into the public release of the ibamr library .note that the matrix - vector product is a straightforward and inexpensive application of finite - difference stencils on the fluid grid and summations over blobs .the key to an effective solver is the design of a good preconditioner , i.e. 
, a good approximate solver for ( [ eq : free_kinematics_stokes ] ) .the basic idea is to combine a preconditioner for the stokes problem with the indefinite preconditioner ( [ eq : indef_block_p ] ) with a block - diagonal approximation of the mobility constructed based on empirical fits of the blob - blob mobility , as we know explain in detail .a preconditioner for solving the resistance problem ( [ eq : constrained_stokes ] ) was developed by some of us in ; readers interested in additional details should refer to this work .the preconditioner is based on approximating the blob - blob mobility with the functional form ( [ eq : m_tilde_ij ] ) , where the functions and are obtained by fitting numerical data for the blob - blob mobility in an _ unbounded _ system ( in practice , a large periodic system ) .this involves two important approximations , the validity of which only affects the _ efficiency _ of the linear solver but does _ not _ affect the _ accuracy _ of the method since the krylov method will correct for the approximations .the first approximation comes from the fact that the true blob - blob mobility for the immersed boundary method is not perfectly translationally and rotationally invariant , so that the form ( [ eq : m_tilde_ij ] ) does not hold exactly .the second approximation is that the boundary conditions are not correctly taken into account when constructing the approximation of the mobility .this approximation is crucial to the feasibility of our method and is much more severe , but , as we will demonstrate numerically in section [ sec : resultsconfined ] , the krylov solver converges in a reasonable number of iterations , correctly incorporating the boundary conditions in the solution .the empirical fits of and are described in appendix a of , and code to evaluate the empirical fits is publicly available for a number of kernels constructed by peskin and coworkers ( three- , four- , and six - point ) at http://cims.nyu.edu/~donev/src/mobilityfunctions.c . as we show in section [ sub : transinv ] , these functions are quite similar to those appearing in the rpy tensor ( [ eq : rpytensor ] ) , and , in fact , it is possible to use the rpy functions and in the preconditioner , with a value of the effective hydrodynamic radius that depends on the choice of the kernel . nevertheless , somewhat better performance is achieved by using the empirical fits for and developed in . in , we considered general fluid - structure interaction problems over a range of reynolds numbers , and constructed as a dense matrix of size , which was then factorized using dense linear algebra .this is infeasible for suspensions of many rigid bodies . in this work ,we use the block - diagonal approximation ( [ eq : m_tilde_block_diag ] ) to the blob - blob mobility matrices , in which there is one block per rigid particle . once is constructed and its diagonal blocks factorized , the corresponding approximate body mobility matrix is easy to form , as discussed in more detail in section [ sub : preconditioner ] .note that these matrices and their factorizations need to be constructed only once at the beginning of the simulation , and can be reused throughout the simulation . 
a key component of solving the constrained stokes problems ( [ eq : constrained_stokes ] ) or ( [ eq : free_kinematics_stokes ] ) is an iterative solver for the unconstrained discrete stokes sub - problem ,
\[
\left[\begin{array}{cc}
\m{\mathcal{A}} & \m{\mathcal{G}}\\
-\m{\mathcal{D}} & 0
\end{array}\right]\left[\begin{array}{c}
\v v\\ \pi
\end{array}\right]=\left[\begin{array}{c}
\v g\\ \v h
\end{array}\right],
\]
for which a number of techniques have been developed in the finite - element context .to solve this system , we can use gmres with a preconditioner that assumes periodic boundary conditions so that the various finite - difference operators commute .specifically , the preconditioner for the stokes system that we use in this work is based on a projection preconditioner developed by griffith , where is the dimensionless pressure ( scalar ) laplacian , and and denote approximate solvers obtained by a _ single _ v - cycle of a geometric multigrid method , as performed using the _ hypre _ library in our ibamr implementation . in this paper we will primarily report the options we have found to be best without listing all of the different combinations we have tried . for completeness , we note that we have tried the better - known lower and upper triangular preconditioners for the stokes problem . while these simpler preconditioners are better when solving pure stokes problems than the projection preconditioner ( [ p_stokes ] ) since they avoid the pressure multigrid application , we have found them to perform much worse in the context of suspensions of rigid bodies .a possible explanation is that the projection preconditioner is the only one that is exact for periodic systems if exact subsolvers for the velocity and pressure subproblems are used .observe that one application of is relatively inexpensive and involves only scalar multigrid v - cycles .the number of iterations required for convergence depends strongly on the boundary conditions ; fast convergence is obtained within 10 - 20 iterations for periodic systems , but as many as a hundred gmres iterations may be required for highly confined systems .we emphasize that the performance of this preconditioner is highly dependent on the details of the staggered geometric multigrid method , which is not highly optimized in the _ hypre _ library , especially for domains of high aspect ratios such as narrow slit channels .for periodic boundary conditions , one can use ffts to solve the stokes problem , and this is likely to be more efficient than geometric multigrid especially because ffts have been highly optimized for common hardware architectures .however , such an approach would require 3 scalar ffts for _ each _ iteration of the iterative solver for the constrained stokes problem ( [ eq : constrained_stokes ] ) or ( [ eq : free_kinematics_stokes ] ) , and this will in general be substantially more expensive than using only a few cycles of geometric multigrid as an _ approximate _ stokes solver . the use of an approximate stokes solver instead of an exact one is an important difference between implementing the rigid multiblob method for periodic systems using the spectral ewald method and our approach .the product of the blob - blob mobility with a vector can be computed more accurately and faster using the spectral ewald method , in particular because one can adjust the cutoff for splitting the computation between real and fourier space arbitrarily , unlike in our method where the grid spacing is tied to the particle radius .
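to make the fft alternative mentioned above concrete , here is a minimal spectral solve of the periodic stokes problem ( incompressible flow driven by a smooth forcing on a triply periodic grid ) ; this is only an illustration of that option , not the multigrid - based path used in our ibamr implementation .

```python
import numpy as np

def periodic_stokes_velocity(f, L, eta):
    """Solve steady Stokes flow, eta*lap(v) = grad(p) - f with div(v) = 0, on a periodic box.

    f : forcing on a regular grid, shape (3, n, n, n); L : box length; eta : viscosity.
    Returns the incompressible velocity field v with the same shape.
    """
    n = f.shape[1]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero; zero mode handled below
    f_hat = np.fft.fftn(f, axes=(1, 2, 3))
    k = np.stack([kx, ky, kz])
    k_dot_f = np.sum(k * f_hat, axis=0)
    # project out the compressible part of the forcing and divide by eta*k^2
    v_hat = (f_hat - k * k_dot_f / k2) / (eta * k2)
    v_hat[:, 0, 0, 0] = 0.0                # impose zero-mean velocity (no net flow)
    return np.real(np.fft.ifftn(v_hat, axes=(1, 2, 3)))
```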
however , for rigid multiblobs , one must solve the system ( [ eq : saddle_m ] ) , which requires potentially many matrix - vector products , i.e. , many ffts in the spectral ewald approach .by contrast , in our method we solve the extended problem ( [ eq : free_kinematics_stokes ] ) , and only solve the stokes problems approximately using a few cycles of multigrid in each iteration .this will require more iterations but each iteration can be substantially cheaper than performing three ffts each krylov iteration . for non - periodic systems , there is no equivalent of the spectral ewald method , but see for some steps in this direction .our method computes the hydrodynamic interactions in a confined geometry `` on the fly '' without ever actually computing the action of the green's function exactly ; rather , it is computed only approximately and the outer krylov solver corrects for any approximations made in the preconditioner .we now have the necessary ingredients to compose a preconditioner for solving ( [ eq : free_kinematics_stokes ] ) , i.e. , to construct an approximate solver for this linear system .each application of our preconditioner involves the following steps :
1 . approximately solve the fluid sub - problem ,
\[
\left[\begin{array}{cc}
\m{\mathcal{A}} & \m{\mathcal{G}}\\
-\m{\mathcal{D}} & 0
\end{array}\right]\left[\begin{array}{c}
\tilde{\v v}\\ \tilde{\pi}
\end{array}\right]=\left[\begin{array}{c}
\v g\\ \v h
\end{array}\right],
\]
using iterations of an iterative method with the preconditioner ( [ p_stokes ] ) .
2 . interpolate to get the relative slip at each of the blobs , and rotate the corresponding component from the current frame to the reference frame of each body .
3 . approximately compute the unknown body kinematics :
3.1 . calculate and rotate the result back to the fixed frame of reference . here is a block - diagonal approximation to the blob - blob mobility matrix in the reference frame , as described in section [ sub : approximateblobmob ] ; the factorization of the blocks of is performed once at the beginning of the simulation .
3.2 . calculate and transform ( rotate ) to the body frame of reference .
3.3 . compute and transform it back to the fixed frame of reference , where .
4 . calculate the updated relative slip velocity at each of the blobs , and transform ( rotate ) it to the reference body frame .
5 . compute and transform back to the fixed frame of reference if necessary .
6 . solve the corrected fluid subproblem to obtain the fluid velocity and pressure :
\[
\left[\begin{array}{cc}
\m{\mathcal{A}} & \m{\mathcal{G}}\\
-\m{\mathcal{D}} & 0
\end{array}\right]\left[\begin{array}{c}
\v v\\ \pi
\end{array}\right]=\left[\begin{array}{c}
\v g+\m{\mathcal{S}}\v{\lambda}\\ \v h
\end{array}\right],
\]
using iterations of an iterative method with the preconditioner ( [ p_stokes ] ) .
these steps are collected in the code sketch below . a few comments are in order .the above preconditioner is not spd so the outer krylov solver should be a method such as gmres or bicgstab .we prefer to use right - preconditioned krylov solvers because in this case the residual computed by the iterative solver is the true residual ( as opposed to the preconditioned residual for left preconditioning ) , and therefore termination criteria ensure that the original system was solved to the desired target tolerance .we expect that the long - term recurrence gmres method will require a smaller number of iterations than the short - term recurrence used in bicgstab ( but note that each iteration of bicgstab requires _ two _ applications of the preconditioner ) .however , observe that gmres can require substantially more memory since it requires storing a complete history of the iterative process ( several floating - point numbers per grid cell for each stored krylov vector , which can make the memory requirements of a gmres - based solver with a large restart frequency quite high for large grid sizes ) .this can be ameliorated by restarts at a cost of slowed convergence .if the iterative solver used for the stokes solver in steps 1 and 6 is a nonlinear method ( most krylov methods are nonlinear ) , then the outer solver must be a flexible method such as fgmres .this flexibility typically increases the memory requirements of the iterative method ( for example , it exactly doubles the number of stored intermediate vectors for fgmres versus gmres ) , and so an alternative is to use a linear method such as richardson's method . note that when a preconditioned krylov method is used for the stokes subsolver , one additional application of the preconditioner is required to convert the system to preconditioned form for both left and right preconditioning , making the total number of applications of the stokes preconditioner ( [ p_stokes ] ) be per krylov iteration .by contrast , if richardson's method is used in the stokes subsolver , the number of preconditioner applications is . since in many practical cases the cost is dominated by the multigrid cycles , this difference can be important in the overall performance of the preconditioner .we will explore the performance of the preconditioner and the effect of the various choices in detail in section [ sub : convergenceibamr ] .in this section we investigate the accuracy of rigid multiblob models of spheres as a function of the number of blobs .we focus on spheres in an unbounded domain because of the availability of analytical results to compare to , and not because the rigid multiblob method is particularly good for suspensions of spheres , for which there already exist a number of well - developed multipole expansion approaches .we also investigate the performance of the preconditioner developed in sec .[ sub : preconditioner ] for solving ( [ eq : saddle_m ] ) , for suspensions of spheres in an unbounded domain ( e.g. , clusters of colloids formed in a gel ) .
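as promised above , the preconditioner application of steps 1 - 6 can be summarized schematically . the sketch below is a python outline with illustrative operator callables ( stokes_sweep , interp , spread , mtilde_solve ) standing in for the actual ibamr operators ; the sign conventions follow the constraint row of ( [ eq : free_kinematics_stokes ] ) as written above , and the body - frame rotations of steps 2 - 5 are omitted for brevity .

```python
import numpy as np

def apply_preconditioner(g, h, F, w, stokes_sweep, interp, spread, mtilde_solve, K, N):
    """Schematic single application of the preconditioner for the mobility problem.

    stokes_sweep(rhs_v, rhs_p) -> (v, pi): a few multigrid-preconditioned sweeps on the
    unconstrained Stokes sub-problem; interp/spread map grid <-> blob values;
    mtilde_solve applies the inverse of the block-diagonal blob-blob mobility
    approximation; K maps body motions to blob velocities and N = inv(K.T Mtilde^{-1} K).
    All operators are supplied by the caller; this is an illustration, not the
    IBAMR implementation.
    """
    # step 1: approximate unconstrained fluid solve
    v1, pi1 = stokes_sweep(g, h)
    # step 2: residual of the slip constraint at the blobs (-J v + K u = w)
    slip = w + interp(v1)
    # step 3: approximate body kinematics from the block-diagonal mobility
    y = mtilde_solve(slip)
    U = N @ (F + K.T @ y)
    # steps 4-5: constraint (rigidity) forces on the blobs
    lam = mtilde_solve(K @ U - slip)
    # step 6: corrected fluid solve including the spread constraint forces
    v, pi = stokes_sweep(g + spread(lam), h)
    return v, pi, lam, U
```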
for unbounded domains , we compute the product of the blob - blob mobility matrix with a vector using the fast multipole method ( fmm ) developed specifically for the rpy tensor in ; this software makes four calls to the poisson fmm implemented in the fmmlib3d library ( http://www.cims.nyu.edu/cmcl/fmm3dlib/fmm3dlib.html ) per matrix - vector product . as we will demonstrate empirically , the asymptotic cost of the rigid - multiblob method scales as , where is the total number of blobs , with a coefficient that grows only weakly with density .we note that in this paper we use relatively tight tolerances ( ) when computing the matrix - vector products and solving the linear systems in order to test the robustness of the preconditioners ; in practical applications much lower tolerances ( ) would typically be employed , potentially lowering the overall computational effort considerably over what is reported here . in this work , each sphere is discretized with blobs of hydrodynamic radius distributed on the surface of a sphere of _ geometric _ radius .we discretize the surface of a sphere as a shell of blobs constructed by a recursive procedure suggested to us by charles peskin ( private communication ) ; the same procedure is used in .we start with 12 blobs placed at the vertices of an icosahedron , which gives a uniform triangulation of a sphere by 20 triangular faces .then , we place a new blob at the center of each edge and recursively subdivide each triangle into four smaller triangles , projecting the vertices back to the surface of the sphere along the way .each subdivision approximately quadruples the number of vertices , with the -th subdivision producing a model with blobs , leading to shells with 12 , 42 , 162 or 642 blobs , see fig . 2 in for an illustration .in this section we study the optimal choice of for a given resolution ( number of blobs ) and .an important concept that will be used heavily in the rest of this paper is that of an _ effective hydrodynamic radius _ of a blob model of a sphere ( more generally , effective hydrodynamic extent ) .if we approach the rigid multiblob method from a boundary integral perspective , we would assign as the radius and treat the additional enlargement of the effective hydrodynamic radius as a numerical ( quadrature+regularization ) error .this is more or less how results are presented in the recent work of swan and wang ( see for example their fig . 8) , making the accuracy appear low even in the far field for small number of blobs per sphere .however , we instead think of a rigid multiblob as an effective _ model _ of a sphere , whose hydrodynamic response mimics that of an equivalent sphere . a similar effect appears in lattice boltzmann simulations , with being related to the lattice spacing .
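for concreteness , the recursive shell construction used above can be sketched as follows ; the implementation details ( vertex ordering , caching of edge midpoints ) are ours and serve only to reproduce the 12 , 42 , 162 , 642 blob counts .

```python
import numpy as np

def icosphere_blobs(n_sub, radius=1.0):
    """Blob positions on a sphere from recursive subdivision of an icosahedron.

    n_sub = 0, 1, 2, 3 gives 12, 42, 162, 642 blobs on a sphere of the given radius.
    """
    phi = (1.0 + np.sqrt(5.0)) / 2.0
    verts = [(-1, phi, 0), (1, phi, 0), (-1, -phi, 0), (1, -phi, 0),
             (0, -1, phi), (0, 1, phi), (0, -1, -phi), (0, 1, -phi),
             (phi, 0, -1), (phi, 0, 1), (-phi, 0, -1), (-phi, 0, 1)]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    verts = [np.array(v, dtype=float) / np.linalg.norm(v) for v in verts]
    cache = {}

    def midpoint(i, j):
        # reuse the midpoint of a shared edge so neighboring triangles agree
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = verts[i] + verts[j]
            verts.append(m / np.linalg.norm(m))   # project back onto the unit sphere
            cache[key] = len(verts) - 1
        return cache[key]

    for _ in range(n_sub):
        new_faces = []
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces
    return radius * np.array(verts)
```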
to appreciate why it is imperative to use an effective radius , observe that even a single blob acts as an approximation of a sphere with radius .similarly , one should not treat a line of blobs ( see left - most panel in fig .[ fig : blobmodels ] ) as a zero - thickness object ( line ) ; rather , such a line of rigidly - connected blobs should be considered to model a rigid cylinder with finite thickness proportional to .we compute the effective hydrodynamic radius of our blob models of spheres next .
[ sub : convergencemoments ] effective hydrodynamic radii of rigid multiblob spheres
in this section we consider an isolated rigid multiblob sphere in an unbounded domain , and compute its response to an applied force , an applied torque , and an applied linear shear flow with strain rate .each of these defines an effective hydrodynamic radius by comparing to the analytical results for a sphere ; therefore , each model of a sphere will have three distinct hydrodynamic radii .the _ translational radius _ is measured from ( see also ) where is the resulting sphere linear velocity , the _ rotational radius _ is ( see also ) where is the resulting angular velocity , and the effective _ stresslet radius _ is defined analogously from the induced stresslet . here we compute the stresslet induced on the rigid multiblob under an applied shear by setting an apparent slip on blob , and then solving the mobility problem to compute the constraint ( rigidity ) forces . the stresslet is the symmetric traceless component of the first moment of the constraint forces . in this work , we use as the effective hydrodynamic radius when comparing to theory .this is because the translational mobility is controlled by the most long - ranged hydrodynamic interactions , and therefore the far - field response of a rigid multiblob is controlled by .observe that since we only account for translation of the blobs , only is nonzero for a single blob , while and are zero .therefore , the minimal model of a sphere that allows for nontrivial rotlet and stresslets is the icosahedral model ( 12 blobs ) .since the rigid multiblob models are able to exert a stress on the fluid they can change the viscosity of a suspension , unlike the single - blob models , which do not resist shear . it is important to note that the rigid multiblob models of a sphere are _ not _ perfectly rotationally invariant , especially for low resolutions .therefore , the rigid multiblobs may exhibit a small translational velocity even in the absence of an applied force , or they may exhibit a small rotation even in the absence of an applied torque .in other words , the effective mobility matrix for a rigid multiblob model of a sphere can exhibit small off - diagonal components .similarly , there will in general be small but nonzero components of the stresslet that would be identically zero for a perfect sphere .in general , we find these spurious components to be very small even for the minimally resolved icosahedral rigid multiblob . a key parameter that we need to choose is how to relate the blob hydrodynamic radius with the typical spacing between the blobs .since our multiblob models of spheres are regular , the minimal spacing between markers is well - defined , and we expect that there will be some optimal ratio that will make the rigid multiblob represent a true rigid sphere as best as possible .
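a minimal sketch of the matching that defines these radii , assuming the classical results for an isolated rigid sphere in unbounded stokes flow ( stokes drag , rotational drag , and the sphere stresslet ) ; the measured responses would come from solving the mobility problem for the rigid multiblob .

```python
import numpy as np

def effective_radii(F, U, T, W, S, E, eta):
    """Effective hydrodynamic radii of a rigid multiblob model of a sphere.

    Obtained by matching the measured responses to the classical results for a
    rigid sphere of radius a in unbounded Stokes flow:
      U = F / (6 pi eta a)        -> translational radius a_t
      W = T / (8 pi eta a^3)      -> rotational radius a_r
      S = (20/3) pi eta a^3 * E   -> stresslet radius a_s
    F, U, T, W, S, E are magnitudes of the applied force, resulting velocity,
    applied torque, resulting angular velocity, induced stresslet component and
    applied strain rate, respectively.
    """
    a_t = F / (6.0 * np.pi * eta * U)
    a_r = (T / (8.0 * np.pi * eta * W)) ** (1.0 / 3.0)
    a_s = (3.0 * S / (20.0 * np.pi * eta * E)) ** (1.0 / 3.0)
    return a_t, a_r, a_s
```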
in a number of prior works the intuitive choice has been used , since this corresponds to the idea that the blobs act as a sphere of radius and we would like them to touch the other blobs .however , as we explained above , it is not appropriate to think of blobs as spheres with a well - defined surface , and it is therefore important to study the optimal spacing more carefully .
[ tab : stresslets_3d ] ( left ) effective translational , rotational and stresslet hydrodynamic radii for rigid multiblob models of a sphere , for two choices of the blob - blob relative spacing . ( right ) iterations to solve the mobility problem with tolerance for 4096 spheres discretized with 12 blobs , or for 512 spheres discretized with 42 blobs , arranged in a simple cubic lattice at different volume fractions , see sec . [ sub : convergencefmm ] .
we develop a rigid multiblob method for numerically solving the mobility problem for suspensions of passive and active rigid particles of complex shape in stokes flow in unconfined , partially confined , and fully confined geometries . as in a number of existing methods , we discretize rigid bodies using a collection of minimally - resolved spherical blobs constrained to move as a rigid body , to arrive at a potentially large linear system of equations for the unknown lagrange multipliers and rigid - body motions . here we develop a block - diagonal preconditioner for this linear system and show that a standard krylov solver converges in a modest number of iterations that is essentially independent of the number of particles . key to the efficiency of the method is a technique for fast computation of the product of the blob - blob mobility matrix and a vector . for unbounded suspensions , we rely on existing analytical expressions for the rotne - prager - yamakawa tensor combined with a fast multipole method ( fmm ) to obtain linear scaling in the number of particles . for suspensions sedimented against a single no - slip boundary , we use a direct summation on a graphical processing unit ( gpu ) , which gives quadratic asymptotic scaling with the number of particles . for fully confined domains , such as periodic suspensions or suspensions confined in slit and square channels , we extend a recently - developed rigid - body immersed boundary method [ `` an immersed boundary method for rigid bodies '' , b. kallemov , a. pal singh bhalla , b. e. griffith , and a. donev , communications in applied mathematics and computational science , 11 - 1 , 79 - 141 , 2016 ] to suspensions of freely - moving passive or active rigid particles at zero reynolds number . we demonstrate that the iterative solver for the coupled fluid and rigid body equations converges in a bounded number of iterations regardless of the system size . in our approach , each iteration only requires a few cycles of a geometric multigrid solver for the poisson equation , and an application of the block - diagonal preconditioner , leading to linear scaling with the number of particles . we optimize a number of parameters in the iterative solvers and apply our method to a variety of benchmark problems to carefully assess the accuracy of the rigid multiblob approach as a function of the resolution . we also model the dynamics of colloidal particles studied in recent experiments , such as passive boomerangs in a slit channel , as well as a pair of non - brownian active nanorods sedimented against a wall .
the research of citation networks has drawn increasing attention and been applied to many fields .it can help scientists find useful academic papers , help inventors find interesting patents , or help judges discover relevant past judgements .the scientific citation networks considered in this paper are directed graphs , in which nodes represent papers , while edges stand for the citation relationships between them .since new papers can only cite the published papers , these graphs are acyclic .degree distribution is a fundamental research object of citation networks , and a series of models have been proposed to illustrate it .the price model appears to be the first to discuss cumulative advantage in the context of citation networks and their in - degree distributions .the idea lies in that the rate at which a paper gets new citations should be proportional to the citations that it already has .this can lead to a power - law distribution according to the price model .a copy mechanism in which a new node attaches to a randomly selected target node as well as all its ancestors has been proposed by krapivsky et al . based on their viewpoints ,an author may be familiar with a few primary references and may simply copy the secondary references from the primary ones .this rule also leads to a power - law distribution .in addition , the cumulative advantage is also known as preferential attachment in other literature .jeong et al have measured the rate at which nodes acquire links on four kinds of real networks , and found that it depends on the node's degree .their results offer direct quantitative support for the presence of preferential attachment .moreover , an investigation has been conducted by eom et al on the microscopic mechanism for the evolution of citation networks by raising a linear preferential attachment with time - dependent initial attractiveness .the model reproduces the tails of the in - degree distributions of citation networks and the phenomenon called " burst " : the citations received by papers increase rapidly in the early years since publication .the above - mentioned models have studied the tail of the in - degree distribution only , while the two - mechanism model proposed by george et al characterizes the properties of the overall in - degree distributions .with respect to the research of networks from the real world ( e.g. citation networks ) , using random geometric graphs has become a hot topic in recent years .xie et al regard the academic influence scope as a geometric area and present an influence mechanism , which means that an existing paper will be cited by a new paper located in its influence zone .based on this mechanism , they further propose the concentric circles model ( cc model ) , which can fit the power - law tails of the in - degree distributions of citation networks with exponentially growing numbers of nodes .nevertheless , the forepart of the in - degree distribution and the out - degree distribution can not be well fitted by this model . in reality , node - increment in many current citation networks enjoys a linear growth , e.g.
cit - hepph , cit - hepth ( fig [ fig1 ] ) and pnas including articles published during 2000 - 2015 ( shown in our later study ) .therefore , a model with linearly growing node - increment is proposed .the edges in the model are still linked according to the influence mechanism , whereas they are revised in that the influence scopes of papers are determined by their topics and ages ( the time that has passed since publication ) .different from the previous models that only focus on the tails of in - degree distributions , the improved model can predict the overall in - degree distributions of the empirical data well . in consideration of the citations among different disciplines in real citation networks , a mechanism that is referred to as the interdisciplinary citation mechanism is proposed . under appropriate parameters, these mechanisms can reproduce a range of properties of citation networks , including the power - law tail of the out - degree distribution , giant component and clear community structure .meanwhile , some other properties can also be obtained like the relationship between in - degree and local clustering coefficient as well as in- and out - assortativity .these results show that our model can be used as a medium to study the intrinsic mechanism of citation networks .
[ table1 ] the first two networks are extracted from arxiv and cover papers from january 1993 to april 2003 ( 124 months ) in high energy physics theory and in high energy physics phenomenology . the last network is generated according to the generating process of the model , with the parameters listed in the table . in the header of the table , cc , ac , ac - in , ac - out , pg and mo denote the clustering coefficient , the assortative coefficient , the in - assortative coefficient , the out - assortative coefficient , the node proportion of giant component and modularity , respectively .
the structure of this paper is as follows . the model is described in section 2 .the degree distributions , clustering and assortativity are analyzed in section 3 to section 5 , and finally the conclusion is provided in the last section .
[ fig1 ] panels ( a , b ) show the trends for the papers of cit - hepth and cit - hepph . panel ( c ) shows the trend for the modeled network .they are fitted by linear functions .the coefficient of determination is used to measure the goodness of fits .
[ dia ] the influence zone of one node is bigger than the others ' . the edges in the model are linked according to the influence mechanism and the interdisciplinary citation mechanism : if a node belongs to the zone of another node , then it cites that node ; an interdisciplinary paper could cite a node even if it does not lie in that node 's influence zone .
since many journals and databases publish papers monthly or yearly and papers in the same issue can not cite each other normally , models like the price model or the copy model that publish one paper at each time step do not consider the growing trends of papers .xie et al pay attention to the citation networks in which the annual numbers of papers grow exponentially , such as the citation network collected by tang et al for papers ( which are published in the period from 1936 - 01 - 01 to 2013 - 09 - 29 ) in the dblp dataset .however , in some real citation networks ( e.g.
cit - hepph and cit - hepth ) , the monthly or annual numbers of papers published grow linearly ( fig [ fig1 ] ) .to study the evolution and features of these networks , a geometric graph model , in which the node - increment in a specific time unit experiences a linear growth , is proposed here . in our model , some spatial coordinates are given to the nodes to represent the research contents of papers ( the differences of research contents are illustrated by the geometric distances between nodes ) .besides , a simple spacetime , the ( 2 + 1)-dimensional minkowski spacetime with two spatial dimensions and one temporal dimension , is considered in this model , so that the time characteristics of the nodes in citation networks can be modeled .the nodes in the model are uniformly and randomly sprinkled onto a cluster of concentric circles ( the centers of which are on the time axis ) .in addition , the nodes on different circles are generated in different time units , while those in the same circle represent the papers published in the same issue .the number of nodes in a circle is a linearly increasing function of the circle's temporal coordinate . in the spacetime ,nodes are identified by their locations , where is the generation time of the node , is the radius of the circle born at time , and is the angular coordinate . considering that the radius and the time are in one - to - one correspondence , each node is identified by its location only with time coordinate and angular coordinate .the edges in the model are linked according to the influence mechanism and the interdisciplinary citation mechanism , which are displayed in fig [ dia ] .the influence zone of node contains node , and thus a directed edge is drawn from node to node under a given probability . as node is an interdisciplinary paper , it could cite node , even though the influence zone of node does not contain . supposing that a modeled network has papers ( ) published in the unit of time , including some interdisciplinary papers , the generating process of the model is listed as follows ( a toy implementation is sketched after this list ) .
1 . generate a new circle with radius ( ) centered at point at each time , sprinkle nodes ( papers ) on it randomly and uniformly , and fix nodes with their coordinates , e.g. node with .
2 . for each node with coordinate , the influence zone ( academic influence scope ) of the node is defined as an interval of angular coordinate with center and arc - length , where is used to tune the exponent of the power - law tail of the in - degree distribution , and is used to make the in - degree distribution of papers published in each time unit have a power - law tail .
3 . for node and node , the coordinates of which are and respectively , if node falls within the influence zone of node , a directed edge is drawn from to under a probability .
4 . select a certain percentage of nodes as interdisciplinary papers , which additionally cite a number of existing papers chosen at random , so that the reference lengths ( out - degrees ) of those papers are random variables drawn from a power - law distribution .
the function in step 2 is a staircase function of the angular coordinate : it takes constant values on the intervals [ 0 , \theta_{1 } ] , [ \theta_{1 } , \theta_{2 } ] , \ldots , [ \theta_{s-1 } , 2\pi ] , which form a specific partition of [ 0 , 2\pi ] . the aging of the papers ' influences is ignored here due to the short time span of the empirical data ( around ten years ) ( table [ table1 ] ) .
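as noted after the list , a toy realization of this generating process is sketched below . the influence arc - length used here ( a constant divided by the publication time ) is only an illustrative placeholder for the staircase function of step 2 , and all parameter values are made up for the example .

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_citation_network(T=50, a=1.0, b=5.0, p=0.4, q=0.05, gamma=2.5):
    """Toy realization of the geometric model: nodes on concentric circles, citations
    drawn by the influence mechanism plus random interdisciplinary citations.

    T: time units; a, b: linear node-increment m(t) = a*t + b; p: connection probability;
    q: fraction of interdisciplinary papers; gamma: exponent of their out-degree law.
    """
    nodes, edges = [], []                   # node = (t, theta); edge = (new, old)
    for t in range(1, T + 1):
        m = int(a * t + b)
        thetas = rng.uniform(0.0, 2.0 * np.pi, size=m)
        for th in thetas:
            i = len(nodes)
            nodes.append((t, th))
            # influence mechanism: cite older papers whose influence zone contains this node
            for j, (tj, thj) in enumerate(nodes[:i]):
                arc = 3.0 / tj              # placeholder influence arc-length
                d = np.pi - abs(np.pi - abs(th - thj) % (2.0 * np.pi))
                if d <= arc / 2.0 and rng.random() < p:
                    edges.append((i, j))
            # interdisciplinary mechanism: extra uniformly random citations
            if rng.random() < q and i > 0:
                k = min(i, int(rng.pareto(gamma - 1.0)) + 1)   # heavy-tailed citation count
                for j in rng.choice(i, size=k, replace=False):
                    edges.append((i, int(j)))
    return nodes, edges
```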
in this paper ,the model is developed to fit cit - hepth and cit - hepph ( table 1 ) .the evolutionary trends of the monthly numbers of papers in these two networks are well fitted by linear functions ( figs [ fig1]a , [ fig1]b ) . to make the modeled time span ( around ten years ) and the modeled size of nodes match with the empirical data, parameters are properly selected and listed at the end of table [ table1 ] . especially , the unit of the parameter is set as month ( fig [ fig1]c ) , while the rise rate of node - increment and the number of circles are set to be and , respectively .
[ fig3 ] fits for the foreparts of the in- and out - degree distributions , and the power - law distribution for the tails of the in - degree distributions in panels ( a , b , c ) ( fitted by the method in ref ( ) ) .the root mean squared error ( rmse ) and coefficient of determination are used to measure the goodness of fits .
the out - degree distributions of the empirical data ( table [ table1 ] ) take the form of fat tails and curves in the forepart ( figs [ fig3]e,[fig3]f ) .the curves in the forepart of the out - degree distributions can be well fitted by the generalized poisson distribution . in reality , the behavior that paper cites paper is influenced by the number of the citations and the popularity of the paper's author . at the same time, it can be viewed as a low - probability event ( the reference length of a paper is very small compared with the large number of papers ) .these settings are suitable for the use of the generalized poisson distribution .now the formulas of the forepart and tail of the out - degree distribution of the modeled network ( table [ table1 ] ) are derived to show how our model generates a similar curve and fat tail ( fig [ fig3]d ) .the edges in the model are linked according to the influence mechanism and the interdisciplinary citation mechanism .firstly , the non - interdisciplinary paper with coordinate is considered . for a prior node ,its coordinate is , where . if , node is located in the influence zone of node .when is small enough , , because is a staircase function .then the expected out - degree of node is as follows : which is an increasing function of the temporal coordinate .when is large enough , , indicating that the reference length of the paper denoted by node is approximately a constant .this is in accordance with the actual situation that the reference length of papers can not grow infinitely .since the process of sprinkling nodes follows the poisson point process , the actual out - degree of node is not exactly equal to the expected out - degree .therefore , in order to obtain the correct out - degree distribution , it is necessary to average the poisson distribution , which is the probability that node has out - degree , with the temporal density .
in this model , the out - degree distribution is therefore obtained by averaging the poisson distribution with the temporal density ; it is a mixture poisson distribution similar to that of the empirical data .moreover , the curve in the forepart of the modeled in - degree distribution can be well fitted by the generalized poisson distribution ( fig [ fig3]d ) .the interdisciplinary papers make the tail of the modeled out - degree distribution fat ( step 4 ) ( fig [ fig3]d ) .thus , in combination with the non - interdisciplinary papers , the out - degree distribution is where denotes the proportion of interdisciplinary papers , and refers to the power - law distribution defined in step 4 .the in - degree distributions of the empirical data have been investigated with the result showing that the curves in the forepart of the in - degree distributions can be well fitted by the generalized poisson distribution ( figs [ fig3]b,[fig3]c ) .actually , the citations of one paper are affected by the new papers of its authors , and the probability of one paper receiving citations ( being selected from plenty of papers ) is small and not equal to that of other papers .these are the conditions in which the generalized poisson distribution can be applied . besides ,the in - degree distributions of the empirical data have a fat tail ( figs [ fig3]b,[fig3]c ) which can be interpreted as a consequence of the cumulative advantage . in this model, this phenomenon is caused by the highly cited papers with large influence zones .now , an expression of the forepart and tail of the modeled in - degree distribution is derived to show how our model generates a similar curve and fat tail ( fig [ fig3]a ) . for the modeled paper , it can receive citations from the papers located inside or outside of its influence zone .therefore , the expected in - degree of paper with coordinate is when is small , the first item in formula ( [ eq6 ] ) is larger than the second item , so averaging the poisson distribution , the in - degree distribution in the large in - degree region is where , . the laplace approximation and stirling's approximation are used in this derivation .it can be proven that the integral term of is approximately independent of .the derivation process is as follows : when the in - degree is large enough , which is satisfied by the small ( formula ( [ eq7 ] ) ) , the integration is approximately equal to a constant . in this way ,the modeled in - degree distribution in the large- region has a power - law tail with exponent .when is large , the time derivative of the influence zone of paper is considered , which means that the influence zone of paper in this model is approximately a constant when is large .hence , it is assumed that ( d is a constant ) .the expected in - degree of paper is , and thus the in - degree distribution in the small in - degree region is . it indicates that the in - degree distribution of the modeled network in the small in - degree region is a mixture poisson distribution similar to that of the empirical data .also , the in - degree distribution of the modeled network in the small in - degree region can be well fitted by the generalized poisson distribution ( fig [ fig3]d ) .the local clustering coefficient is equal to the probability that two vertices , both neighbors of a third vertex , are neighbors of one another .also , it is found that the highly cited papers often have low local clustering coefficients in the empirical data ( figs [ fig4]b,[fig4]c ) .
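the relation between in - degree and local clustering can be measured directly on either the empirical or the modeled networks ; below is a small networkx sketch ( using the undirected skeleton and logarithmic bins , which differs slightly from the equal - interval averaging used for fig [ fig4 ] ) .

```python
import networkx as nx
import numpy as np

def clustering_vs_indegree(edges, n_bins=20):
    """Average local clustering coefficient as a function of in-degree.

    edges: iterable of (citing, cited) pairs. Clustering is computed on the
    undirected skeleton, as is common when checking a c(k) ~ 1/k decay.
    """
    G = nx.DiGraph(edges)
    cc = nx.clustering(G.to_undirected())
    kin = dict(G.in_degree())
    ks = np.array([kin[v] for v in G])
    cs = np.array([cc[v] for v in G])
    bins = np.unique(np.logspace(0, np.log10(ks.max() + 1), n_bins).astype(int))
    out = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (ks >= lo) & (ks < hi)
        if mask.any():
            out.append((lo, cs[mask].mean()))   # (bin start, mean clustering)
    return out
```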
due to the short time span of the empirical data ( only ten years ) ( table [ table1 ] ) ,the highly cited papers get lots of citations from the newly published papers with few citations . in this paper ,the highly cited papers are considered and the formula of the relation between the highly cited papers and their local clustering coefficients is derived to show how well our model fits the tail of the local clustering coefficient .
suppose is a highly cited paper and is small enough .paper and paper are newly published papers , which are the neighbors of paper . if has coordinate , a reasonable assumption is made that the overlap of the influence zones of and in circle ( is the current time ) is approximately because of the small and large .particularly , if the connection probability equals 1 , the probability that paper is the common neighbor of paper and paper is approximately equal to .thus , for the general connection probability , the conditional probability . the effect of the interdisciplinary papers is ignored here owing to the low probability that paper is an interdisciplinary paper and connects to paper and paper simultaneously .summing over the possible values of , it can be found that where denotes the number of the papers in the influence zone of paper at time .since paper is a highly cited paper , the papers citing dominate the neighbors of and the effect of papers cited by can be ignored .moreover , the expected in - degree of the highly cited paper is . by substituting it into eq ( [ eq13 ] ) , we get which is inversely proportional to the in - degree of paper .thus , the local clustering coefficient of the highly cited paper in this model is also small . to show the similarity more clearly, the range of is divided into equal small intervals and for each interval is also averaged to reduce the noise caused by random factors ( fig [ fig4 ] ) .it can be seen that the highly cited papers tend to cite the same highly cited papers in the empirical data , which means that they are in - assortative ( table [ table1 ] ) .moreover , it is intuitive that researchers are often wild about tracing back to hot topics .if the topics of papers have great research value , numerous researchers will focus on them and publish a large number of papers that will cite each other . as a result , these papers become highly cited as well . the empirical data are also out - assortative ( table [ table1 ] ) , which refers to the tendency of papers to cite other papers with similar out - degrees to themselves . actually , the researchers often put emphasis on the newly published papers that have novel contents .
our model also has these two properties . to show the performance of this model ,the formulas of the scaling relations between the in- and out - degree of a node and the mean in- and out - degree of the neighbors pointing to and pointed at by the node are derived .the relations are denoted by and , respectively . for node , the coordinate of which is , the in - degrees of all nodes pointing to it are averaged , and formula ( [ eq15 ] ) is found . in this formula , some approximations are made , and they hold for small , meaning that formula ( [ eq15 ] ) can only fit the tail of the scaling relation .therefore , substituting the in - degree of node born early into formula ( [ eq15 ] ) , we get where and are constants .
formula ( [ eq16 ] ) is an increasing function , suggesting that the model is in - assortative .in addition , when the in - degree is large enough , is approximately equal to a constant ( fig [ fig5]a ) .it is close to the actual situation that a hot topic will eventually go out of fashion .then is considered .if node is not an interdisciplinary paper , we could get . substituting the expected out - degree of node into formula ( [ eq17 ] ) , we get where is a constant . formula ( [ eq18 ] ) is an increasing function of out - degree , whereas will not satisfy the result given by formula ( [ eq18 ] ) ( fig [ fig6]a ) if the out - degree of node is large , which shows that most nodes with large out - degrees represent interdisciplinary papers ( fig [ fig3]d ) . if node is an interdisciplinary paper , the nodes that are pointed at by it are also interdisciplinary papers due to the out - assortativity of the model .so where is a constant .it indicates that the average out - degree of nodes pointed at by node fluctuates around a constant ( fig [ fig6]a ) .meanwhile , the model is out - assortative , as the number of interdisciplinary papers in the model is small .
a model of scientific citation networks with linearly growing node - increment is proposed , in which the influence mechanism and the interdisciplinary citation mechanism are involved . under appropriate parameters , the formula of the modeled network's in - degree distribution is derived , and it shows a similar behavior to the empirical data in the small in - degree region and a power - law tail in the large in - degree region .different from most previous models that just study the forepart of the out - degree distribution of the empirical data , this model also captures the fat tails .the model can also predict some other typical statistical features like clustering , in- and out - assortativity , giant component and clear community structure .for example , it vividly characterizes the academic influence power of papers by geometric zones , and interprets the power - law tails of citation networks ' in - degree distributions by the papers ' inhomogeneous influence power .therefore , it is believed that this model is a suitable geometric tool to study citation networks .however , some shortcomings still need to be overcome in future work : how to design a mechanism to characterize the citations of the interdisciplinary papers rather than randomly and uniformly select the existing papers ; and how to model the out - degree distribution better .
the authors would like to thank pengyuan zhang , zonglin xie and han zhang for helpful discussions and dan zhuge for proofreading this paper .conceived and designed the experiments : ql .performed the experiments : ql .analyzed the data : ql zx ed . contributed reagents / materials / analysis tools : ql zx ed . wrote the paper : ql zx ed jl .
due to the fact that the numbers of annually published papers have witnessed a linear growth in some citation networks , a geometric model is thus proposed to predict some statistical features of those networks , in which the academic influence scopes of the papers are denoted through specific geometric areas related to time and space . in the model , nodes ( papers ) are uniformly and randomly sprinkled onto a cluster of circles of the minkowski space whose centers are on the time axis . edges ( citations ) are linked according to an influence mechanism which indicates that an existing paper will be cited by a new paper located in its influence zone . considering the citations among papers in different disciplines , an interdisciplinary citation mechanism is added to the model in which some papers with a small probability of being chosen will cite some existing papers randomly and uniformly . different from most existing models that only study the power - law tail of the in - degree distribution , this model also characterizes the overall in - degree distribution . moreover , it presents the description of some other important statistical characteristics of real networks , such as in- and out - assortativity , giant component and clear community structure . therefore , it is reasonable to believe that a good example is provided in the paper to study real networks by geometric graphs .
uplink co - ordinated multi - point ( comp ) is a promising technique for increasing the capacity of 4 g networks .the uplink is gaining increasing attention due to the dramatic increase of user - generated data in the form of photos , videos and file - sharing . in practice ,sharing of uplink received signals across cells is limited by backhaul bandwidth .in addition , the receiver _ aperture _ , or number of signals from antennas at neighboring cell sites that can be processed at a particular cell , may be limited due to hardware constraints .several approaches for signal sharing and combining have been proposed for uplink comp .a performance analysis of different combining methods with different backhaul bandwidth requirements is presented in . to reduce the amount of sharing , dynamic clustering of cells has been proposed , e.g. , in .in addition to limiting the amount of information that can be shared across cells , the limited backhaul bandwidth also introduces latency , which is addressed in .previous work on comp has generally assumed that the set of cells that share information is fixed _ a priori _ by the topology and the bandwidth of the backhaul links . in this paper we relax this assumption and consider the problem of _ optimizing _ _ sets _ of _ helper _ cells that pass along their uplink signals to other cells .( a cell can both share its signals as a helper cell and receive signals , or help , from other cells . )we account for architectural constraints that limit the set of potential helper cells ( which differs from cell to cell ) , backhaul constraints that limit the number of cells to which a helper cell can send its received signals ( _ egress _ constraint ) , and hardware constraints at the cell site , which may limit the number of incoming signals the cell can combine ( _ ingress _ constraint ) .related work in has considered the problem of mmse receiver estimation under compression and backhaul constraints .we focus here instead on a simpler sharing formulation that introduces explicit constraints on egress bandwidth , to arrive at a convex weighted sum rate formulation with guaranteed convergence . imposing these explicit egress bandwidth and ingress aperture constraints captures important architectural limitations in centralized or distributed co - operative networks .our problem is then to maximize a sum rate objective subject to these architectural constraints .since the cell sites are assumed to have multiple antennas , we refer to this problem as _ multi - antenna aperture selection ( maas ) _ for joint reception ( jr)-comp ( see also ) .the optimization of sets of helper cells is an integer program .
assuming max - ratio combining of received signals and relaxing the integer constraints, the optimization problem becomes convex .we present a distributed algorithm in which each helper cell announces an _ egress price _ , indicating the demand for its signals to help other cells , and each assisted cell computes an _ ingress price _ , indicating the potential improvement from adding a helper cell .the prices are used to compute an ordering of users / cells , which is then used to allocate the helper antennas across assisted cells .this assignment is iterated with updates for the prices .we refer to this algorithm as _ liquidmaas _ due to its ability to flexibly allocate help based on network load conditions , and show that it converges to the optimal allocation of helper cells across the network .numerical results are presented that show that the algorithm converges in relatively few iterations even for a system with a large number of cells .furthermore , the gains relative to an _ a priori _ fixed allocation of helper cells can be substantial .consider a network of cells , each with multiple antennas .each cell serves a set of users .the uplink signal from each user is typically strongest at its serving cell ; however , the signal could also be received with significant strength at other cells depending on the user location and network topology .we assume the antenna - combined signal for a particular user can be shared with other cells , and is , of course , provided to that user's serving cell for comp combining .figure ( [ cell - topology ] ) shows an example of a network with uplink data sharing .there are four cells with seven users , where , , , and .let denote the set of cells with data that can be requested by the cell serving user ( _ ingress _ neighborhood ) .that typically corresponds to the set of cells where the user's sinr is above a minimum threshold value ( usually -10 db ) .that is , , where is the sinr at cell for a user in cell ( i.e. , ) , and is the serving cell for user .note that may not be the actual set of helper cells for user due to backhaul and aperture constraints . also shown in figure ( [ cell - topology ] ) are the possible sharing variables along with the inter - connect .note that cells 1 and 3 , and cells 2 and 4 do not exchange information .the sharing variable indicates whether or not cell shares data for user in cell over the backhaul link .for the example shown in figure ( [ cell - topology ] ) , the local neighborhood sets for the different users are . the maximum receiver aperture size , or number of helper cells , for user is denoted as , and is the number of cells in the neighborhood . for a cell , its _ egress _ neighborhood to cell is denoted by , and is the set of users in cell it can potentially help , ignoring backhaul and aperture constraints .that is : for the example in figure ( [ cell - topology ] ) , .the egress neighborhoods are : note that if there is no possibility of sharing between two cells , then the corresponding egress set is null ( i.e. , empty ) .
the notation is summarized in table [ table ] ( variable definitions ) . users were allocated bandwidth equally based on the number of users connected to a given cell .a user's sinr to all cells in the network was calculated based on this bandwidth allocation , and open - loop fractional power control was assumed .the update for and used a step - size of .four algorithms were evaluated : _ ( 1 ) no comp , ( 2 ) maas without egress constraints , ( 3 ) maas with a randomized egress bandwidth control mechanism , and ( 4 ) liquidmaas_. in case _( 2 ) _ , each cell requests help based on its aperture limit , and helper cells grant any requested help . in _ ( 3 ) _ , helper cells randomly grant requests for help until their egress constraint limit is reached .the ingress aperture constraint limit was set to , so that each user gets data from a maximum of helper cells .the egress bandwidth limit was varied to see the effect on performance and convergence .the top part of figure ( [ fig2 ] ) shows the convergence behavior of all cells ' egress bandwidth demand ( for ) , from which we observe that the algorithm converges in iterations .the convergence time is dependent on the step - size for updating the prices .it was observed that if the aperture and egress limits are similar , the step - size should be reduced to enable smoother convergence .the bottom part of figure ( [ fig2 ] ) shows the distribution of weighted sum rate ( wsr ) gain obtained by the various comp approaches .this shows that liquidmaas gives substantial gain over no comp and also over the randomized egress bandwidth control strategy .figure ( [ fig3 ] ) shows the distribution of the egress bandwidth and ingress apertures at convergence .we observe that the liquidmaas algorithm maintains tight control over the egress bandwidth , and adapts the ingress aperture to ensure that the constraints are met .finally , figure ( [ fig4 ] ) shows the average wsr gain ( over no comp ) as a function of the egress limit ( ) .liquidmaas gives significant gains across the range of egress limits compared to the randomized strategy , and converges to the unconstrained maas solution for larger values of .
in this paper , we presented an approach for joint algorithm and architecture optimization for co - operative communication networks .we considered the problem of uplink joint reception comp with backhaul bandwidth constraints , and showed that the resulting network utility maximization problem is convex . a distributed algorithm to solve this problem was presented , which consisted of local nodes computing their desired helper requests , followed by helper cells computing and updating an egress price for their bandwidth . at convergence ,these iterations give a set of connections between helper and recipient cells , such that egress and ingress constraints are met at all cells , and overall network utility is maximized .simulation results for a 57-cell system were presented to illustrate the efficacy of the approach , and show that the distributed dual decomposition - based gradient algorithm converges within tens of iterations .the authors acknowledge useful discussions with r. agrawal , m. r. raghavendra , p. rasky , and c. schmidt .
huang , `` performance evaluation of intra - site coordinated multi - point transmission with inter - cell phase information in 3gpp lte - a , '' _ proc .ieee wireless personal multimedia communications ( wpmc ) _ , pp .241 - 245 , 2012 .a. papadogiannis , d. gesbert , and e. hardouin , `` a dynamic clustering approach in wireless networks with multi - cell cooperative processing , '' _ proc .ieee international conference on communications , ( icc ) _ , pp .4033 - 4037 , may 2008 .y. zhou and w. yu , optimized beamforming and backhaul compression for uplink mimo cloud radio access networks , " _ ieee journal on selected areas in communications , special issue on 5 g wireless communication systems , _ vol .32 , no . 6 , pp.1295 - 1307 , june 2014 .k. zeineddine , m. l. honig , s. nagaraj and p. j. fleming , `` antenna selection for uplink comp in dense small - cell clusters , '' _ proc .ieee signal processing advances in wireless communications ( spawc ) _ , pp . 81 - 85 , 2013
we address the problem of uplink co-operative reception with constraints on both backhaul bandwidth and the receiver aperture, i.e., the number of antenna signals that can be processed. the problem is cast as a network utility (weighted sum rate) maximization subject to computational complexity and architectural bandwidth-sharing constraints. we show that a relaxed version of the problem is convex and can be solved via dual decomposition. the proposed solution is distributed in that each cell broadcasts a set of _ demand prices _ based on the data-sharing requests it receives. given the demand prices, the algorithm determines an antenna/cell ordering and an antenna selection for each scheduled user in a cell. this algorithm, referred to as _ liquidmaas _, iterates between the preceding two steps. simulations of realistic network scenarios show that the algorithm exhibits fast convergence even for systems with a large number of cells.
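To make the price-based iteration described above more concrete, the following minimal Python sketch shows one way a dual-decomposition helper-allocation loop of this general kind can be organized: assisted cells greedily request the helpers with the largest price-adjusted gain subject to an aperture limit, while helper cells raise or lower an egress price according to the demand they observe. The gain model, the subgradient step, and all names are illustrative assumptions, not the exact liquidmaas update.

```python
import numpy as np

def allocate_helpers(gain, capacity, max_aperture, n_iter=200, step=0.05):
    """Price-based helper-cell allocation via dual decomposition (illustrative).

    gain[u, m]   : hypothetical marginal utility (e.g. weighted-rate gain) of
                   letting helper cell m forward user u's antenna-combined signal
    capacity[m]  : egress budget of helper cell m (max. number of users it helps)
    max_aperture : ingress limit (max. number of helpers per user)
    Returns a 0/1 assignment matrix of the same shape as `gain`.
    """
    n_users, n_helpers = gain.shape
    price = np.zeros(n_helpers)              # egress prices, one per helper cell

    for _ in range(n_iter):
        # Primal step: each user picks the helpers whose price-adjusted gain
        # is positive, keeping at most `max_aperture` of them.
        assign = np.zeros_like(gain, dtype=int)
        net = gain - price                   # price-adjusted gains seen by users
        for u in range(n_users):
            order = np.argsort(-net[u])      # best helpers first
            chosen = [m for m in order[:max_aperture] if net[u, m] > 0]
            assign[u, chosen] = 1

        # Dual step: raise the price of over-demanded helpers, relax the others.
        demand = assign.sum(axis=0)
        price = np.maximum(0.0, price + step * (demand - capacity))

    return assign

# Toy example: 6 users, 3 potential helper cells.
rng = np.random.default_rng(0)
gain = rng.uniform(0.0, 1.0, size=(6, 3))
print(allocate_helpers(gain, capacity=np.array([2, 2, 2]), max_aperture=2))
```

Since the prices only enforce the egress budgets in the limit, a final feasibility pass over the resulting assignment may be needed at any finite iteration count.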
predictive analysis of networked data is a fast - growing research area whose application domains include document networks , online social networks , and biological networks . in this workwe view networked data as weighted graphs , and focus on the task of node classification in the transductive setting , i.e. , when the unlabeled graph is available beforehand .standard transductive classification methods , such as label propagation , work by optimizing a cost or energy function defined on the graph , which includes the training information as labels assigned to training nodes .although these methods perform well in practice , they are often computationally expensive , and have performance guarantees that require statistical assumptions on the selection of the training nodes .a general approach to sidestep the above computational issues is to sparsify the graph to the largest possible extent , while retaining much of its spectral properties see , e.g. , .inspired by , this paper reduces the problem of node classification from graphs to trees by extracting suitable _ spanning trees _ of the graph , which can be done quickly in many cases .the advantage of performing this reduction is that node prediction is much easier on trees than on graphs .this fact has recently led to the design of very scalable algorithms with nearly optimal performance guarantees in the online transductive model , which comes with no statistical assumptions . yet, the current results in node classification on trees are not satisfactory .the treeopt strategy of is optimal to within constant factors , but only on _ unweighted _ trees .no equivalent optimality results are available for general weighted trees . to the best of our knowledge , the only other comparable result is wta by , which is optimal ( within log factors ) only on weighted lines .in fact , wta can still be applied to weighted trees by exploiting an idea contained in .this is based on linearizing the tree via a depth - first visit . since linearization loses most of the structural information of the tree , this approach yields suboptimal mistake bounds .this theoretical drawback is also confirmed by empirical performance : throwing away the tree structure negatively affects the practical behavior of the algorithm on real - world weighted graphs .the importance of weighted graphs , as opposed to unweighted ones , is suggested by many practical scenarios where the nodes carry more information than just labels , e.g. , vectors of feature values . a natural way of leveraging this side information is to set the weight on the edge linking two nodes to be some function of the similariy between the vectors associated with these nodes . 
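As an illustration of this weighting scheme, the short sketch below builds a k-nearest-neighbour graph whose edge weights decrease with the Euclidean distance between node feature vectors. The Gaussian kernel and the bandwidth heuristic are common choices assumed here for concreteness, not a prescription taken from this paper.

```python
import numpy as np

def knn_similarity_graph(features, k=10, sigma2=None):
    """Build a weighted k-NN graph from node feature vectors (illustrative).

    Edge weights follow a common Gaussian-kernel choice,
        w_ij = exp(-||x_i - x_j||^2 / sigma2),
    one example of a decreasing function of the feature distance.
    Returns a dict mapping each edge (i, j), i < j, to its weight.
    """
    x = np.asarray(features, dtype=float)
    n = x.shape[0]
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)   # squared distances
    if sigma2 is None:
        # heuristic scale: mean squared distance to the k nearest neighbours
        sigma2 = np.mean(np.sort(d2, axis=1)[:, 1:k + 1])
    edges = {}
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:                   # skip self
            a, b = min(i, j), max(i, j)
            edges[(a, b)] = np.exp(-d2[a, b] / sigma2)
    return edges

# Example: 100 random 5-dimensional feature vectors.
graph = knn_similarity_graph(np.random.default_rng(1).normal(size=(100, 5)), k=6)
```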
in this work ,we bridge the gap between the weighted and unweighted cases by proposing a new prediction strategy , called shazoo , achieving a mistake bound that depends on the detailed structure of the weighted tree .we carry out the analysis using a notion of learning bias different from the one used in and more appropriate for weighted graphs .more precisely , we measure the regularity of the unknown node labeling via the weighted cutsize induced by the labeling on the tree ( see section [ s : lower ] for a precise definition ) .this replaces the unweighted cutsize that was used in the analysis of wta .when the weighted cutsize is used , a cut edge violates this inductive bias in proportion to its weight .this modified bias does not prevent a fair comparison between the old algorithms and the new one : shazoo specializes to treeopt in the unweighted case , and to wta when the input tree is a weighted line . by specializing shazoo s analysis to the unweighted casewe recover treeopt s optimal mistake bound .when the input tree is a weighted line , we recover wta s mistake bound expressed through the weighted cutsize instead of the unweighted one .the effectiveness of shazoo on any tree is guaranteed by a corresponding lower bound ( see section [ s : lower ] ) .shazoo can be viewed as a common nontrivial generalization of both treeopt and wta . obtaining this generalization while retaining and extending the optimality properties of the two algorithmsis far from being trivial from a conceptual and technical standpoint . since shazoo works in the online transductive model, it can easily be applied to the more standard train / test ( or `` batch '' ) transductive setting : one simply runs the algorithm on an arbitrary permutation of the training nodes , and obtains a predictive model for all test nodes .however , the implementation might take advantage of knowing the set of training nodes beforehand .for this reason , we present two implementations of shazoo , one for the online and one for the batch setting. both implementations result in fast algorithms .in particular , the batch one is linear in .this is achieved by a fast algorithm for weighted cut minimization on trees , a procedure which lies at the heart of shazoo . finally , we test shazoo against wta , label propagation , and other competitors on real - world weighted graphs . in _almost all _ cases ( as expected ) , we report improvements over wta due to the better sensitivity to the graph structure . in some cases , we see that shazoo even outperforms standard label propagation methods .recall that label propagation has a running time per prediction which is proportional to , where is the graph edge set . on the contrary , shazoo can typically be run in _ constant _amortized time per prediction by using wilson s algorithm for sampling random spanning trees . 
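For reference, here is a minimal sketch of Wilson's loop-erased random-walk sampler in its unweighted form, the variant discussed next; the adjacency-list representation and the toy graph are illustrative.

```python
import random

def wilson_spanning_tree(adj, seed=None):
    """Sample a uniform random spanning tree with Wilson's loop-erased
    random-walk algorithm (unweighted variant; illustrative sketch).

    adj : dict mapping each node to the list of its neighbours
    Returns the tree as a set of frozenset edges.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    in_tree = {nodes[0]}                      # start the tree at an arbitrary root
    tree_edges = set()

    for start in nodes[1:]:
        if start in in_tree:
            continue
        # Random walk from `start` until the current tree is hit, remembering
        # only the last exit taken from each visited node: overwriting the
        # exit pointer implicitly erases loops.
        next_hop = {}
        u = start
        while u not in in_tree:
            v = rng.choice(adj[u])
            next_hop[u] = v
            u = v
        # Retrace the loop-erased path and attach it to the tree.
        u = start
        while u not in in_tree:
            v = next_hop[u]
            tree_edges.add(frozenset((u, v)))
            in_tree.add(u)
            u = v
    return tree_edges

# Example: a 4-cycle with one chord.
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
print(wilson_spanning_tree(adj, seed=7))
```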
by disregarding edge weights in the initial sampling phase, this algorithm is able to draw a random (unweighted) spanning tree in time proportional to on most graphs. our experiments reveal that using the edge weights only in the subsequent prediction phase causes in practice only a minor performance degradation. let be an undirected and weighted tree with nodes, positive edge weights for, and for. a binary labeling of is any assignment of binary labels to its nodes. we use to denote the resulting labeled weighted tree. the online learning protocol for predicting is defined as follows. the learner is given while is kept hidden. the nodes of are presented to the learner one by one, according to an unknown and arbitrary permutation of. at each time step node is presented and the learner must issue a prediction for the label. then is revealed and the learner knows whether a mistake occurred. the learner's goal is to minimize the total number of prediction mistakes. following previous works, we measure the regularity of a labeling of in terms of -edges, where a -edge for is any such that. the overall amount of irregularity in a labeled tree is the * weighted cutsize *, where is the subset of -edges in the tree. we use the weighted cutsize as our learning bias, that is, we want to design algorithms whose predictive performance scales with. unlike the -edge count, which is a good measure of regularity for unweighted graphs, the weighted cutsize takes the edge weight into account when measuring the irregularity of a -edge. the weight typically encodes the strength of the connection: when the nodes of a graph host more information than just binary labels, e.g., a vector of feature values, a reasonable choice is to set it to be some (decreasing) function of the distance between the feature vectors sitting at the two nodes and (see also remark [ r:2 ]). in the sequel, when we measure the distance between any pair of nodes and on the input tree we always use the resistance distance metric, that is, , where is the unique path connecting to. graph cluster: maximal connected subgraph of (or) uniformly labeled. "closest": the default metric distance is the resistance distance metric on the tree. hinge-node: labeled node or fork (as in treeopt). hinge-line: path connecting two hinge nodes such that no internal node is a hinge-node (as in treeopt). hinge-tree: each component of the forest obtained by removing all edges incident to the hinge nodes (almost as in sel-pred; in sel-pred the forest was obtained by also removing the hinge nodes; defining things this way simplifies matters for this paper). : hinge-tree containing. a connection node of a hinge-tree is any hinge node adjacent to a node of (as in sel-pred). fork: unlabeled node connected by (at least) three edge-disjoint paths to (at least) three distinct labeled nodes (as in sel-pred). connection fork: connection node which is a fork. = path connecting nodes and. -free edge. : predicted label of. in this section we show that the weighted cutsize can be used as a lower bound on the number of online mistakes made by any algorithm on any tree. in order to do so (and unlike previous papers on this specific subject see, e.g.
, ) ,we need to introduce a more refined notion of adversarial budget " .given , let be the maximum number of edges of such that the sum of their weights does not exceed , we have the following simple lower bound ( all proofs are omitted from this extended abstract ) .[ th : lb ] for any weighted tree there exists a randomized label assignment to such that any algorithm can be forced to make at least online mistakes in expectation , while .specializing ( * ? ? ?* theorem 1 ) to trees gives the lower bound under the constraint .the main difference between the two bounds is the measure of label regularity being used : whereas theorem [ th : lb ] uses , which depends on the weights , ( * ? ? ?* theorem 1 ) uses the weight - independent quantity .this dependence of the lower bound on the edge weights is consistent with our learning bias , stating that a heavy -edge violates the bias more than a light one .since is nondecreasing , the lower bound implies a number of mistakes of at least .note that for any labeled tree .hence , whereas a constraint on implies forcing at least mistakes , a constraint on allows the adversary to force a potentially larger number of mistakes . in the next sectionwe describe an algorithm whose mistake bound nearly matches the above lower bound on any weighted tree when using as the measure of label regularity .in this section we introduce the shazoo algorithm , and relate it to previously proposed methods for online prediction on unweighted trees ( treeopt from ) and weighted line graphs ( wta from ) .in fact , shazoo is optimal on any weighted tree , and reduces to treeopt on unweighted trees and to wta on weighted line graphs .since treeopt and wta are optimal on _ any _ unweighted tree and _ any _ weighted line graph , respectively , shazoo necessarily contains elements of both of these algorithms . in order to understand our algorithm, we now define some relevant structures of the input tree .see figure [ fig : hinge - trees_shazoo ] ( left ) for an example .these structures evolve over time according to the set of observed labels .first , we call * revealed * a node whose label has already been observed by the online learner ; otherwise , a node is * unrevealed*. a * fork * is any unrevealed node connected to at least three different revealed nodes by edge - disjoint paths . a * hinge node *is either a revealed node or a fork .a * hinge tree * is any component of the forest obtained by removing from all _ edges _ incident to hinge nodes ; hence any fork or labeled node forms a -node hinge tree . when a hinge tree contains only one hinge node , a * connection node * for is the node contained in . in all other cases ,we call a connection node for any node outside which is adjacent to a node in . a * connection fork * is a connection node which is also a fork .finally , a * hinge line * is any path connecting two hinge nodes such that no internal node is a hinge node .* left : * an input tree .revealed nodes are dark grey , forks are doubly circled , and hinge lines have thick black edges .the hinge trees not containing hinge nodes ( i.e. , the ones that are not singletons ) are enclosed by dotted lines . the dotted arrows point to the connection node(s ) of such hinge trees . *middle : * the predictions of shazoo on the nodes of a hinge tree .the numbers on the edges denote edge weights . 
at a given time , shazoo uses the value of on the two hinge nodes ( the doubly circled ones , which are also forks in this case ) , and is required to issue a prediction on node ( the black node in this figure ) .since is between a positive hinge node and a negative hinge node , shazoo goes with the one which is closer in resistance distance , hence predicting . *right : * a simple example where the mincut prediction strategy does not work well in the weighted case . in this example, mincut mispredicts all labels , yet , and the ratio of to the total weight of all edges is about .the labels to be predicted are presented according to the numbers on the left of each node .edge weights are also displayed , where is a very small constant . ]given an unrevealed node and a label value , the * cut function * is the value of the minimum weighted cutsize of over all labelings consistent with the labels seen so far and such that . define if is unrevealed , and , otherwise .the algorithm s pseudocode is given in algorithm [ alg : shazoo ] . at time , in order to predict the label of node , shazoo calculates for all connection nodes of , where is the hinge tree containing .then the algorithm predicts using the label of the connection node of which is closest to and such that ( recall from section [ sec : prel ] that all distances / lengths are measured using the resistance metric ) .ties are broken arbitrarily .if for all connection nodes in then shazoo predicts a default value ( in the pseudocode ) .* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * let be the set of the connection nodes of for which let be the node of closest to set set [ alg : shazoo ] * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * if is a fork ( which is also a hinge node ) , then . in this case, is a connection node of , and obviously the one closest to itself .hence , in this case shazoo predicts simply by .see figure [ fig : hinge - trees_shazoo ] ( middle ) for an example .on unweighted trees , computing for a connection node reduces to the fork label estimation procedure in ( * ? ? ?* lemma 13 ) . on the other hand ,predicting with the label of the connection node closest to in resistance distance is reminiscent of the nearest - neighbor prediction of wta on weighted line graphs .in fact , as in wta , this enables to take advantage of labelings whose -edges are light weighted .an important limitation of wta is that this algorithm linearizes the input tree .on the one hand , this greatly simplifies the analysis of nearest - neighbor prediction ; on the other hand , this prevents exploiting the structure of , thereby causing logaritmic slacks in the upper bound of wta .the treeopt algorithm , instead , performs better when the unweighted input tree is very different from a line graph ( more precisely , when the input tree can not be decomposed into long edge - disjoint paths , e.g. , a star graph ) .indeed , treeopt s upper bound does not suffer from logaritmic slacks , and is tight up to constant factors on any unweighted tree .similar to treeopt , shazoo does not linearize the input tree and extends to the weighted case treeopt s superior performance , also confirmed by the experimental comparison reported in section [ s : exp ] . 
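The prediction rule just described can be summarized in a few lines of code. In the sketch below, the cut values are assumed to be available for every connection node of the hinge tree containing the query node, and the sign convention for the cut difference is an illustrative choice rather than the paper's exact notation.

```python
def shazoo_predict(delta, dist, default=+1):
    """Sketch of shazoo's prediction step for one query node.

    delta[c] : cut(c, -1) - cut(c, +1) for every connection node c of the
               hinge tree containing the query node (convention chosen here:
               positive means +1 is the cheaper label; for a revealed
               connection node, delta is simply its label).
    dist[c]  : resistance distance from the query node to c.
    Predicts with the label of the closest connection node whose delta is
    nonzero; falls back to `default` if every delta is zero.
    """
    candidates = [c for c in delta if delta[c] != 0]
    if not candidates:
        return default
    closest = min(candidates, key=lambda c: dist[c])
    return +1 if delta[closest] > 0 else -1

# Toy hinge tree with two connection nodes at resistance distances 0.5 and 2.0.
print(shazoo_predict(delta={'f1': -3.0, 'f2': 1.0},
                     dist={'f1': 0.5, 'f2': 2.0}))   # -> -1
```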
in figure[ fig : hinge - trees_shazoo ] ( right ) we show an example that highlights the importance of using the function to compute the fork labels .since predicts a fork with the label that minimizes the weighted cutsize of consistent with the revealed labels , one may wonder whether computing through mincut based on the number of -edges ( rather than their weighted sum ) could be an effective prediction strategy .figure [ fig : hinge - trees_shazoo ] ( right ) illustrates an example of a simple tree where such a mispredicts the labels of all nodes , when both and are small . [ r:1 ] we would like to stress that shazoo can also be used to predict the nodes of an arbitrary _ graph _ by first drawing a random spanning tree of the graph , and then predicting optimally on see , e.g. , .the resulting mistake bound is simply the expected value of shazoo s mistake bound over the random draw of . by using a fast spanning tree sampler , the involved computational overhead amountsto constant amortized time per node prediction on `` most '' graphs .[ r:2 ] in certain real - world input graphs , the presence of an edge linking two nodes may also carry information about the extent to which the two nodes are _ dissimilar _ , rather than similar .this information can be encoded by the sign of the weight , and the resulting network is called a _ signedgraph_. the regularity measure is naturally extended to signed graphs by counting the weight of _ frustrated edges _ ( e.g., ) , where is frustrated if .many of the existing algorithms for node classification can in principle be run on signed graphs .however , the computational cost may not always be preserved .for example ,mincut is in general np - hard when the graph is signed .since our algorithm sparsifies the graph using trees , it can be run efficiently even in the signed case .we just need to re - define the function as , where is the minimum total weight of frustrated edges consistent with the labels seen so far . the argument contained in section [ s : impl ] for the positive edge weights ( see , e.g. , eq .( [ eq : cut ] ) therein ) allows us to show that also this version of can be computed efficiently .the prediction rule has to be re - defined as well : we count the parity of the number of negative - weighted edges along the path connecting to the closest node , i.e. , . in the authors note that treeopt approximates a version space ( halving ) algorithm on the set of tree labelings .interestingly , shazoo is also an approximation to a more general halving algorithm for weighted trees .this generalized halving gives a weight to each labeling consistent with the labels seen so far and with the sign of for each fork .these weighted labelings , which depend on the weights of the -edges generated by each labeling , are used for computing the predictions .one can show ( details omitted due to space limitations ) that this generalized halving algorithm has a mistake bound within a constant factor of shazoo s .we now show that shazoo is nearly optimal on every weighted tree .we obtain an upper bound in terms of and the structure of , nearly matching the lower bound of theorem [ th : lb ] .we now give some auxiliary notation that is strictly needed for stating the mistake bound . given a labeled tree , a * cluster * is any maximal subtree whose nodes have the same label .an * in - cluster line graph * is any line graph that is entirely contained in a single cluster .finally , given a line graph , we set , i.e. 
, the ( resistance ) distance between its terminal nodes .[ th : ub ] for any labeled and weighted tree , there exists a set of edge - disjoint in - cluster line graphs such that the number of mistakes made by shazoo is at most of the order of the above mistake bound depends on the tree structure through .the sum contains terms , each one being at most logarithmic in the scale - free products .the bound is governed by the same key quantity occurring in the lower bound of theorem [ th : lb ] .however , theorem [ th : ub ] also shows that shazoo can take advantage of trees that can not be covered by long line graphs .for example , if the input tree is a weighted line graph , then it is likely to contain long in - cluster lines .hence , the factor multiplying may be of the order of .if , instead , has constant diameter ( e.g. , a star graph ) , then the in - cluster lines can only contain a constant number of nodes , and the number of mistakes can never exceed . this is a log factor improvement over wta which , by its very nature , can not exploit the structure of the tree it operates on . ) and lower ( theorem [ th : lb ] ) bounds exists due to the extra factors depending on .one way to get around this is to follow the analysis of wta in .specifically , we can adapt here the more general analysis from that paper ( see lemma 2 therein ) that allows us to drop , for any integer , the resistance contribution of arbitrary non- edges of the line graphs in ( thereby reducing for any containing any of these edges ) at the cost of increasing the mistake bound by .the details will be given in the full version of this paper . ] as for the implementation , we start by describing a method for calculating for any unlabeled node and label value .let be the maximal subtree of rooted at , such that no internal node is revealed .for any node of , let be the subtree of rooted at .let be the minimum weighted cutsize of consistent with the revealed nodes and such that .since , our goal is to compute .it is easy to see by induction that the quantity can be recursively defined as follows , where is the set of all children of in , and if is revealed , and , otherwise : now , can be computed through a simple depth - first visit of . in all backtracking steps of this visitthe algorithm uses ( [ eq : cut ] ) to compute for each node , the values for all children of being calculated during the previous backtracking steps .the total running time is therefore linear in the number of nodes of .next , we describe the basic implementation of shazoo for the on - line setting .a batch learning implementation will be given at the end of this section .the online implementation is made up of three steps .* 1 . find the hinge nodes of subtree * . recall that a hinge - node is either a fork or a revealed node .observe that a fork is incident to at least three nodes lying on different hinge lines .hence , in this step we perform a depth - first visit of , marking each node lying on a hinge line . in order to accomplish this task , it suffices to single out all forks marking each labeled node and , recursively , each parent of a marked node of . at the end of this processwe are able to single out the forks by counting the number of edges of each marked node such that has been marked , too .the remaining hinge nodes are the leaves of whose labels have currently been revealed .* 2 . compute * * for all connection forks of *. from the previous step we can easily find the connection node(s ) of . 
then, we simply exploit the above - described technique for computing the cut function , obtaining for all connection forks of . *3 . propagate the labels of the nodes of ( only if is not a fork)*. we perform a visit of starting from every node . during these visits, we mark each node of with the label of computed in the previous step , together with the length of , which is what we need for predicting any label of at the current time step . the overall running time is dominated by the first step and the calculation of .hence the worst case running time is proportional to .this quantity can be quadratic in , though this is rarely encountered in practice if the node presentation order is not adversarial .for example , it is easy to show that in a line graph , if the node presentation order is random , then the total time is of the order of . for a star graphthe total time complexity is always linear in , even on adversarial orders . * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * let be the maximal subtree of containing node such that none of its internal node labels have been revealed before time . hence ,the leaves of are the only nodes of whose label may be revealed before time .we now consider tree as rooted at and we denote by the subtree of formed by and all its descendants .notice that , by definition of , at time the algorithm does not know any label of its internal nodes .let be the minimal cutsize of consistent with the labels of its leaves that have been revealed before time , conditioned on the the fact that label is revealed and it is equal to .let now the set of all children of in .it is easy to see that depends on and for each .in fact , one can formally give a recursive definition of .for each tree , , label and node of , satisfies when is a leaf of . when is an internal node of , where if is already revealed , and otherwise . by induction on the height of .term takes into account the contribution of edge to the cutsize .let now be the set of labels that one can assign to node for minimizing the cutsize contribution of ( i ) the edge set of , together with ( ii ) the edge connecting with its parent in , if we set , i.e. let be the version space of consistent with all labels revealed at time , setting . as stated by the following lemma, the cardinality of can be expressed in terms of for all .for each tree , , label and unlabeled node of , satisfies when is a leaf of . when is an internal node of .by induction on the height of the basic implementation of this algorithm consists of a simple depth - first visit of tree . 
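The cutsize part of this depth-first computation (the recursion given earlier for the cut function) admits a compact bottom-up implementation. The sketch below assumes the subtree is given as parent-to-children adjacency with revealed labels only at its border leaves; variable names are illustrative.

```python
import math

def cut_values(tree, weight, revealed, root):
    """Bottom-up computation of the cut function on a rooted (sub)tree.

    tree[v]       : list of children of v (internal nodes assumed unrevealed)
    weight[(v,c)] : positive weight of the edge between v and its child c
    revealed[v]   : +1/-1 for revealed border leaves, absent otherwise
    Returns cut[v][y], the minimum weighted cutsize of the subtree rooted at v,
    consistent with the revealed leaves, when v is forced to label y.
    """
    cut = {}

    def visit(v):
        if v in revealed:                    # revealed leaf: its label is forced
            cut[v] = {revealed[v]: 0.0, -revealed[v]: math.inf}
            return
        cut[v] = {+1: 0.0, -1: 0.0}
        for c in tree.get(v, []):
            visit(c)
            w = weight[(v, c)]
            for y in (+1, -1):
                # either agree with the child's cheaper label or pay the edge
                cut[v][y] += min(cut[c][yc] + (w if yc != y else 0.0)
                                 for yc in (+1, -1))

    visit(root)
    return cut

# Toy example: root r with a revealed (-1) leaf and a subtree whose leaf is +1.
tree = {'r': ['a', 'b'], 'b': ['c']}
weight = {('r', 'a'): 2.0, ('r', 'b'): 1.0, ('b', 'c'): 0.5}
cuts = cut_values(tree, weight, revealed={'a': -1, 'c': +1}, root='r')
print(cuts['r'])     # cut values when the root is forced to +1 and to -1
```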
in all backtracking steps of this visitthe algorithm calculates , for each node , and using equation [ cutsize ] and [ card_s ] respectively .in fact , equation [ cutsize ] defines in terms of for all children of , which have therefore been already calculated during the previous backtracking steps .similarly , equation [ card_s ] defines in terms of and for all children of ( see figure [ f : halving_online ] ) .label is predicted with if , or otherwise .[ f : halving_online ] * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * + in many real - world scenarios , one is interested in the more standard problem of predicting the labels of a given subset of _ test _ nodes based on the available labels of another subset of _ training _ nodes .building on the above on - line implementation , we now derive an implementation of shazoo for this train / test ( or `` batch learning '' ) setting .we first show that computing and for all unlabeled nodes in takes time .this allows us to compute for all forks in time , and then use the first and the third steps of the on - line implementation .overall , we show that predicting _ all _ labels in the test set takes time. consider tree as rooted at .given any unlabeled node , we perform a visit of starting at . during the backtracking steps of this visit we use ( [ eq : cut ] ) to calculate for each node in and label .observe now that for any pair of adjacent unlabeled nodes and any label , once we have obtained , and , we can compute in constant time , as .in fact , all children of in are descendants of , while the children of in ( but ) are descendants of in .shazoo computes , we can compute in constant time for all child nodes of in , and use this value for computing . generalizing this argument , it is easy to see that in the next phase we can compute in constant time for all nodes of such that for all ancestors of and all , the values of have previously been computed .the time for computing for all nodes of and any label is therefore linear in the time of performing a breadth - first ( or depth - first ) visit of , i.e. , linear in the number of nodes of .since each labeled node with degree is part of at most trees for some , we have that the total number of nodes of all distinct ( edge - disjoint ) trees across is linear in .finally , we need to propagate the connection node labels of each hinge tree as in the third step of the online implementation . since also this last step takes linear time , we conclude that the total time for predicting all labels is linear in .let now any child of in and let be the subtree obtained by eliminating and edge from , where is a child of .let be the minimal cutsize of consistent with the labels of its leaves that have been revealed at time , setting , i.e. notice that can be easily calculated in constant time once we obtain , and .since all children of in are descendants of , we can calculate for each label reusing the value of for all nodes of previously computed , in the following way : where in the second equality we used the fact that ( i ) and ( ii ) the children of in are descendants of in .term takes into account the contribution of edge to the cutsize .hence we can calculate the value of in constant time for all children of . 
applying the same formulas it is easy to verify that , in a subsequent phase, we can compute in constant time for all nodes of such that .more precisily we can calculate for each node in constant time once we obtain for all nodes such that .the time for computing for all nodes of is therefore linear in the time of performing a breadth - first visit of , i.e. linear in the number of nodes of .since each labeled node with degree is part of at most tree for some , we deduce that the sum of the number of nodes of all trees for is linear in .hence , we can conclude that the total time for predicting all labels is linear in .we tested our algorithm on a number of real - world weighted graphs from different domains ( character recognition , text categorization , bioinformatics , web spam detection ) against the following baselines : * online majority vote * ( omv ) .this is an intuitive and fast algorithm for sequentially predicting the node labels is via a weighted majority vote over the labels of the adjacent nodes seen so far .namely , omv predicts through the sign of , where ranges over such that . both the total time and space required by omv are .* label propagation * ( labprop ) . is a batch transductive learning method computed by solving a system of linear equations which requires total time of the order of .this relatively high computational cost should be taken into account when comparing labprop to faster online algorithms .recall that can be viewed as a fast `` online approximation '' to labprop . * weighted tree algorithm * ( wta ) .as explained in the introductory section , wta can be viewed as a special case of shazoo .when the input graph is not a line , wta turns it into a line by first extracting a spanning tree of the graph , and then linearizing it . the implementation described in runs in constant amortized time per prediction whenever the spanning tree sampler runs in time .the graph perceptron algorithm is another readily available baseline .this algorithm has been excluded from our comparison because it does not seem to be very competitive in terms of performance ( see , e.g. , ) , and is also computationally expensive . in our experiments , we combined shazoo and wta with spanning trees generated in different ways ( note that omv and labprop do not need to extract spanning trees from the input graph ) .* random spanning tree * ( rst ) .following ch . 4 of , we draw a weighted spanning tree with probability proportional to the product of its edge weights. we also tested our algorithms combined with random spanning trees generated uniformly at random ignoring the edge weights ( i.e. , the weights were only used to compute predictions on the randomly generated tree ) we call these spanning trees nwrst ( no - weight ) . on most graphs, this procedure can be run in time linear in the number of nodes .hence , the combinations shazoo+nwrst and wta+nwrst run in time on most graphs . * minimum spanning tree * ( mst ) .this is the spanning tree minimizing the sum of the resistors on its edges .this tree best approximates the original graph in terms of the trace norm distance of the corresponding laplacian matrices . 
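For concreteness, the simplest of the baselines listed above (OMV) can be sketched as follows; the graph representation and the tie-breaking default are illustrative choices.

```python
def online_majority_vote(edges, weight, order, labels, default=+1):
    """Online weighted majority vote (OMV) baseline, as a minimal sketch.

    At each step the next node is predicted with the sign of the weighted
    sum of the labels of its already-revealed neighbours, after which its
    true label is revealed.  Returns the number of prediction mistakes.
    """
    revealed = {}
    mistakes = 0
    for v in order:
        vote = sum(weight[frozenset((v, u))] * revealed[u]
                   for u in neighbours(edges, v) if u in revealed)
        pred = default if vote == 0 else (+1 if vote > 0 else -1)
        mistakes += int(pred != labels[v])
        revealed[v] = labels[v]
    return mistakes

def neighbours(edges, v):
    return [u for e in edges if v in e for u in e if u != v]

# Toy weighted triangle: the heavy edge ties two nodes with the same label.
edges = [frozenset((0, 1)), frozenset((1, 2)), frozenset((0, 2))]
weight = {edges[0]: 5.0, edges[1]: 0.1, edges[2]: 0.1}
print(online_majority_vote(edges, weight, order=[0, 1, 2],
                           labels={0: +1, 1: +1, 2: -1}))
```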
following , we also ran shazoo and wta using committees of spanning trees , and then aggregating predictions via a majority vote .the resulting algorithms are denoted by *shazoo and *wta , where is the number of spanning trees in the aggregation .we used either or , depending on the dataset size .for our experiments , we used five datasets : rcv1 , usps , krogan , combined , and webspam .webspam is a big dataset ( 110,900 nodes and 1,836,136 edges ) of inter - host links created for the web spam challenge 2008 .krogan ( 2,169 nodes and 6,102 edges ) and combined ( 2,871 nodes and 6,407 edges ) are high - throughput protein - protein interaction networks of budding yeast taken from see for a more complete description .finally , usps and rcv1 are graphs obtained from the usps handwritten characters dataset ( all ten categories ) and the first 10,000 documents in chronological order of reuters corpus vol . 1 ( the four most frequent categories ) , respectively . in both cases, we used euclidean -nearest neighbor to create edges , each weight being equal to .we set , where is the average squared distance between and its nearest neighbours .following previous experimental settings , we associate binary classification tasks with the five datasets / graphs via a standard one - vs - all reduction . each error rate is obtained by averaging over ten randomly chosen training sets ( and ten different trees in the case of rst and nwrst ) .webspam is natively a binary classification problem , and we used the same train / test split provided with the dataset : 3,897 training nodes and 1,993 test nodes ( the remaining nodes being unlabeled ) . in the below table, we show the macro - averaged classification error rates ( percentages ) achieved by the various algorithms on the first four datasets mentioned in the main text . for each dataset we trained ten times over a random subset of 5% , 10% and 25% of the total number of nodes and tested on the remaining ones .in boldface are the lowest error rates on each column , excluding labprop which is used as a `` yardstick '' comparison .standard deviations averaged over the binary problems are small : most of the times less than 0.5% . [ cols="<,^,^,^,^,^,^,^,^,^,^,^,^ " , ] our empirical results can be briefly summarized as follows : * 1 .* without using committees , shazoo outperforms wta on all datasets , irrespective to the type of spanning tree being used . with committees ,shazoo works better than wta almost always , although the gap between the two reduces .the predictive performance of shazoo+mst is comparable to , and sometimes better than , that of labprop , though the latter algorithm is slower .* *shazoo , with ( or on webspam ) seems to be especially effective , outperforming labprop , with a small ( e.g. , 5% ) training set size .nwrst does not offer the same theoretical guarantees as rst , but it is extremely fast to generate ( linear in on most graphs e.g. , ) , and in our experiments is only slightly inferior to rst . 10 n. alon , c. avin , m. kouck , g. kozma , z. lotker , and m.r .many random walks are faster than one . in _ proc .20th symp . on parallel algo . and architectures _ , pages 119128 .springer , 2008 .m. belkin , i. matveeva , and p. niyogi .regularization and semi - supervised learning on large graphs . in _ proceedings of the 17th annual conference on learning theory _ , pages 624638 .springer , 2004 .y. bengio , o. delalleau , and n. le roux . label propagation and quadratic criterion . 
in _ semi - supervised learning _ , pages 193216 .mit press , 2006 .a. blum and s. chawla .learning from labeled and unlabeled data using graph mincuts . in _ proceedings of the 18th international conference on machine learning_. morgan kaufmann , 2001 . n. cesa - bianchi , c. gentile , and f.vitale .fast and optimal prediction of a labeled tree . in _ proceedings of the 22nd annual conference on learning theory _, 2009 .n. cesa - bianchi , c. gentile , f. vitale , and g. zappella .random spanning trees and the prediction of weighted graphs . in _ proceedings of the 27th international conference on machine learning _, 2010 .n. cesa - bianchi , c. gentile , f. vitale , and g. zappella .active learning on trees and graphs . proc . of the 23rd conference on learning theory ( colt 2010 ) . c. altafini g. iacono .monotonicity , frustration , and ordered response : an analysis of the energy landscape of perturbed large - scale biological networks . , 4(83 ) , 2010 .m. herbster and g. lever .predicting the labelling of a graph via minimum -seminorm interpolation . in _ proceedings of the 22nd annual conference on learning theory_. omnipress , 2009 .m. herbster , g. lever , and m. pontil .online prediction on large diameter graphs . in _ advances in neural information processing systems22_. mit press , 2009 .m. herbster , m. pontil , and s. rojas - galeano . fast prediction on a tree . in _ advances in neural information processing systems 22_. mit press , 2009 .f. r. kschischang , b. j. frey , and h. a. loeliger .factor graphs and the sum - product algorithm ., 47(2):498519 , 2001 .r. lyons and y. peres .probability on trees and networks .manuscript , 2008 .s. t. mccormick , m. r. rao , and g. rinaldi .easy and difficult objective functions for max cut . , 94(2 - 3):459466 , 2003 .g. pandey , m. steinbach , r. gupta , t. garg , and v. kumar .association analysis - based transformations for protein interaction networks : a function prediction case study . in _ proceedings of the 13th acm sigkdd international conference on knowledge discovery and data mining _ ,pages 540549 .acm press , 2007 . yahoo ! research and laboratory of web algorithmics university of milan. web spam collection ./ webspam / datasets/. d. a. spielman and n. srivastava .graph sparsification by effective resistances . in _ proc .of the 40th annual acm symposium on theory of computing ( stoc 2008)_. acm press , 2008 .wilson . generating random spanning trees more quickly than the cover time . in_ proceedings of the 28th acm symposium on the theory of computing _ , pages 296303 .acm press , 1996 .x. zhu , z. ghahramani , and j. lafferty .semi - supervised learning using gaussian fields and harmonic functions . in _ proceedings of the 20th international conference on machine learning _ , 2003 .pick any such that .let be the forest obtained by removing from all edges in .draw an independent random label for each of the components of and assign it to all nodes of that component .then any online algorithm makes in expectation at least half mistake per component , which implies that the overall number of online mistakes is in expectation . on the other hand, clearly holds by construction .we first give additional definitions used in the analysis , then we present the main ideas , and finally we provide full details . recall that , given a labeled tree , a * cluster * is any maximal subtree whose nodes have the same label . let be the set of all clusters of . 
for any cluster ,let be the subset of all nodes of on which shazoo makes a mistake .let be the subtree of obtained by adding to all nodes that are adjacent to a node of .note that all edges connecting a node of to a node of are -edges .let be the set of -edges in and let .let be the total weight of the edges in .finally , recall the notation , where is any line graph .recall that an * in - cluster line graph * is any line graph that is entirely contained in a single cluster .the main idea used in the proof below is to bound for each in the following way .we partition into groups , where .then we find a set of edge - disjoint in - cluster line graphs , and create a bijection between lines in and groups in .we prove that the cardinality of each group is at most , where is the associated line .this shows that the subset of nodes in which are mispredicted by shazoo satisfies where .then we show that by the very definition of , and using the bijection stated above , this implies thereby resulting in the mistake bound contained in theorem 2 .according to shazoo prediction rule , when is not a fork and , the algorithm predicts using the label of any closest to . in this case , we call an * r - node * ( reference node ) for and the pair , where is the edge on the path between and , an * rn - direction * ( reference node direction ). we use the shorthand notation to denote an r - node for . in the special case when all connection nodes of the hinge tree containing have ( i.e. , ) , and is not a fork , we call any closest connection node to an r - node for and we say that is a rn - direction for .clearly , we may have more than one node of associated with the same rn - direction .given any rn - direction , we call * r - line * ( reference line ) the line graph whose terminal nodes are and the first ( in chronological order ) node for which is a rn - direction , where lies on the path between and .. ] we denote such an r - line by . in the special casewhere and we say that the r - line is associated with the -edge of included in the line - graph .in this case we denote such an r - line by , where .figure [ fig : rn - direction ] gives a pictorial example of the above concepts .we illustrate an example of r - node , rn - direction and r - line .the numbers near the edge lines denote edge weights . in order to predict , shazoo uses the r - node and the rn - direction . after observing ,the hinge line connecting with ( the thick black line ) is created , which is also an r - line , since at the beginning of step the algorithm used . in order to predict , we still use the r - node and the rn - direction . after the revelation of ,node becomes a fork . ]* is the set of all forks in .* is the subset of containing the nodes whose reference node belongs to ( if is a fork , then ) . note that this set may have a nonempty intersection with the previous one* is the subset of containing the nodes such that does not belong to .* is the subset of all forks such that at some step . 
since we assume the cluster label is ( see below ) , andsince a fork is mistaken only if , we have .* is the subset of all nodes in that , when revealed , create a fork that belongs to .since at each time step at most one new fork can be created , a new fork is created when the number of edge - disjoint paths connecting to the labeled nodes increases .this event occurs only when a new hinge line is created .when this happens , the only node for which the number of edge - disjoint paths connecting it to labeled nodes gets increased is the terminal node of the newly created hinge line .] we have .the proof of the theorem relies on the following sequence of lemmas that show how to bound the number of mistakes made on a given cluster .a major source of technical difficulties , that makes this analysis different and more complex than those of treeopt and wta , is that on a weighted tree the value of on forks can potentially change after each prediction .for the sake of contradiction , assume .let be the maximal subtree of rooted at such that no internal node of is revealed . now , consider the cut given by the edges of belonging to the hinge lines of .this cut separates from any revealed node labeled with .the size of this cut can not be larger than . by definition of , this implies .however , also can not be larger than . because must hold independent of the set of nodes in that are revealed before time , this entails a contradiction .let now be the restriction of on the subtree , and let be the set of all distinct rn - directions which the nodes of can be associated with .the next lemmas are aimed at bounding and .we first need to introduce the superset of .then , we show that for any both and are linear in . in order to do so ,we need to take into account the fact that the sign of for the forks in the cluster can change many times during the prediction process .this can be done via lemma [ lm : delta ] , which shows that when all labels in are revealed then , for all fork , the value does not increase .thus , we get the largest set when we assume that the nodes in are revealed before the nodes of . given any cluster , let be the order in which the nodes of are revealed .let also be the permutation in which all nodes in are revealed in the same order as , and all nodes in are revealed at the beginning , in any order .now , given any node revelation order , can be defined by describing the three types of steps involved in its incremental construction supposing was the actual node revelation order . 1 .after the first steps , contains all node - edge pairs such that is a fork and is an edge laying on a hinge line of .recall that no node in is revealed yet .2 . for each step when a new fork is created such that just after the revelation of , we add to the three node - edge pairs , where the are the edges contained in the three hinge lines terminating at .3 . let be any step where : ( i ) a new hinge line is created , ( ii ) node is a fork , and ( iii ) at time . on each such stepwe add to , for in .it is easy to verify that , given any ordering for the node revelation in , we have .in fact , given an rn - direction , if lies along one of the hinge lines that are present at time according to , then must be included in during one of the steps of type 2 above , otherwise will be included in during one of the steps of type 2 or type 3 .assume nodes are revealed according to .let be the subtree of made up of all nodes in that are included in any path connecting two nodes of . 
by their very definition ,the forks at time are the nodes of having degree larger than two in subtree .consider as rooted at an arbitrary node of .the number of the leaves of is equal to .this is in turn because now , in any tree , the sum of the degrees of nodes having degree larger than two can not is at most linear in the number of leaves .hence , at time both the number of forks in and the cardinality of are .[ incr_cutsize ] let be a step when a new hinge line is created such that .if just after step we have , then , where is the lightest edge on .since and is completely included in , we must have just before the revelation of .this implies that the difference can not be smaller than the minimum cutsize that would be created on by assigning label to node .[ dc_second ] assume nodes are revealed according to .then the cardinality of and the total number of elements added to during the steps of type 2 above are both linear in .let be the set of forks in such that at some time .recall that , by definition , for each fork there exists a step such that .hence , lemma [ lm : delta ] implies that , at the same step , for each fork we have .since is included in , we can bound by , i.e. , by the number of forks such that , under the assumption that is the actual revelation order for the nodes in .now , is bounded by the number of forks created in the first steps , which is equal to plus the number of forks created at some later step and such that right after their creation . since nodes in are revealed according to , the condition just after the creation of a fork implies that we will never have in later stages .hence this fork belongs neither to nor to .in order to conclude the proof , it suffices to bound from above the number of elements added to in the steps of type 2 above . from lemma [ incr_cutsize ], we can see that for each fork created at time such that just after the revelation of node , we must have , where is the lightest edge in .hence , we can injectively associate each element of with an edge of , in such a way that the sum of the weights of these edges is bounded by . by definition of , we can therefore conclude that the total number of elements added to in the steps of type 2 is . with the following lemma we bound the number of nodes of associated with every rn - direction andshow that one can perform a transformation of the r - lines so as to make them edge - disjoint .this transformation is crucial for finding the set appearing in the theorem statement .observe that , by definition of r - line , we can not have two r - lines such that each of them includes only one terminal node of the other .thus , let now be the forest where each node is associated with an r - line and where the parent - child relationship expresses that ( i ) the parent r - line contains a terminal node of the child r - line , together with ( ii ) the parent r - line and the child r - line are not edge - disjoint . is , in fact , a forest of r - lines .we now use for bounding the number of mistakes associated with a given rn - direction or with a given -edge .given any connected component of , let finally be the total number of nodes of associated with the rn - directions of all r - lines of .* the number of nodes in associated with a given rn - direction is of the order of . *the number of nodes in associated with a given -edge is of the order of .* let be the r - line associated with the root of any connected component of . 
must be at most of the same order of where is a set of edge - disjoint line graphs completely contained in .we will prove only ( i ) and ( iii ) , ( ii ) being similar to ( i ) .let be a node in associated with a given rn - direction .there are two possibilities : ( a ) is in or ( b ) the revelation of creates a fork in such that for all steps .let now be the next node ( in chronological order ) of associated with .the length of can not be smaller than the length of ( under condition ( a ) ) or smaller than the length of ( under condition ( b ) ) .this clearly entails a dichotomic behaviour in the sequence of mistaken nodes in associated with .let now be the node in which is farthest from such that the length of is not larger than . once a node in is revealed or becomes a fork satisfying for all steps , we have for all subsequent steps( otherwise , this would contradict the fact that the total cutsize of is ) .combined with the above sequential dichotomic behavior , this shows that the number of nodes of associated with a given rn - direction can be at most of the order of part ( iii ) of the statement can be now proved in the following way .suppose now that an r - line , having and as terminal nodes , includes the terminal node of another r - line , having and as terminal nodes .assume also that the two r - lines are not edge - disjoint .if is partially included in , i.e. , if does not belong to , then can be broken into two sub - lines : the first one has and as terminal nodes , being the node in which is farthest from ; the second one has and as terminal nodes .it is easy to see that must be created before and is the only node of the second sub - line that can be associated with the rn - direction .this observation reduces the problem to considering that in each r - line that is not a root is completely included in its parent .consider now the simplest case in which is formed by only two r - lines : the parent r - line , which completely contains the child r - line .let be the step in which the first node of becomes a hinge node .after step , can be vieved as broken in two edge - disjoint sublines having and as terminal node sets , where is one of the terminal of . generalizing this argument for every component of , and using the above observation about the partially included r - lines , we can state that , for any component of , is of the order of where .this entails that we can define as the union of and , which concludes the proof .assume nodes are revealed according to , and let be any type-3 step when a new element is added to .there are two cases : ( a ) at time or ( b ) at time .case ( a ) .lemma [ incr_cutsize ] combined with the fact that all hinge - lines created are edge - disjoint , ensures that we can injectively associate each of these added elements with an edge of in such a way that the total weight of these edges is bounded by .this in turn implies that the total number of elements added to is .case ( b ) . since we assumedthat nodes are revealed according to , we have that is positive for all steps .hence we have that case ( b ) can occur only once for each of such forks .since this kind of fork belongs to , we can use lemma [ dc_second ] and conclude that ( b ) can occur at most times . 
theorem 2let be the union of over .using lemma [ lm : number_rnd ] we deduce , where the term takes into account that at most one r - line of may be associated with each -edge of .let now be the set of components of .given any tree , let be the r - line root of .recall that , by part ( iii ) of lemma [ lm : m_l ] for any tree we can find a set of edge - disjoint line graphs all included in such that is of the order of .let now be equal to .thus we have observe that is not an edge disjoint set of line graphs included in only because each -edge may belong to two different lines of . by definition of , for any line graphs and , where is obtained from by removing one of the two terminal nodes and the edge incident to it , we have .if , for each -edge shared by two line graphs of , we shorten the two line graphs so as no one of them includes the -edge , we obtain a new set of edge - disjoint line graphs such that .hence , we finally obtain , where in the last equality we used the fact that for all line graphs .
predicting the labels of the nodes of a given graph is a fascinating theoretical problem with applications in several domains. since graph sparsification via spanning trees retains enough information while making the task much easier, trees are an important special case of this problem. although it is known how to predict the nodes of an unweighted tree in a nearly optimal way, no fully satisfactory algorithm is yet available for the weighted case. we fill this gap by introducing an efficient node predictor, shazoo, which is nearly optimal on any weighted tree. moreover, we show that shazoo can be viewed as a common nontrivial generalization of previous approaches for unweighted trees and weighted lines. experiments on real-world datasets confirm that shazoo performs well in that it fully exploits the structure of the input tree, and gets very close to (and sometimes better than) less scalable energy-minimization methods.
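As a side note on the committee scheme used in the experiments above (*shazoo and *wta), aggregating a tree-based predictor over several random spanning trees amounts to a simple majority vote. The sketch below assumes hypothetical `base_predict` and `sampler` callables and is only meant to convey the structure of the aggregation; ties are broken arbitrarily.

```python
import random
from collections import Counter

def committee_predict(base_predict, sampler, graph, node, k=7):
    """Majority vote of a single-tree predictor over k sampled spanning trees.

    base_predict(tree, node) -> +1/-1 is any single-tree predictor (e.g. one
    shazoo run on that tree); sampler(graph) draws one spanning tree.
    Both callables are placeholders for this sketch.
    """
    votes = Counter(base_predict(sampler(graph), node) for _ in range(k))
    return votes.most_common(1)[0][0]

# Toy usage with stub components, purely to show the call pattern.
stub_sampler = lambda g: None
stub_predict = lambda tree, node: random.choice((+1, -1))
print(committee_predict(stub_predict, stub_sampler, graph=None, node=0, k=7))
```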
nuclear mass plays an important role not only in studying the knowledge of nuclear structure , but also in understanding the origin of elements in the universe . with the construction andupgrade of radioactive ion beam facilities , the measurements of nuclear masses have made great progress in recent years . during the last decade, hundreds of nuclear masses were measured for the first time or with higher precisions .the astrophysical calculations involve thousands of nuclei far from -stability line . however , most of these nuclei are still beyond the experimental reach. one could use the local mass relations such as the garvey - kelson ( gk ) relations and the residual proton - neutron interactions to predict unknown masses .however , the intrinsic error grows rapidly when the local mass relations are used to predict the nuclear masses in an iterative way .therefore , the theoretical predictions for nuclear masses are inevitable to astrophysical calculations .the early theoretical studies of nuclear masses are mainly macroscopic models , such as the famous weizscker mass formula .it is known that this kind of mass model neglects the microscopic effects , and hence shows systematic deviations for the nuclei near the shell closure or those with large deformations . in order to better describe the nuclear ground - state properties ,the macroscopic - microscopic and microscopic theoretical models are developed for mass predictions . by including the microscopic correction energy to the macroscopic mass formula , the macroscopic - microscopic mass model can well take into account the important microscopic corrections . during the past decades ,a number of macroscopic - microscopic mass models have been developed , such as the finite - range droplet model ( frdm ) , the extended thomas - fermi plus strutinsky integral ( etfsi ) , and the koura - tachibana - uno - yamada ( ktuy ) .these macroscopic - microscopic mass models have similar accuracy for mass prediction and their the root - mean - square ( rms ) deviation with respect to data in the atomic mass evaluation of 2012 ( ame12 ) is about mev .guiding by the skyrme energy density functional , a semiempirical nuclear mass formula , the weizscker - skyrme ( ws ) model , was proposed based on the macroscopic - microscopic method . for the latest version of ws model ( ws3 ) , the rms deviation with respect to known nuclear masses in ame12 is significantly reduced to mev .on the other hand , great progress has been achieved for microscopic mass models with the rapid development of the computer technology in the new century .based on the hartree - fock - bogoliubov ( hfb ) theory with skyrme or gogny force , a series of microscopic mass models have been proposed with the accuracy comparable with the traditional macroscopic - microscopic mass models .apart from the non - relativistic microscopic model , the relativistic mean - field ( rmf ) model has also received wide attention due to many successes achieved in describing lots of nuclear phenomena as well as successful applications in astrophysics .a systematic study of the ground - state properties for all nuclei from the proton drip line to the neutron drip line with and was performed for such model several years ago , and the rms deviation with respect to known masses is about mev .however , it should be noted that the effective interaction of this rmf mass model was only optimized with the properties of a few selected nuclei . 
by carefully adjusting the effective interaction of rmf model with the properties of more selected nuclei ,the deviation can be remarkably reduced . for the - even nuclei with , the rms deviation with respect to known masses in atomic mass evaluation of 2003 ( ame03 ) is reduced to mev for the effective interaction pc - pk1 . moreover , the pc - pk1 predictions well reproduce the new and accurate mass measurements from sn to pa with the rms deviation of mev , and also successfully describe the coulomb displacement energies between mirror nuclei .in addition , inspired by the shell model , the duflo - zuker ( dz ) mass model has made considerable success in describing nuclear masses with accuracy of about mev .although these theoretical models can well reproduce the experimental data , there are still large deviations among the mass predictions of different models , even in the region close to known masses .a number of investigations on the accuracy and predictive power of these nuclear mass models have been made so far in the literatures , e.g. refs . . to further improve the accuracy of nuclear mass model , the image reconstruction techniques based on the fourier transformis applied to the nuclear mass models and significantly reduces the rms deviation to the known masses with the clean algorithm .later on , the radial basis function ( rbf ) approach was developed to improve the mass predictions of several theoretical models . comparing with the clean reconstruction ,the rbf approach more effectively reduces the rms deviations with respect to the masses first appearing in ame03 . to improve the mass prediction of a nucleus, thousands of nuclei with known masses are involved in the rbf approach . however , do all the nuclei involved play effective roles in the improvement of mass prediction for this nucleus ? what are the key nuclei that have to be included in the rbf approach ?in other words , how far away from the measured region of nuclear mass could we predict with satisfactory accuracy in the rbf approach ?these questions were not addressed in previous investigations .therefore , it is interesting to investigate the mass correlations between a certain nucleus and those nuclei involved in the rbf approach , and hence to evaluate the predictive power of the rbf approach . in this work , we will carefully evaluate the predictive power of the rbf approach based on widely used nuclear mass models , ranging from macroscopic - microscopic to microscopic types .special attention will be paid to the mass correlations among various nuclei .the paper is organized as follows . in sec .ii , a brief introduction to the rbf approach including numerical details is given . in sec .iii , the mass correlations are first carefully investigated , and then the predictive power of the rbf approach based on different mass models will be evaluated .finally , the summary is presented in sec .the rbf approach has been widely applied in surface reconstruction and its solution is written as where denotes the point from measurement , is the weight of center , is the radial basis function , is the euclidean norm , and is the number of the data to be fitted . given , one wishes to reconstruct the smooth function with , i.e. , where . then the rbf weights are determined to be once the weights are obtained with the samples , the reconstructed function can be calculated with eq .( [ eq : sx ] ) for any point . as in ref . 
, the euclidean norm is defined to be the distance between nuclei and in nuclear chart : the basis function is adopted in this work , since the mass deviation can be reconstructed relatively better with than other basis functions . then the mass difference between the experimental data and those predicted with nuclear mass models could be reconstructed with eq .( [ eq : dx ] ) .once the weights are obtained , the reconstructed function can be calculated with eq .( [ eq : sx ] ) for any nucleus .then the revised mass for nucleus is given by for training the rbf with eq .( [ eq : dx ] ) , only those nuclei between the minimum distance and maximum distance are involved , i.e. . if the reconstructed function for a nucleus is obtained by training the rbf including itself , i.e. , it is clear that is just the and hence .therefore , to test the predictive power of rbf approach , the function for a known nucleus should be reconstructed with . to evaluate the predictive power of rbf approach , the rms deviation , i.e. , is employed , where and are the theoretical and experimental nuclear masses , respectively , and is the number of nuclei contained in a given set . in this investigation , we only consider nuclei with and and the experimental data are taken from ame12 , unless otherwise specified . for the theoretical mass models , we take rmf , hfb-21 , dz10 , dz31 , etfsi-2 , ktuy , frdm , and ws3 mass models as examples , with the rms deviation spanning from mev to mev with respect to experimental data in ame12 . for convenience ,the and of nuclear mass models are denoted as and , hereafter ..the rms deviations in mev between known masses in ame12 and predictions of various nuclear mass models without ( the second column ) and with ( the third column ) the rbf approach .the fourth column gives the reduction of the rms deviations after combining the rbf approach . [ cols="^,^,^,^,^,^,^",options="header " , ] .the squares and circles represent the experimental uncertainties in ame03 and ame12 , respectively.,title="fig:",width=283 ] + by using the systematic trends in the mass surface and its derivative , the mass evaluation method in ame provides the best short - range mass extrapolation .therefore , it is interesting to compare the accuracy between the rbf approach and the method used in ame . taking the nuclei whosemasses are evaluated values in ame03 ( marked by # " in the mass table of ame03 ) as an example , the rms deviations with respect to the new experimental data in ame12 are mev . for comparison with the method in ame at the same foot , the experimental data employed in training the rbf should be taken from ame03 as well , and the corresponding results based on different mass models are given in the third column of table [ tb2 ] .in addition , the rms deviations for these nuclei are given in the second column of table [ tb2 ] .clearly , the rbf approach also remarkably improves the predictive power of various mass models .it should be pointed out that the rms deviation based on the ws3 mass model is even smaller than that from the method in ame . in fig .[ fig6 ] , the experimental uncertainties in ame12 and ame03 are presented as a function of the isospin asymmetry . it is found that the experimental uncertainties are significantly improved in ame12 comparing with those in ame03 , especially for those nuclei around the border region of experimental data with and . 
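To make the reconstruction and evaluation procedure described above concrete, the following is a minimal Python sketch, assuming the linear basis function phi(r) = r and the Euclidean distance between nuclei in the (Z, N) plane as stated in the text. The function names, toy masses and residuals are illustrative placeholders rather than values from the paper; a real application trains on the measured masses of an atomic mass evaluation.

import numpy as np

def rbf_fit(train_zn, residuals):
    """Solve Phi w = D for the RBF weights, with basis phi(r) = r, where r is the
    Euclidean distance in the (Z, N) plane and D_i = M_exp(i) - M_th(i)."""
    x = np.asarray(train_zn, float)
    phi = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)  # phi(|x_i - x_j|) = |x_i - x_j|
    return np.linalg.solve(phi, np.asarray(residuals, float))

def rbf_correction(train_zn, weights, query_zn):
    """Reconstructed correction S(Z, N) = sum_j w_j |x - x_j|; the revised mass
    of a nucleus is then M_RBF = M_th + S."""
    x = np.asarray(train_zn, float)
    q = np.asarray(query_zn, float)
    return np.linalg.norm(q[:, None, :] - x[None, :, :], axis=-1) @ weights

def rms_deviation(m_th, m_exp):
    """sigma_rms = sqrt( (1/n) sum_i (M_th(i) - M_exp(i))^2 )."""
    return float(np.sqrt(np.mean((np.asarray(m_th, float) - np.asarray(m_exp, float)) ** 2)))

# toy data: made-up theoretical and "experimental" masses (MeV) for a handful of nuclei
train_zn = [(50, 70), (50, 72), (52, 70), (52, 72), (54, 74), (54, 76)]
m_th = np.array([-60001.2, -60010.4, -59980.7, -59992.3, -59950.1, -59961.8])
m_exp = m_th + np.array([0.8, 0.5, -0.3, 0.1, 0.6, 0.4])  # residuals to be learned

# leave-one-out check of predictive power: each nucleus is predicted from the
# others only (distance >= 1), mirroring the test described in the text
loo = []
for i in range(len(train_zn)):
    keep = [j for j in range(len(train_zn)) if j != i]
    w = rbf_fit([train_zn[j] for j in keep], (m_exp - m_th)[keep])
    loo.append(m_th[i] + rbf_correction([train_zn[j] for j in keep], w, [train_zn[i]])[0])

print("rms before RBF:", rms_deviation(m_th, m_exp))
print("rms after  RBF:", rms_deviation(loo, m_exp))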
therefore , we further update the data in training the rbf with those in ame12 and the corresponding results are shown in the fourth column of table [ tb2 ] .it is clear that can be significantly reduced for most mass models .based on the ws3 mass model , the predictive accuracy has been reduced to mev .therefore , the new high - precision experimental data are also very important to improve the nuclear mass models with rbf approach . based on the measured masses in the ame12 for the rmf [ panel ( a ) ] and ws3 [ panel ( b ) ] mass models .the dotted lines denote the magic numbers.,title="fig:",width=321 ] + between those based on the measured masses in the ame12 and those based on the measured masses in ame03 .the panels ( a ) and ( b ) correspond to the rmf and ws mass models , respectively .the new " nuclei in ame12 are indicated by the black contours.,title="fig:",width=321 ] + to investigate the radial basis function corrections in detail , the reconstructed functions based on the measured masses in the ame12 are shown in fig .[ fig7 ] by taking the rmf and ws3 mass models as examples .it is clear that the reconstructed functions are sensitive to the the nuclear mass models . for the rmf mass model ,the are about mev for the nuclei near , , , and .this just corresponds to the nuclei whose masses are underestimated in the rmf model .moreover , the overestimation of nuclear masses in the regions near and in rmf model are also well improved by the rbf approach with the mev in these two regions . for the ws3 mass model, it better describes the nuclear masses than the rmf mass model , while there still exists small but systematically correlated errors . with the rbf approach, these systematic correlations can be well extracted as well . to further investigate the influence of the new " masses in ame12 on improving the nuclear mass models with the rbf approach , the differences of the reconstructed functions between those based on the measured masses in the ame12 and those based on the measured masses in ame03 are shown in fig .it is found that the differences of reconstructed functions are generally within kev for most nuclei , while it is relatively larger for those nuclei around the border region of experimental data , especially for those new " nuclei in ame12 . this can be well understood sincesignificant improvements in the mass measurements are made for nuclei near the border region in recent years .therefore , the rbf approach is sensitive to the experimental masses and it is necessary to adopt the high - precision experimental data to improve the nuclear mass models .in this work , the mass correlations in the radial basis function ( rbf ) approach are carefully investigated based on widely used nuclear mass models , ranging from macroscopic - microscopic to microscopic types .the mass correlations usually exist between a nucleus and its surrounding nuclei with distance .however , the correlation distance is dependent on the nuclear mass models , which can go up to the distance of for the mass models with larger rms deviations , such as the rmf model . to extract these mass correlations, it is shown that the nuclei at distance are necessary to include in the training of rbf approach . 
in this way, the rbf approach can make significant improvements in the mass predictions for different mass models .the ame95 - 03 - 12 test further shows that the rbf approach provides a very effective tool to improve mass predictions significantly in regions not far from known nuclear masses .based on the latest weizscker - skyrme mass model , the rbf approach can achieve an accuracy comparable with the extrapolation method used in atomic mass evaluation , which can be further improved by the incorporation of new measurements . as claimed in the introduction , the effective interaction pc - pk1 remarkably improves the mass prediction of the rmf model .therefore , it is interesting to investigate the predictive power of pc - pk1 mass model with the help of rbf approach when the calculated masses with pc - pk1 for all nuclei in ame12 are available in the future .in addition , considering the success in improving the nuclear mass predictions , the rbf approach has a great potential to improve theoretical calculations of other physical quantities , such as nuclear -decay half - lives , fission barriers , and excitation spectra .this work was partly supported by the national natural science foundation of china ( grants no .11205004 , no .11235002 , no .11175001 , no . 11105010 , no .11128510 , and no .11035007 ) , the 211 project of anhui university under grant no. 02303319 - 33190135 , the program for new century excellent talents in university of china under grant no .ncet-09 - 0031 , the key research foundation of education ministry of anhui province of china under grant no .kj2012a021 , and the natural science foundation of anhui province under grant no .11040606m07 .99 d. lunney , j. m. pearson , c. thibault , rev .phys . * 75 * , 1021 ( 2003 ) .e. margaret burbidge , g. r. burbidge , william a. fowler , and f. hoyle , rev .* 29 * , 547 ( 1957 ) .a. arcones and g. f. bertsch , phys .lett . * 108 * , 151101 ( 2012 ) .g. audi , f. g. kondev , m. wang , b. pfeiffer , x. sun , j. blachot , and m. maccormick , chin .c * 36 * , 1157 ( 2012 ) . g. t. garvey and i. kelson , phys .* 16 * , 197 ( 1966 ) .g. t. garvey , w. j. gerace , r. l. jaffe , i. talmi , and i. kelson , rev .phys . * 41 * , s1 ( 1969 ) .j. y. zhang , r. f. casten , and d. s. brenner , phys .b227 * , 1 ( 1989 ) .g. j. fu , h. jiang , y. m. zhao , s. pittel , and a. arima , phys .c * 82 * , 034304 ( 2010 ) .h. jiang , g. j. fu , y. m. zhao , and a. arima , phys .c * 82 * , 054317 ( 2010 ) .et al_. , phys .c * 85 * , 054303 ( 2012 ) g. j. fu , y. lei , h. jiang , y. m. zhao , b. sun , and a. arima , phys .c * 84 * , 034311 ( 2011 ) .i. o. morales , j. c. lopez vieyra , j. g. hirsch , and a. frank , nucl .a828 * , 113 ( 2009 ) . c. f. von weizscker , z. phys . *96 * , 431 ( 1935 ) .p. mller , j. r. nix , w. d. myers , and w. j. swiateck , atom .data nucl .data tables * 59 * , 185 ( 1995 ) .s. goriely , aip conf529 * , 287 ( 2000 ) .h. koura , t. tachibana , m. uno , and m. yamada , prog .theor . phys . *113 * , 305 ( 2005 ) .n. wang , m. liu , and x. z. wu , phys .c * 81 * , 044322 ( 2010 ) .n. wang , z. y. liang , m. liu , and x. z. wu , phys .c * 82 * , 044304 ( 2010 ) .m. liu , n.wang , y. deng , and x. wu , phys .c * 84 * , 014333 ( 2011 ) .s. goriely , n. chamel , and j. m. pearson , phys .lett . * 102 * , 152503 ( 2009 ) .s. goriely , s. hilaire , m. girod , and s. pru , phys .lett . * 102 * , 242501 ( 2009 ) .s. goriely , n. chamel , and j. m. pearson , phys .c * 82 * , 035804 ( 2010 ) .j. meng , h. toki , s. g. zhou , s. q. 
zhang , w. h. long , and l. s. geng , prog .* 57 * , 470 ( 2006 ) .d. vretenar , a. v. afanasjev , g. a. lalazissis , and p. ring , phys . rep . * 409 * , 101 ( 2005 ) .p. w. zhao , z. p. li , j. m. yao , and j. meng , phys .c * 82 * , 054319 ( 2010 ) .x. m. hua , t. h. heng , z. m. niu , b. h. sun , and j. y. guo , sci .china phys .astron . * 55 * , 2414 ( 2012 ). s. g. zhou , j. meng , p. ring , and e .-zhao , phys .c * 82 * , 011301(r ) ( 2010 ) .w. h. long , p. ring , n. van giai , and j. meng , phys .c * 81 * , 024308 ( 2010 ) .z. m. niu , q. liu , y. f. niu , w. h. long , and j. y. guo , phys .c * 87 * , 037301 ( 2013 ) .h. mei , j. xiang , j. m. yao , z. p. li , and j. meng , phys .c * 85 * , 034321 ( 2012 ) .h. z. liang , n. van giai , and j. meng , phys .* 101 * , 122502 ( 2008 ) .y. f. niu , n. paar , d. vretenar , and j. meng , phys .* b681 * , 315 ( 2009 ) .d. vretenar , y. f. niu , n. paar , and j. meng , phys .c * 85 * , 044317 ( 2012 ) .h. z. liang , t. nakatsukasa , z. m. niu , and j. meng , phys .c * 87 * , 054310 ( 2013 ) .z. m. niu , y. f. niu , q. liu , h. z. liang , and j. y. guo , phys .rev c * 87 * , 051303(r ) ( 2013 ) .b. sun , f. montes , l. s. geng , h. geissel , yu .a. litvinov , and j. meng , phys .c * 78 * , 025806 ( 2008 ) .b. sun and j. meng , chin .* 25 * , 2429 ( 2008 ) .z. m. niu , b. sun , and j. meng , phys .c * 80 * , 065806 ( 2009 ) .y. f. niu , n. paar , d. vretenar , and j. meng , phys . rev .c * 83 * , 045807 ( 2011 ) .z. m. niu and c. y. gao , int .e * 19 * , 2247 ( 2010 ) .j. meng , z. m. niu , h. z. liang , and b. sun , sci .chin . ser .g * 54 * , 119 ( 2011 ) .w. h. zhang , z. m. niu , f. wang , x. b. gong , and b. sun , acta phys . sin .* 61 * , 112601 ( 2012 ) .x. d. xu , b. sun , z. m. niu , z. li , y .- z .qian , and j. meng , phys .c * 87 * , 015805 ( 2013 ) .z. m. niu , y. f. niu , h. z. liang , w. h. long , t. niki , d. vretenar , and j. meng , phys . lett . *b723 * , 172 ( 2013 ) .l. s. geng , h. toki , and j. meng , prog .. phys . * 113 * , 785 ( 2005 ) .q. s. zhang , z. m. niu , z. p. li , j. m. yao , and j. meng , arxiv : 1305.1736 . l. chen_ et al_. , nucl . phys . *a882 * , 71 ( 2012 ) .p. w. zhao , l. s. song , b. sun , h. geissel , and j. meng , phys .c * 86 * , 064324 ( 2012 ) .b. sun , p. zhao , and j. meng , sci .china phys .astron . * 54 * , 210 ( 2011 ) .j. duflo and a. p. zuker , phys .c * 52 * , r23 ( 1995 ) .a. p. zuker , rev .s * 54 * , 129 ( 2008 ) .j. mendoza - temis , i. morales , j. barea , a. frank , j. g. hirsch , j. c. lpez vieyra , p. van isacker , v. velzquez , nucl* a812 * , 28 ( 2008 ) .a. litvinov , a. sobiczewski , a. parkhomenko , and e. a. cherepanov , int . j. mode * 21 * , 1250038 ( 2012 ) .a. sobiczewski and yu .a. litvinov , phys .scr . * t154 * , 014001 ( 2013 ) .irving o. morales , p. van isacker , v. velzquez , j. barea , j. mendoza - temis , j. c. lpez vieyra , j. g. hirsch , and a. frank , phys .c * 81 * , 024304 ( 2010 ) .n. wang and m. liu , phys .c * 84 * , 051303(r ) ( 2011 ) .g. audi and a. h. wapstra , nucl .* a595 * , 409 ( 1995 ) . g. audi , a. h. wapstra , and c. thibault , nucl .a729 * , 337 ( 2003 ) .n. wang and m. liu , j. phys .* 420 * , 012057 ( 2013 ) .
The radial basis function (RBF) approach is applied to predict nuclear masses for widely used nuclear mass models, ranging from macroscopic-microscopic to microscopic types. A significantly improved accuracy in computing nuclear masses is obtained, and the corresponding rms deviations with respect to the known masses are reduced by up to . Moreover, strong correlations are found between a target nucleus and the reference nuclei within a distance of about three units, and these correlations play a critical role in improving the nuclear mass predictions. Based on the latest Weizsäcker-Skyrme mass model, the RBF approach can achieve an accuracy comparable with that of the extrapolation method used in the atomic mass evaluation. In addition, the necessity of new high-precision experimental data for improving the mass predictions with the RBF approach is emphasized as well.
porous poly(lactic acid ) ( pla ) has been developed for tissue engineering scaffolds for decades [ 1 - 3 ] .including poly(l - lactic acid ) ( plla ) , poly(d - lactic acid ) ( pdla ) and pla - based copolymers like poly(lactic - co - glycolic ) acid ( plga ) , these bio - based resins have been proved to be a successful candidate of scaffold materials with excellent biocompatibility and biodegradablity .the tissue engineering requires sufficient interconnecting inner space in the scaffold for biofactor delivery , tissue growth , and the scaffold should be degradable after tissue s growth meanwhile providing proper mechanical strength to support the tissue engineering system . therefore the balance of growth space , degradation behavior and mechanical properties is the main concern of constructing a scaffold . with the respect to the particular requirements of certain tissue engineering ,nowadays designed preparation and modification techniques of porous pla scaffold materials become an intensive interests - drawing subject [ 4 - 13 ] , which requires more understanding of the basic principles of physical and chemical properties , particular in the form of scaffold .the crystallization of pla plays an important role in its mechanical properties and degradability .generally the crystallized polymers have higher strength and mechanical modulus [ 14 ] . in the case of pla, the crystallinity also significantly affects the degradability , with the general behavior that the degradation time is longer with higher crystallinity , as the crystal segments are more stable than amorphous area and prevent water permeation into it . for example , it was reported that plla takes more than 5 years for total degradation , whereas only about 1 year for the amorphous pla or pdlla [ 15 ] .however unlike the crystallinity control for the inorganic components in the tissue engineering scaffold [ 16 ] , very rare reports concerned the crystallinity of the polymer scaffold materials .one probable reason is that the preparation of porous scaffolds is a delicately process , where the control of crystallinity is usually difficult or unavailable . except the solvent casting / particle leaching method , in other widely used preparation methods such as electrospun fiber , phase separation , membrane lamination and gas foaming , the polymers are not able to experience a thermal treating step , i.e. the most common way to control the crystallinity [ 1 ] . in some worksthe crystallinity is controlled by the raw materials itself , i.e. selecting raw materials of different molecular weight associated with different crystallization behaviors , or a particular processing procedure for chain cleavage to control the crystallinity [ 17 ] . and in some methods , even polymer with high crystallinitycan not be served as the raw materials to prepare the scaffold , for example the gas foaming technique reported by david j. mooney et al . [ 18 ] .nevertheless , the study of the crystallization of the scaffold should hold considerable practical merits in tissue engineering , for example to fit the scaffold degradation time with the expected tissue growth time by the control of crystallinity ( if possible ) .also , the scaffold structure may vary with crystallinity and influent the biological behavior of living tissue leaning on it .park et al . 
reported a research on the sustained release of human growth hormone from semi - crystalline poly(l - lactic acid ) and amorphous poly(d , l - lactic - co - glycolic acid ) microspheres , which reveals that the morphological effect is important on protein release [ 19 ] .crystallinity can be tailored in solvent casting / particulate leaching technique . however in this method the salt as porogens are solved in the solution and flow with the polymers during the casting , therefore without the immobilization of porogens the crystallinity behavior in the space - limited gap can not be tracked [ 1 ] . in this work , we modified the solvent casting / particulate leaching technique by diffusing the pla solution into a steady salt stack instead of solving the porogens .the control of crystallinity was made possible by inserting a thermal treating step before leaching , while maintaining the stable structure of salt stack .we have investigated the morphological effect of limited space on the crystallization of pla , and the porous structure with different crystallinity under thermal treatments .the pla of label 4032d is purchased from natureworks , with l / d ratios from : to : .the porogens is nacl of analytical grade .the : mixture of dichloromethane and chloroform is served as solvent .the pla pellets are solved into dichloromethane and chloroform ( : ) with the concentration of .the nacl powder is thoroughly grounded and sieved with then sieve to screen the particles of sizes in between , and paved onto a petri dish where it forms a thick disc .the pla solution is very slowly poured into the dish at the edge .the pouring is as slow as that the solution diffuses inside the salt stack instead of flowing over the surface , also the slow diffusing guarantees the salt particles are not considerably moved by the liquid flowing to keep the inner structure of the salt stack stable . after pouring , the salt - pla solution composite is then placed in vacuum for for drying out the solvent .the product is a solid dry pla - glued salt composite ready for thermal treatment .the composite is placed in water for to leach out the salts after the thermal treatment for recrystallization .the leached samples are freeze - dried for and stored in vacuum ready for characterizations .the recrystallization of composite is processed by the heating and cooling process .four samples were made to have different crystallinity .one sample was kept as the original composite without thermal treatment for reference ( sample r ) .the other three composites are heated in oven at for , then one composite ( sample a ) was immediately quenched in liquid nitrogen ; sample b was linearly cooled down at the rate . ; sample c was linearly cooled at till , kept at for , then cooled down with the same rate to room temperature . for comparison we also made two bulk pla samples with the same crystallization process of sample b and c and they were labeled as sample b and c. the crystallinity of samples were characterized by x - ray diffraction ( xrd ) ( d / max-2200/pc , rigaku corporation ) .the porous structure of samples were revealed by scanning electron microscope ( sem ) ( nova nanosem 450 , fei ) .the xrd results firstly dispel the doubt about the possibility of crystallization in confined geometry .fig.1(1 ) shows the xrd of three thermal treated samples a , b and c ( note that the sample r is not included because its xrd curve almost overlap with the sample a , i.e. 
amorphous ) , and the bulk samples of b and c are shown in fig.1(2 ) . with the comparison of the bulk behavior , clear crystal peaks at and other minor peaks indicate that the heat treatment makes the crystallization possible but we can see significant impact of confined geometry , the crystallization of pla chains in porogens slits is harder and the crystallinity is lower than the bulk sample under the same treatment condition .the crystallinity is calculated to be and in b and c , comparing to and in b and c. it is clear that with the same thermal treatment the bulk samples are much easier to crystallize with higher crystallinity .this agrees with our expectation that the crystallization is more difficult in confined geometry , even the size of fibers and walls confining the pores is still in the magnitude of micrometers ( see the sem paragraph ) which far overweighted the diameter of chain segments , the chain movement and rearrangement is obstructed by the limited space .another evidence that confined space impairs the crystallibility is the comparable crystallinity of b and c. the small difference is trivial due to many factors such as sample preparation or the baseline selection .it implies that linear cooling process may reach the upper limit crystallinity for the porogens confined environment , while for bulk sample , the effects of staying at for longer time is significant on crystallinity .although the samples are leached for there are unavoidable porogens residues left in the sample .the peaks on and are nacl crystals [ 20 ] , whereas these two peaks do not exist in the bulk samples b and c. the sem results in fig.2 show the macro - structures of unheated original casting sample r and three samples with heat treatment a , b and c. a general observation of macro - structure confirms that the thermal treatment does not impressively affect the porous structure forming .the pores and caves structure in each sample can be clearly observed with the pore size of around , which accords to the sieving process .nevertheless the morphological effect of heat treatment is obvious . in fig.2(2 ) , ( 3 ) and ( 4 ) the reheated samples present the features of thinner pore walls , branches and fragments , while in the reference sample the pore wall is thicker with rod - like branches .regardless of the crystallinity , heating the samples to ( the melting temperature of pla ) enables the polymer chains to remobilize and diffuse into the slits between salt particles where the solved chains had not diffused into and occupied .the thinner pore wall indicates that the chain remobilization also moves the porogens and makes narrower space among them .although we employ the steady salt stack to confine the recrystallization within the limited space , the porogens are only relatively `` stable '' comparing to typical solvent casting technique .the quenching sample a has more fragmental structures than b and c , it is clear to understand the phenomenon that in sample a the polymer melt diffused into thinner slits is quenched to solidify its diffusing state of fragmental features . for sample b and c ,the recrystallization process offers sufficient time for the diffused chains to mobilize and rearrange themselves to be more ordered , crystal structure .this rearrangement provides a less fragmental structure on the macro - scale .no obvious difference of macro - structures between sample b and c is observed , i.e. 
hours linear cooling is sufficient for recrystallization to achieve this structural effect , longer recrystallization time plays very little more effects on the crystallinity and the macro - structure , and this observation also agrees to the crystallinity results of b and c as indicated above .the pla porous matrix for potential use in tissue engineering have been prepared by modified salt casting and particulate leaching technique .the pla solution is diffused into relative stable salt stack , instead of solving salts with polymers in typical method . because the raw salt casting solidifies the salt - pla composite , we are able to insert a step of thermal treatment to recrystallize the polymer matrix before leaching process . in this way we are able to : 1 ) investigate the crystallization behavior of pla confined in limited space ;2 ) develop an available crystallinity control option in porous pla scaffold preparation .the xrd results indicate the crystallization of porous foams , in a manner of lower crystallibility than the bulk materials .the marco - structure of porous samples are observed by sem , by obtaining the pores of around , it is revealed that the polymer foam may crystallize without significant structure damage .the features of thinner pore walls , branches and fragments confirmed the effect of heating treatment .both xrd and sem results of sample b and c indicate that 7 hours linear cooling is sufficient to achieve certain crystallinity and marco - structure .thomson rc , shung ak , yaszemski mj , mikos ag .polymer scaffold processing . in :lanza rp , langer r , vacanti jp , editors ._ principles of tissue engineering _ , 2nd ed . san diego : academic press , 2000 .[ chapter 21 ] .chihiro mochizuki , yuji sasaki , hiroki hara , mitsunobu sato , tohru hayakawa , fei yang , xixue hu , hong shen , shenguo wang . journal of biomedical materials research part b : applied biomaterials volume 90b , issue 1 , pages 290301 , july 2009 .
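Relating to the crystallinity values quoted above: such numbers are commonly obtained from an XRD pattern by deconvoluting the crystalline peaks from the amorphous halo and taking the ratio of integrated areas. The paper does not spell out its exact deconvolution procedure, so the short sketch below is only a generic, assumed bookkeeping with made-up areas.

def crystallinity(crystalline_peak_areas, amorphous_halo_area):
    """Degree of crystallinity X_c = A_crystalline / (A_crystalline + A_amorphous),
    with areas taken from the deconvoluted, background-subtracted XRD pattern."""
    a_c = float(sum(crystalline_peak_areas))
    return a_c / (a_c + float(amorphous_halo_area))

# made-up integrated areas (arbitrary units) for two hypothetical samples:
# a porogen-confined foam and a bulk sample given the same thermal treatment
print(crystallinity([120.0, 35.0, 10.0], 400.0))  # ~0.29, foam-like
print(crystallinity([260.0, 60.0, 25.0], 300.0))  # ~0.53, bulk-like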
Porous PLA foams with potential for tissue engineering use are prepared with different crystallinities by a modified solvent casting/particulate leaching method. In the typical method the porogens are dissolved in the solution and flow with the polymer during casting, so the crystallization behavior of the PLA chains in the confined space cannot be tracked; in this work the processing is therefore modified by diffusing the PLA solution into a steady salt stack. By inserting a thermal treatment before leaching while maintaining the stable structure of the porogen stack, the crystallinity of the porous foams becomes controllable. The characterizations indicate that the porous foams crystallize, but with a lower crystallinity than the bulk material. Pores and caves of around 250 in size are obtained in samples with different crystallinities. The macro-structures are not much impaired by the crystallization; nevertheless, the morphological effect of the heating process is still evident. Keywords: porous materials, biomaterials, poly(lactic acid), tissue engineering scaffold, crystallization
cryptography underpins all secure communications , whether it is used for transferring credit card information between a buyer and a seller through the internet , or relaying classified information over a military network .classical cryptography is unsatisfactory in several respects .its security is generally based on assumptions about computation which are believed true but for which no absolute proof exists .that is , the assumptions depend on relative results which say that if certain computational problems are not efficiently solvable , then specific cryptosystems are secure . in practical terms , classical cryptosystems frequently suffer from difficulties in secure key distribution and often need to be revised and updated to maintain their security in response to strong advances in the capabilities of modern computer systems .in contrast , quantum cryptography was devised in the mid-1980 s in order to create cryptographic protocols that are guaranteed to be secure by the laws of nature [ 1 ] ( i.e. physics , and specifically quantum mechanics ) .the bb84 protocol was the first practical quantum cryptography protocol to be proven secure using the laws of quantum mechanics [ 2 , 3 ] , and it has provided the framework for most implementations of quantum cryptography . one major shortcoming of bb84 is its inefficient use of quantum mechanical information in that the key generation rate of a shared secret key is less than half of the repetition rate of the single photon source used for the protocol [ 1 ] . in 1995 ,goldenberg and vaidman proposed the first orthogonal states - based protocol which combined principles of special relativity with quantum mechanics to allow for secure communication [ 4 ] . since then , variations of their protocol have been proposed [ 5 , 6 ] .the way in which a relativistic quantum key distribution protocol exploits the laws of quantum mechanics is by creating a situation in which an eavesdropper must cause a detectable time delay if she wishes to obtain information about the secret key .the eavesdropper can not undo this delay without using faster - than - light communication - a task which is impossible according to special relativity . in this paperwe present a related but more efficient and flexible cryptographic protocol for which security is similarly guaranteed by nature and which expands the reach of provably secure cryptography .our qkd protocol has the unique feature that it can achieve a key generation rate greater than the repetition rate of the single photon source used for the protocol .according to common nomenclature , the sender of a cryptographic message is referred to as alice , the receiver as bob , and the eavesdropper as eve .the proposed protocol generates a secret key between alice and bob by encoding information on the phase between four states of a coherent superposition of single photons .the state of each photon carries two bits of information .the protocol is designed so that if eve attempts to gain information about the secret key by manipulating the coherent superposition of any photon , either the photon will reach bob detectably late , or the photon will be in a detectably different state of superposition . 
in order to describe the protocol, we will follow the path of a single photon on its journey from alice to bob .we will assume that communications are noiseless and lossless , meaning that no additional photons are introduced into the system and no photons are taken out of the system .further , we will assume that the channel between alice and bob is the shortest path between them ( we will revisit this assumption later ) .let us also say that alice and bob share synchronized clocks .consider the setup in figure 1 .+ + fig .the setup of the protocol . in alice s domain ,a single photon is emitted from alice s single photon source .since single photon sources emit photons probabilistically , the photon is emitted at a random time .essential to the protocol is the window of time or `` time bin '' in which the photon will ultimately be received by bob at one of his detectors .the amount of time it takes for a photon to reach bob s detectors depends upon how much the photon is delayed during its travels .next , the photon will pass through a 50 - 50 beam - splitter and will become a coherent superposition of a state on two separate paths .afterwards , the states on each of the paths pass through 50 - 50 beamsplitters , putting the photon into a coherent superposition of a state on four separate paths. we will denote the component of the state on the bottom path by , the component on the second to bottom path by , the component on the second to top path by , and the component on the top path by .the components of the photon are then phase - shifted randomly by alice so that the photon is in one of the four states \ ] ] \ ] ] \ ] ] \ ] ] note that there are loops in fig . 1 which symbolize additional lengths in the path of each component of the photon .if the distance between alice and bob is , we will take the length of each loop to be .we will later show that the length of each loop can be made very short .if the length is equal to , no two components of the state are accessible to eve at the same time . on bobs receiving end of the setup , he adds length to the paths of the components of the state so that he can bring them all to one point in space and time .bob then interferes the and components with a 50 - 50 beamsplitter , and interferes the and components with a second 50 - 50 beamsplitter .if alice sent the state or , the two interfered components of the state will end up on branch 1 .alternatively , if alice sent the state or , the two interfered components of the state will end up on branch 2 .whichever branch the components end up on , the two interfered components of the state are then interfered with one another by a final 50 - 50 beamsplitter . notethat if alice sent either the state or , the photon will be received by one of the two detectors marked detector 1 , and if alice sent either the state or the photon will be received by one of the two detectors marked detector 2 .once bob has received all of the states that alice has sent , he broadcasts to alice the time at which he received each state ( i.e. , the time that each photon reached one of his detectors ) .additionally , for each state that he received , bob decides randomly whether to communicate on which branch he received the state ( either branch 1 or branch 2 ) or at which detector he received the state ( either detector 1 or detector 2 ) . 
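Since the explicit expressions for the four states were lost in the source, the following numerical sketch only assumes, for illustration, that Alice's states are the four sign patterns (+,+,+,+), (+,-,+,-), (+,+,-,-) and (+,-,-,+) on paths a, b, c, d, and that each 50-50 beamsplitter acts as a 2x2 Hadamard. Under these assumptions every state arrives deterministically at one (branch, detector) pair, which is the two-bits-per-photon behaviour described above.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # idealized 50-50 beamsplitter

# assumed sign patterns for Alice's four states on paths (a, b, c, d)
STATES = {
    "psi1": np.array([1, 1, 1, 1]) / 2.0,
    "psi2": np.array([1, -1, 1, -1]) / 2.0,
    "psi3": np.array([1, 1, -1, -1]) / 2.0,
    "psi4": np.array([1, -1, -1, 1]) / 2.0,
}

def bob_measurement(amplitudes):
    """Return (branch, detector, probability) for a noiseless four-path photon."""
    a, b, c, d = amplitudes
    p, q = H @ np.array([a, b])   # interfere paths a and b
    r, s = H @ np.array([c, d])   # interfere paths c and d
    # branch 1 carries ports (p, r); branch 2 carries ports (q, s)
    if abs(p) ** 2 + abs(r) ** 2 > 0.5:
        branch, pair = 1, np.array([p, r])
    else:
        branch, pair = 2, np.array([q, s])
    d1, d2 = H @ pair             # final beamsplitter on the chosen branch
    detector = 1 if abs(d1) ** 2 > abs(d2) ** 2 else 2
    return branch, detector, max(abs(d1) ** 2, abs(d2) ** 2)

for name, amps in STATES.items():
    print(name, bob_measurement(amps))
# each state maps to a distinct (branch, detector) pair with probability ~1,
# so the which-branch bit and the which-detector bit are independent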
note thatsince alice is randomly choosing the states which she sends to bob , there is no correlation between the which - branch " information and the which - detector " information . as a result ,if for a particular state bob broadcasts to alice the which - branch information , they will use the undisclosed which - detector information as a bit for their shared secret key . on the other hand ,if for a particular state bob broadcasts to alice the which - detector information , they will use the undisclosed which - branch information as a bit for their shared secret key .if alice tells bob that he either received a state at an incorrect time or that some which - branch or which - detector information was incorrect , then bob knows that an eavesdropper must have been present . in the protocol, alice and bob can achieve a key generation rate equal to the repetition rate of alice s single photon source since two bits are encoded onto the spatio - temporal modes of a single photon , and only one of those bits is discarded for security . to achieve a key generation rate greater than the repetition rate of the source ,the protocol may be modified by increasing the number of paths into which the photon is split . in the modified protocol, alice will put each photon in an equal superposition of a state on separate paths where one component at a time is sent to bob s domain .alice can choose to add a relative phase of to combinations of components of the state , leading to states that she can send to bob .once bob has received all of the components of the photon , he interferes them such that the photon will end up at one of detectors , each one corresponding to one of the states that alice can send .it follows that in this modified protocol , the superposition state of each photon encodes bits , and some subset of those bits can be crosschecked in order to detect eve .setting , we recover the protocol which is the subject of this paper . in practice ,single photon sources are expensive and very difficult to fabricate while channels , such as fiber optic cables , are an abundant resource .therefore , it is most efficient for quantum key distribution protocols to optimize the number of secure bits that can be encoded per each emitted photon .our protocol provides a significant practical advantage over other existing protocols since it optimizes the number of bits that are encoded per each photon by using higher - dimensional channels . the security proof which follows is inspired by the structure of a proof given by goldenberg and vaidman for their own protocol [ 4 ] .we will prove that if eve tries to gain any information about the secret key from any photon sent from alice to bob , then either bob will receive the photon at the wrong time , or the photon will be in a detectably different state of superposition .eve can not add or subtract photons from the system without inducing detectable noise or loss .the only way that eve could attempt to avoid detection is by preserving both the phase and the timing of a photon s wavefunction .we will consider two times and , the first of which is before any component of the photon has left alice s domain , and the second of which is after all components of the photon have entered bob s domain .the most general operation that eve can perform on the state is that of a superoperator . in this case , we take to be one of the kraus components of eve s superoperator which takes a state at time to a state at time . 
for all four possible states that alice can send , the free time - evolution with no eavesdropper is \ ] ] \ ] ] \ ] ] \ ] ] since eve does not know whether bob will communicate to alice the which - branch or which - detector information , she must prepare for either scenario . in the casebob tells alice the which - branch information for a given state , eve must make the absolute squares of the amplitudes of the and components have the same value , and the absolute squares of the amplitudes of the and components also have the same value .additionally , the relative phase between the and components as well as the relative phase between the and components must remain unchanged . in the casebob shares the which - detector information for a given state , eve must make sure that the sum of the absolute squares of the amplitudes of the and components equals the sum of the absolute squares of the amplitudes of the and components .further , the relative phase between the and components as well as the relative phase between the and components must remain unchanged .so as to account for both scenarios , it follows that eve must make the magnitude of each of the final amplitudes equal to a constant .eve must also preserve the relative phases between any two components of the state .let be the state of an auxiliary system which eve uses to extract information .it follows that for , the general form of the evolution from time to is where is the state of eve s auxiliary system at time after it has been acted upon by in the case that alice sends the state .note that if it is impossible for eve to extract information . consider a photon at time in the state it follows that can be written as \end{aligned}\ ] ]this equation expresses that unless the photon could be measured in a state , , or .however , the photon can not be measured to be in any of these three states .the reason is that is an operator applied by eve that takes a photon in a state to a coherent superposition of the states , , and . buta photon in a state , , or must have exited eve s domain before the component in the state has entered it .thus , for eve to preserve causality , the coefficients of , and in eq .( 15 ) must be zero which immediately leads to eq .( 16 ) , or equivalently and so eve can not extract any information .this completes the proof .note that a security proof of the modified protocol presented at the end of section ii.a is a straightforward extension of the above proof .in the above explanation of the protocol , we assumed that the length of each loop in the setup was where is the distance between alice and bob . in practice ,having such long loops would decrease the key generation rate dramatically , and would also lead to photon loss due to the large distances that each component of the photon would have to travel .the solution to this problem was originally proposed by goldenberg and vaidman in the context of their own protocol [ 4 ] .let us say that the accuracy of the time of alice and bob s measurements is .therefore , if bob receives a photon that has been delayed by any amount of time greater than or equal to relative to alice s sending time , he and alice will be able to detect this delay .let us now say that the length of each loop is for some positive .it is easy to see that the terms , , or still can not appear in .therefore , the security proof will still hold for loop lengths of , and it is practical to incorporate this change into the protocol . 
in implementing the protocol experimentally ,it is not practical to have components of a photon travel along four separate paths over the large distances that may separate the domains of alice and bob .optical fibers tend to expand and contract with small temperature changes , which can change the relative phase between the components of the photon thereby affecting how they will eventually interfere with one another in bob s domain .atmospheric turbulence provides similar difficulties for free - space transmission of photons .both problems can seriously degrade the effectiveness of the protocol , and potentially reduce the secret key generation rate .the problem can be solved by using switches to couple the four paths to four temporal modes of one fiber or one free - space transmission line .the photon then travels in a single spatial mode from alice s domain to bob s domain . by analogy , imagine four trains traveling on four separate tracks . by using track switches to bring the trains onto a single track, one can align the trains so that they follow one another .the process can later be reversed , allowing the trains to be switched back onto four separate tracks .similarly , the four components of the photon traveling along the four paths are coupled to one spatial mode using switches s , s , and s in alice s domain ( figure 2 ) .the photons then travel one after the other in a single fiber or free - space path until reaching bob s domain where switches s , s , and s couple their temporal modes back into four spatial modes .+ + fig .2 . coupling spatial modes to temporal modes .recall that the protocol assumes that the path of the photons between the domains of alice and bob is a straight line . in satellite communication ,this assumption is correct since the communication is line - of - sight .however , in fiber optic communications , fibers are never laid down in straight lines .if the path of the photons is not a straight line , an eavesdropper could create a shorter straight - line path between alice and bob thereby reducing the travel time of the photons . as a result, the eavesdropper can allow herself extra time in which to measure and delay alice s photons and still send them on to bob such that he will receive the photons at the expected time .therefore , using a suboptimal path could mask the presence of an eavesdropper .the protocol can be modified to address the problem of a suboptimal path .for such a modification , alice and bob must take into account the difference in length between their suboptimal path and an optimal straight - line path between them .the difference in distance between the two paths will be designated as . to make the protocol secure and account for this extra distance , each loop in the setupmust be increased in length by at least . 
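The accounting described above can be written out in a couple of helper functions. The baseline term c times the timing accuracy, and the specific numbers below, are assumptions for illustration; the essential point from the text is only that each loop must grow by at least the difference between the deployed path and the straight-line path between Alice and Bob.

C = 299_792_458.0  # vacuum speed of light, m/s: conservative bound on how fast Eve can relay signals

def extra_loop_length(deployed_path_m, straight_line_m):
    """Extra length each loop must gain when the channel is not a straight line."""
    assert deployed_path_m >= straight_line_m
    return deployed_path_m - straight_line_m

def minimum_loop_length(timing_accuracy_s, deployed_path_m, straight_line_m):
    """Assumed form: a baseline set by the timing resolution (c * accuracy) plus
    the straight-line shortfall of the deployed channel."""
    return C * timing_accuracy_s + extra_loop_length(deployed_path_m, straight_line_m)

# placeholders: 25 km of fiber laid between endpoints 20 km apart, 1 ns timing accuracy
print(minimum_loop_length(1e-9, 25_000.0, 20_000.0))  # ~5000.3 m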
however , this change need not decrease the efficiency of the protocol since a setup in which the temporal modes are coupled to the spatial mode of a single fiber could be designed such that components of one photon are spatially interlaced with the components of other photons .it is important to note that alice and bob s switches and setup of interferometers have to be designed to function in accordance with the way that the photon components are interlaced .in our information age , large amounts of data need to be transmitted securely and rapidly for commercial and military purposes .the increasing power of computers makes conventional non - quantum cryptography protocols susceptible to attack .quantum cryptography offers benefits over standard methods of cryptography because it provides fundamental security .the protocol presented in this paper is a new method for relativistic orthogonal states quantum key distribution which provides security and achieves a key generation rate higher than other known qkd protocols .the efficiency of the proposed protocol makes it an attractive alternative to existing qkd protocols for both secure fiber optic as well as secure satellite communication .this research was funded by the undergraduate research opportunities program at the massachusetts institute of technology .the authors wish to thank joseph altepeter and steven homer for their useful discussions and support .000 c. h. bennett and g. brassard ( 1984 ) , _ quantum cryptography : public key distribution and coin tossing _ , proceedings of ieee international conference on computers systems and signal processing , pp .175 - 179 .k. yamazaki , t .matsui and o. hirota , ( 1997 ) , _ properties of quantum cryptography based on orthogonal states : goldenberg and vaidman scheme _ , quantum communication , computing , and measurement , pp .139 - 146 .
We introduce a new relativistic orthogonal-states quantum key distribution protocol which leverages the properties of both quantum mechanics and special relativity to securely encode multiple bits onto the spatio-temporal modes of a single photon. If the protocol is implemented using a single-photon source, it can have a key generation rate faster than the repetition rate of the source, enabling faster secure communication than is possible with existing protocols. Further, we provide a proof that the protocol is secure and give a method of implementing the protocol using line-of-sight and fiber-optic channels.
the study of subblock - constrained codes has recently gained attention as they are suitable candidates for varied applications such as simultaneous energy and information transfer , powerline communications , and design of low - cost authentication methods . a special class of subblock - constrained codesare codes where each codeword is partitioned into equal sized subblocks , and every subblock has the same fixed composition .such codes were called _ constant subblock - composition codes _ ( csccs ) in , and were labeled as multiply constant - weight codes ( mcwc ) in . _ subblock energy - constrained codes _ ( seccs ) were proposed in for providing real - time energy and information transfer from a powered transmitter to an energy harvesting receiver . for binary alphabet , seccsare characterized by the property that the weight of every subblock exceeds a given threshold .the cscc and secc capacities , and computable bounds , were presented in for discrete memoryless channels . in this paper , we study bounds on the size and asymptotic rate for binary csccs and seccs with given error correction capability , i.e. , minimum distance of the code .the input alphabet is denoted which comprises of symbols .an -length , -ary _ code _ over is a subset of .the elements of are called _ codewords _ and is said to have _ distance _ if the _ hamming distance _ between any two distinct codewords is at least . a -ary code of length and distance is called an -code , and the largest size of an -code is denoted by . for binary alphabet ( ) , an -code is just called an -code , and the largest size for this code is simply denoted . a _ constant weight code _ ( cwc ) with parameter is a binary code where each codeword has weight exactly .we denote a cwc with weight parameter , blocklength , and distance by -cwc , and denote its maximum possible size by . a _ heavy weight code _ ( hwc ) with parameter is a binary code where each codeword has weight _ at least _we denote a hwc with weight parameter , blocklength , and distance by -hwc , and denote its maximum possible size by . since an -cwc is an -hwc , we have that .a subblock - constrained code is a code where each codeword is divided into subblocks of equal length , and each subblock satisfies a fixed set of constraints . for a subblock - constrained code , we denote the codeword length by , the subblock length by , and and the number of subblocks in a codeword by . for the binary alphabet , a cscc is characterized by the property that each subblock in every codeword has the same _ weight _ , i.e. each subblock has the same number of ones . a binary cscc with codeword length , subblock length , distance , and weight per subblock is called an -cscc .we denote the maximum possible size of -cscc by . since an -cscc is an -cwc ,we have that . for providing regular energy content in a codeword for the application of simultaneous energy and information transfer from a powered transmitter to an energy harvesting receiver , the use of csccs was proposed in .when _ on - off keying _ is employed , with bit-1 ( bit-0 ) represented by the presence ( absence ) of a high energy signal , regular energy content in a cscc codeword can be ensured by appropriately choosing the weight per subblock .a natural extension of binary csccs are binary seccs , which allow the weight of each subblock to exceed , thereby ensuring that the energy content within every subblock duration is sufficient . 
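To make the definitions and the chain of size inequalities just stated tangible, here is a small brute-force sketch in Python. The parameters are tiny and chosen only for illustration, and the greedy construction merely lower-bounds the optimal code sizes; it is not one of the bounds derived in the paper.

from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def subblock_weights(w, m, L):
    return [sum(w[i * L:(i + 1) * L]) for i in range(m)]

def greedy_code(space, d):
    """Greedy (Gilbert-style) code: keep a word if it is at distance >= d from
    every word already kept; its size lower-bounds the optimal code size."""
    code = []
    for w in space:
        if all(hamming(w, c) >= d for c in code):
            code.append(w)
    return code

m, L, ws, d = 2, 4, 2, 3          # tiny illustrative parameters
all_words = list(product((0, 1), repeat=m * L))
cwc  = [w for w in all_words if sum(w) == m * ws]
hwc  = [w for w in all_words if sum(w) >= m * ws]
cscc = [w for w in all_words if all(x == ws for x in subblock_weights(w, m, L))]
secc = [w for w in all_words if all(x >= ws for x in subblock_weights(w, m, L))]

print("space sizes (CWC, HWC, CSCC, SECC):", len(cwc), len(hwc), len(cscc), len(secc))
for name, space in [("CWC", cwc), ("HWC", hwc), ("CSCC", cscc), ("SECC", secc)]:
    print(name, "greedy code size:", len(greedy_code(space, d)))
# every CSCC word is both a CWC word and a SECC word, and every SECC word is a
# HWC word; these containments are what drive the chain of inequalities above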
a binary secc with codeword length , subblock length , distance , and weight at least per subblock is called an -secc .we denote the maximum possible size of an -secc by .since an -secc is an -hwc , we have that .also , since an -cscc is an -secc , we have that .the relation among code sizes is summarized below . for all , and , we have {90}{}}}&&{\mathbin{\rotatebox[origin = c]{90}{}}}\\ h(ml , d , mw_s ) & \ge & a(ml , d , mw_s ) \end{array}\ ] ] we also analyze bounds on the rate in the asymptotic setting where the number of subblocks tends to infinity , scales linearly with , but and are fixed . in the following , the base for is assumed to be 2 .formally , for fixed , the asymptotic rates for csccs and seccs with fixed subblock length , subblock weight parameter , number of subblocks in a codeword , and distance scaling as are defined as these rate can be compared with related exponents : the relation between asymptotic rates can be obtained by using the relation among code sizes in , and the above rate definitions . for all and ,we have {90}{ } } } & & \text{\scriptsize ( d)}~{\mathbin{\rotatebox[origin = c]{90}{}}}\\ \eta(\delta , w_s / l ) & \overset{\text{(a)}}{\ge } & \alpha(\delta , w_s / l ) \end{array}\ ] ] we summarize our notation on code size and asymptotic rates for cwcs , hwcs , csccs and seccs in table [ table : variables ] . .95|c|l|& + & subblock length + & number of subblocks in a codeword + & codeword length ( ) + & minimum distance of the code + & relative distance of the code ( ) + & weight per subblock + & weight per codeword ( ) + & fraction of ones in a codeword ( ) + & binary entropy function + -code & general -ary code + & maximum size of -code + -code & general _ binary _ code + & maximum size of -code + & asymptotic rate of -code + -cwc & constant weight code ( each codeword has weight ) + & maximum size of -cwc + & asymptotic rate of -cwc + & lower bound on using gilbert varshamov bound for -cwc + & upper bound on using sphere packing bound for -cwc + -hwc & heavy weight code ( each codeword has weight _ at least _ ) + & maximum size of -hwc + & asymptotic rate of -hwc + -cscc & binary constant subblock - composition code ( each subblock has weight ) + & maximum size of -cscc + & space of all cscc words + & asymptotic rate of -cscc + & lower bound on using gilbert varshamov bound for -cscc + & upper bound on using sphere - packing bound for -cscc + -secc & binary subblock energy - constrained code ( each subblock has weight _ at least _ ) + & maximum size of -seccc + & space of all secc words + & asymptotic rate of -secc + & lower bound on using gilbert varshamov bound for -secc + & upper bound on using sphere - packing bound for -secc + & asymptotic rate gap between cwc and cscc , + & lower bound on the asymptotic rate gap between cwc and cscc + & asymptotic rate gap between hwc and secc , + & lower bound on the asymptotic rate gap between hwc and secc + & asymptotic rate gap between secc and cscc , + & lower bound on the asymptotic rate gap between secc and cscc + among the codes discussed above , although cwcs have been widely studied , the exact characterization of , for , has remained elusive . a good upper bound for given in , by using a linear programming bound for the cwc code size .the class of hwcs was introduced by cohen _ , motivated by certain asynchronous communication problems .the asymptotic rates for hwcs was later established by bachoc __ .[ thm : bachoc ] let . 
then if view of the above theorem , the inequality ( a ) in is in fact an equality for .chee _ et al . _ introduced the class of csccs and provided rudimentary bounds for .later , constructions of csccs were proposed by various authors . the asymptotic rate for csccswas also studied in .however , an inconsistent asymptotic rate definition in led to an erroneous claim regarding the cscc rate ( see ( * ? ? ?6.1 ) ) . in this paper, we also provide a correct statement for the cscc rate in the scenario where the subblock length tends to infinity via proposition [ prop : chee ] in section [ sec : rates ] .seccs were proposed in , owing to their natural application in real - time simultaneous energy and information transfer .as shown in section [ sec : secc_codesize ] , the secc space , comprising of words where each subblock has weight exceeding a given threshold , has an interesting property that different balls of same radius may have different sizes .the lower bound on the code size for such spaces , where balls of same radius may have different sizes , was studied in , where a _ generalized gilbert - varshamov bound _ was presented .the _ generalized sphere - packing bound _ , providing an upper bound on the code size in such spaces , has been recently presented in , using graph - based techniques .the contributions of this paper are as follows : 1 . by studying the space of cscc and secc codewords ,we compute both upper and lower bounds for the optimal cscc code size and the optimal secc code size in section [ sec : bounds ] .we analyze the limiting behavior of ball sizes for these spaces in high dimensions , to derive both upper and lower bounds on the asymptotic rates for cscc and secc in section [ sec : rates ] .3 . for fixed and , we demonstrate the existence of an such that the inequalities ( b ) , ( c ) , and ( d ) in are _ strict _ for all ( refer section [ sec : penalty ] ) .this result implies the following : ( i ) relative to codeword - based weight constraint for cwcs ( resp .hwcs ) , the stricter subblock - based weight constraint for csccs ( resp .seccs ) , lead to a rate penalty .( ii ) relative to csccs , higher rates are provided by seccs due to greater flexibility in choosing bits within each subblock ( in contrast to theorem [ thm : bachoc ] ) .we quantify the rate penalty due to subblock - based constraints in section [ sec : numerical ] , by numerically evaluating the corresponding rate bounds .we also provide a correction to a result by chee _ , on the asymptotic cscc rate in the scenario where the subblock length tends to infinity ( see proposition [ prop : chee ] in section [ sec : rates ] ) .we derive novel bounds for and .while bounds for the former were also discussed in , those results are insufficient to provide good bounds on the asymptotic rates . among other bounds ,we derive the gilbert - varshamov ( gv ) bound and the sphere - packing bound for both csccs and seccs in this section , and their respective asymptotic versions in section [ sec : rates ] . for an -cscc ,it is easy to see from symmetry , via complementing bits in codewords , that we have the relation let denote the space of all binary words comprising of subblocks , each subblock having length , with weight per subblock . 
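before turning to the ball-size computations below, it is worth recording the generic templates being specialized in this section, since the displayed bounds themselves were lost in extraction. writing M(T, d) for the largest code with minimum hamming distance d inside a finite space T (this symbol is mine), the greedy gilbert-varshamov argument and the packing argument give

\[
  M(T,d) \;\ge\; \frac{|T|}{\displaystyle\max_{x\in T}\,\bigl|B(x,\,d-1)\cap T\bigr|},
  \qquad
  M(T,d) \;\le\; \frac{|T|}{\displaystyle\min_{x\in T}\,\bigl|B(x,\,\lfloor (d-1)/2\rfloor)\cap T\bigr|}.
\]

the generalized gilbert-varshamov bound cited above strengthens the first inequality, in its usual form, by replacing the maximum ball size with the average ball size over T, and the generalized sphere-packing bound sharpens the second beyond the crude minimum-ball estimate shown here; the cscc and secc bounds derived in this section instantiate these templates with T taken as the respective constrained spaces.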
for , we define a ball centered at and having radius as the following lemma shows that the size of the cscc ball , , is independent of choice of .we will see later in sec .[ sec : secc_codesize ] that this is not true for the space comprising of secc words .[ lemma : cscc_balls_equal ] if and are two words in , then . for ,let } ] ) denote the subblock of ( resp . ) . as } ] have constant weight , there exists a permutation on letters such that } = \pi_i(\mathbf{x}_{[i]}) ] be the subblock of .then the distance of } ] is when ( and 0 otherwise ) .now , if , and distance between subblocks of and is , then if and only if .hence , the size of cscc ball of radius is given by the denominator in .the following proposition provides the sphere - packing bound for csccs .if and , then [ prop : cscc_sp ] the claim follows from the standard sphere - packing argument that for any -cscc , the balls of radius around codewords should be non - intersecting , and the fact that the denominator in is equal to .let denote the space of all binary words comprising of subblocks , each subblock having length , with weight per subblock _ at least _ . for , we define a ball centered at and having radius as unfortunately , in contrast to csccs , the size of depends on .take for example , , , and .we have that , while .we denote the smallest and the average ball size in the secc space as follows : latexmath:[\ ] ] where follows using ( as the constraint is stricter than the constraint ) , and in is defined as .note that asymptotically we get the following limits using , , and , we have the theorem is proved by combining proposition [ prop : cscc_sp ] , , and ._ remark _ : for , we have , which also follows from and then applying the standard sphere - packing bound for unconstrained binary codes . _ remark _ : for , simplifies to recall the definitions of and given by and .we have the following inequality the gap denotes the rate penalty on hwc due to the additional constraint on sufficient weight within every _subblock duration_. the asymptotic rates of hwcs were studied in where theorem [ thm : bachoc ] was established .therefore , it follows that for we have in the following , we present the asymptotic gv bound and the sphere - packing bound on .[ prop : secc_gv_rate ] we have where a simple upper bound on the average secc ball size of radius is given by using proposition [ prop : secc_gv_codesize ] and , we get the proposition now follows by combining and .the above proposition presents a lower bound on .next , in theorem [ thm : secc_sp_rate ] we present the sphere - packing upper bound on for relatively small values of .we will use the following lemma towards proving this theorem .[ lemma : secc_subblocks_d12 ] let be a binary vector of length whose weight satisfies . then the number of binary vectors with length , weight at least , which are at a distance of either 1 or 2 from is lower bounded by .let ( resp . ) be the number of length vectors of weight at least which are at a distance 1 ( resp .2 ) from .we consider three different cases : 1 . : in this case .if , then , else . : in this case .if , then , else . : in this scenario , and . for all the above three cases, it can easily be verified that .[ thm : secc_sp_rate ] for , we have where the theorem will be proved by using prop .[ prop : secc_sp_codesize ] and providing a lower bound on where and distance scales as .we define and note that the constraint implies that . 
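as a concrete check of the center-dependence of secc balls noted earlier in this section (the numerical example given there did not survive extraction), a brute-force computation for small parameters shows the contrast with cscc balls; the parameter values m = 2, L = 3, w_s = 1 and radius r = 2 below are illustrative choices of mine.

    from itertools import product

    def words(m, L, pred):
        # all binary words made of m subblocks of length L whose subblock weights satisfy pred
        blocks = [b for b in product((0, 1), repeat=L) if pred(sum(b))]
        return [sum(w, ()) for w in product(blocks, repeat=m)]

    def ball_size(space, center, r):
        # number of words of the space within hamming distance r of the center
        return sum(1 for x in space if sum(a != b for a, b in zip(x, center)) <= r)

    m, L, w_s, r = 2, 3, 1, 2
    cscc_space = words(m, L, lambda w: w == w_s)
    secc_space = words(m, L, lambda w: w >= w_s)
    print({ball_size(cscc_space, c, r) for c in cscc_space})  # a single value: center-independent
    print({ball_size(secc_space, c, r) for c in secc_space})  # more than one value: center-dependent

for these parameters the cscc set contains exactly one ball size, as the permutation argument in the lemma above predicts, while the secc set contains several distinct sizes, which is why the smallest and average secc ball sizes have to be tracked separately in the bounds of this section.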
for a given , let } ] .let be the set of vectors which satisfy the following conditions : a. for every , exactly subblocks of differ from corresponding subblocks of .b. if } \neq \mathbf{x}_{[j]} ] . from the above conditions, it follows that if , then , and hence with ^{\tilde{m } } , \ ] ] where follows from lemma [ lemma : secc_subblocks_d12 ] . because the above inequality holds for all , we have ^{\tilde{m } } .\label{eq : secc_minballsizebound}\ ] ] now , and hence the claim is proved by combining , prop .[ prop : secc_sp_codesize ] , , and . for the casewhere and , the asymptotic sphere - packing bound for seccs reduces to this section , we quantify the penalty in rate due to imposition of subblock constraints , relative to the application of corresponding constraints per codeword . here, we use the notation ^+$ ] to imply . the rate penalty due to constant weight per _ subblock _ , relative to the constraint requiring constant weight per _ codeword _ , is quantified by , defined as a lower bound to this rate gap is given by ^+ , \label{eq : g_alpha_gamma_def}\ ] ] where is defined in and with denoting the asymptotic gv lower bound for cwcs .the sphere - packing upper bound on the asymptotic rate for cwcs is given by , defined as if and , then using we have the strict inequality for . for relatively large values of the subblock length , , the following theorem shows that rate penalty is strictly positive when is sufficiently small .[ thm : gap - cwc - cscc - positive ] for even with , we have the strict inequality for , where is the smallest positive root of defined as using , , and , we have when .we observe from that is a continuous function of with further , when , we have where and follow from5.8 ) . now using , , and the intermediate value theorem , it follows that the equation has a solution in the interval .the theorem now follows by denoting the smallest positive root of by .the following proposition addresses the converse question on identifying an interval for when the rate gap between cwcs and csccs is provably zero .[ prop : cwc_cscc_rate0 ] the rate gap between cwcs and csccs , , is identically zero when . follows from and . in , the gap between cwc capacity and cscc capacity on noisy binary input channels was upper bounded by the rate penalty term , , defined as where .further , it was shown in that the actual capacity gap is equal to for a noiseless channel .the following proposition shows that tends to as tends to 0 .[ prop : cwc_cscc_gap_eq_rlp ] for , we have from we have , while using we obtain the limit , and hence the claim follows from definitions and . the lower bound on the rate gap between cwcs and csccs , , is tight when .an upper bound on is given by .using , , and , we observe that this upper bound on the rate gap also tends to as tends to 0 .the proof is complete by combining this observation with proposition [ prop : cwc_cscc_gap_eq_rlp ] . 
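for orientation, and since the binary entropy function listed in the notation table lost its definition in extraction, recall

\[
  H(x) \;=\; -x\log_2 x \;-\; (1-x)\log_2(1-x), \qquad 0 \le x \le 1,
\]

and the classical benchmarks for unconstrained binary codes of relative distance \(\delta\),

\[
  1 - H(\delta) \;\le\; \limsup_{n\to\infty}\tfrac{1}{n}\log_2 A(n,\delta n) \;\le\; 1 - H(\delta/2),
  \qquad 0 < \delta < \tfrac12,
\]

the lower bound being the asymptotic gilbert-varshamov bound and the upper bound the asymptotic sphere-packing (hamming) bound. the cwc-, hwc-, cscc- and secc-specific rate bounds appearing in the stripped displays of this section are refinements of these benchmarks to the respective constrained spaces.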
in seccs ,the fraction of ones in every subblock is at least , and hence the fraction of ones in the entire codeword is also at least .relative to the constraint requiring at least fraction of bits to be 1 for all _ codewords _ , the rate penalty due to the constraint requiring minimum weight per _ subblock _ is quantified by , defined as for , using theorem [ thm : bachoc ] , we note that can equivalently be expressed as .thus , a lower bound for , for , is given by ^+ , \label{eq : g_eta_sigma_def}\ ] ] where and are defined in and , respectively .when , we have , and in this case , the corresponding rate gap lower bound is defined as ^+ .\label{eq : g_eta_sigma_def2}\ ] ] the following theorem shows that rate gap between hwcs and seccs is strictly positive when is sufficiently small .[ thm : gap - hwc - secc - positive ] for even with , we have the strict inequality for , where is the smallest positive root of defined as using , , and , we have for .we observe from that is a continuous function of with as , for we have where follow using ( * ? ? ?5.8 ) . now from and , it follows that the equation has a solution in the interval .the theorem now follows by denoting the smallest positive root of by . when and , it can be verified using that for .theorem [ thm : gap - hwc - secc - positive ] considers the case where . using a similar argument, it can be shown that in a general setting where , the rate gap between hwcs and seccs is strictly positive for sufficiently small .the following proposition addresses the converse question on identifying an interval for when this gap is provably zero .[ prop : hwc_secc_rate0 ] for , the rate gap between hwcs and seccs , is identically zero when , while for , this gap is zero when .the claim for follows from and the asymptotic plotkin bound , while the claim for follows from and .the lower bound on the rate gap between hwcs and seccs , , is tight when .for , from we have that now , from and the relation , an upper bound on is given by . for , from and, we note that this upper bound tends to the right hand side of as .this proves the claim for . for , from we have that for , an upper bound on the rate gap is given by ( using ) , and this upper bound tends to the right hand side of as .the seccs , relative to csccs , provide the flexibility of allowing different subblocks to have different weights . in this subsection , we show that this flexibility leads to an improvement in asymptotic rate when the relative distance of the code is sufficiently small .the gap between secc rate and cscc rate is quantified by , defined as a lower bounded to this rate gap is given by ^+ , \label{eq : g_sigma_gamma_def}\ ] ] where and are given by and , respectively .the following theorem shows that is strictly positive when is small .[ thm : gap - secc - cscc ] for even with , we have the strict inequality for , where is the smallest positive root of defined as using , , and , we have for .from we note that is a continuous function of with further , comparing and , we observe that , in particular , for we have where the last inequality follows from . from and it follows that the equation has a solution in the interval .the proof is complete be denoting the smallest positive root of by . for the casewhen and , we have ^+ , \label{eq : g_sigma_gamma_l2}\ ] ] and is strictly positive for . 
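the threshold relative distances appearing in theorems [thm:gap-cwc-cscc-positive], [thm:gap-hwc-secc-positive] and [thm:gap-secc-cscc] are characterized as smallest positive roots of continuous gap functions whose explicit expressions were lost in extraction. for the numerical plots discussed in the next section, such a root can be located by a grid scan followed by bisection; the argument g below is a placeholder standing in for whichever gap function is being evaluated, and the tolerances are arbitrary choices of mine.

    import numpy as np

    def smallest_positive_root(g, delta_max, grid=10000, tol=1e-12):
        # g is assumed continuous on (0, delta_max] and positive near 0
        ds = np.linspace(delta_max / grid, delta_max, grid)
        vals = np.array([g(d) for d in ds])
        below = np.flatnonzero(vals <= 0)
        if below.size == 0:
            return None                  # no sign change located on the grid
        i = below[0]
        lo = 0.0 if i == 0 else ds[i - 1]
        hi = ds[i]
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
        return 0.5 * (lo + hi)

    # toy check with g(d) = 0.1 - d, whose smallest positive root is 0.1:
    # smallest_positive_root(lambda d: 0.1 - d, 0.5)  ->  approximately 0.1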
from proposition[ prop : secc_gv_rate ] and theorem [ thm : secc_sp_rate ] , note that for , we have , and hence it follows from definitions , , and that although theorem [ thm : gap - secc - cscc ] only considers the case , a similar argument can be applied to show that the rate gap between seccs and csccs is strictly positive in a general setting where , provided is sufficiently small .the following converse , providing an interval for which results in zero rate gap , is obtained by using an argument similar to that in proposition [ prop : hwc_secc_rate0 ] .[ prop : secc_cscc_rate0 ] for , the rate gap between seccs and csccs , , is identically zero when , while for , this gap is zero when .the following proposition establishes the tightness of when tends to 0 .the lower bound on the rate gap between seccs and csccs , , is tight when . from and, we have that an upper bound on is given by . from , , and , we note that this upper bound tends to the right hand side of as .in this section , we provide numerical bounds on rate penalties due to weight constraint per subblock , relative to imposing similar constraint per codeword . versus subblock length , .,scaledwidth=55.0% ] as a function of subblock weight , .,scaledwidth=55.0% ]is strictly positive.,scaledwidth=55.0% ] fig .[ fig : g_cwc_cscc_versus_l ] plots the lower bound on the rate gap between cwcs and csccs , , as a function of the subblock length .the upper bound on the gap between cwc capacity and cscc capacity on noisy binary input channels for , given by ( see ) , is also plotted in red . as suggested by proposition [ prop : cwc_cscc_gap_eq_rlp ], the figure shows that tends to as gets close to zero . for a fixed value of , note that is independent of .thus , for a given , the decrease in with increasing is due to an increase in cscc rate .this is intuitively expected , because an increase in allows for greater flexibility in the choice of bits within every subblock .further , from proposition [ prop : chee ] , it follows that as .[ fig : g_cwc_cscc_versus_omega ] plots when the subblock length is fixed at , and varies from to .note that and ( see and , respectively ) , and thus .note that figs .[ fig : g_cwc_cscc_versus_l ] and [ fig : g_cwc_cscc_versus_omega ] illustrate that decreases with .[ fig : delta_l_tilde ] depicts the region where the gap between cwc rate and cscc rate is provably strictly positive .note that is the smallest value of for which the lower bound is zero , when is fixed , and ( see theorem [ thm : gap - cwc - cscc - positive ] ) .the figure shows that decreases with , and from proposition [ prop : chee ] it follows that when .moreover , using proposition [ prop : cwc_cscc_rate0 ] , it is seen that the actual rate gap is provably zero for . as a function of subblock length , .,scaledwidth=55.0% ] as a function of .,scaledwidth=55.0% ] is strictly positive.,scaledwidth=55.0% ] fig .[ fig : g_hwc_secc_versus_l ] plots , lower bound for the rate gap between hwcs and seccs , as a function of , with . for a given ,it is seen from the figure that decreases with .note that for , using proposition [ prop : bachocchee ] , we have as .[ fig : g_hwc_secc_versus_omega ] plots versus , for fixed .the shaded area in fig . 
[ fig : delta_l_hat ] depicts the region where the rate gap between hwc and secc is provably strictly positive .here , is the smallest value of for which the lower bound is zero , when is fixed , and ( see theorem [ thm : gap - hwc - secc - positive ] ) .the figure shows that decreases with , and from proposition [ prop : bachocchee ] it follows that when .moreover , using proposition [ prop : hwc_secc_rate0 ] , it is seen that the actual rate gap is provably zero for . versus subblock length , .,scaledwidth=55.0% ] as a function of .,scaledwidth=55.0% ]is strictly positive.,scaledwidth=55.0% ] relative to csccs , the seccs allow for greater flexibility in choice of bits within each subblock , by allowing the subblock weight to vary , provided it exceeds a certain threshold .this flexibility results in higher rate for seccs and fig .[ fig : g_secc_cscc_versus_l ] plots , lower bound on the rate gap between seccs and csccs .the figure shows that for a given , the rate gap bound decreases with , and we have as .the last assertion follows by combining theorem [ thm : bachoc ] , proposition [ prop : chee ] , and the fact that . additionally , comparing figs .[ fig : g_cwc_cscc_versus_l ] , [ fig : g_hwc_secc_versus_l ] , and [ fig : g_secc_cscc_versus_l ] , we observe that the inequality in is satisfied . fig .[ fig : g_secc_cscc_versus_omega ] plots versus , for fixed , and . on comparing figs .[ fig : g_cwc_cscc_versus_omega ] , [ fig : g_hwc_secc_versus_omega ] , and [ fig : g_secc_cscc_versus_omega ] , it is observed that lower bounds on respective rate gaps satisfy .[ fig : delta_l_grave ] depicts the region where the rate gap between secc and cscc is provably strictly positive .here , is the smallest value of for which the lower bound is zero , when is fixed , and ( see theorem [ thm : gap - secc - cscc ] ) .[ fig : delta_l_grave ] shows that decreases with , and from thm .[ thm : bachoc ] , prop .[ prop : chee ] , and prop . [ prop : bachocchee ] it follows that when .moreover , using proposition [ prop : secc_cscc_rate0 ] , we see that the true gap is provably zero for .we derived upper and lower bounds for the sizes of csccs and seccs . for a fixed subblock length and weight parameter , we demonstrated the existence of some , , and such that the gaps furthermore , we provide estimates on , , and via theorems [ thm : gap - cwc - cscc - positive ] , [ thm : gap - hwc - secc - positive ] , and [ thm : gap - secc - cscc ] .these gaps then reflect the rate penalties due to imposition of subblock constraints , relative to the application of corresponding constraints per codeword .the converse problem , on identifying an interval for where the respective rate penalties are provably zero , is addressed via propositions [ prop : cwc_cscc_rate0 ] , [ prop : hwc_secc_rate0 ] , and [ prop : secc_cscc_rate0 ] .an interesting but unsolved problem in this regard is to characterize the smallest beyond which the respective rate penalties are zero .we can get some insight from the numerical computations in , which indicate that there is a nonzero gap between cscc and cwc capacities and a nonzero gap between cscc and secc capacities .this suggests that , for a fixed subblock length , the rate penalties are zero if and only if the respective asymptotic rates themselves are zero .however , this remains an open problem .a. tandon , m. motani , and l. r. varshney , `` subblock - constrained codes for real - time simultaneous energy and information transfer , '' _ ieee trans .inf . theory _ , vol .62 , no . 
7, pp. 4212-4227, jul. 2016.

y. m. chee, z. cherif, j.-l. danger, s. guilley, h. m. kiah, j.-l. kim, p. sole, and x. zhang, "multiply constant-weight codes and the reliability of loop physically unclonable functions," _ieee trans. inf. theory_, vol. 60, no. 11, pp. 7026-7034, nov. 2014.

r. mceliece, e. rodemich, h. rumsey, and l. welch, "new upper bounds on the rate of a code via the delsarte-macwilliams inequalities," _ieee trans. inf. theory_, vol. 23, no. 2, pp. 157-166, mar. 1977.
the study of subblock-constrained codes has recently gained attention due to their application in diverse fields. we present bounds on the size and asymptotic rate for two classes of subblock-constrained codes. the first class is binary _constant subblock-composition codes_ (csccs), where each codeword is partitioned into equal sized subblocks, and every subblock has the same fixed weight. the second class is binary _subblock energy-constrained codes_ (seccs), where the weight of every subblock exceeds a given threshold. we present novel upper and lower bounds on the code sizes and asymptotic rates for binary csccs and seccs. for a fixed subblock length and small relative distance, we show that the asymptotic rate for csccs (resp. seccs) is strictly lower than the corresponding rate for constant weight codes (cwcs) (resp. heavy weight codes (hwcs)). further, for codes with high weight and low relative distance, we show that the asymptotic rate for csccs is strictly lower than that for seccs, in contrast to the fact that the asymptotic rate for cwcs equals that for hwcs. we also provide a correction to an earlier result by chee et al. (2014) on the asymptotic cscc rate. additionally, we present several numerical examples comparing the rates for csccs and seccs with those for constant weight codes and heavy weight codes.
a sparse system is defined when impulse response contains only a small fraction of large coefficients compared to its ambient dimension .sparse systems widely exist in many applications , such as digital tv transmission channel and the echo path .generally , they can be further classified into two categories : exact sparse system ( ess ) and near sparse system ( nss ) .if most coefficients of the impulse response are exactly zero , it is defined as an exact sparse system ( fig .[ sparsec ] .a ) ; instead , if most of the coefficients are close ( not equal ) to zero , it is a near sparse system ( fig . [ sparsec ] .b and c ) . otherwise , a system is non - sparse if its most taps have large values ( fig .[ sparsec ] .d ) . for the simplicity of theoretical analysis ,sparse systems are usually simplified into exact sparse . however , in real applications most systems are near sparse due to the ineradicable white noise .therefore , it is necessary to investigate on near sparse system modeling and identification . among many adaptive filtering algorithms for system identification , least mean square ( lms ) algorithm , which was proposed by widrow and hoff in 60s of the past century , is the most attractive one for its simplicity , robustness and low computation cost .however , without utilizing the sparse characteristic , it shows no advantage on sparse system identification . in the past few decades, some modified lms algorithms for sparse systems are proposed .m - max normalized lms ( mmax - nlms ) and sequential partial update lms ( s - lms ) reduces the computational complexity and steady - state misalignment by partially updating the filter coefficients . proportionate lms ( plms ) and its improved ones such as ipnlms and iipnlms accelerate the convergence rate by updating each coefficient iteratively with different step size proportional to the magnitude of filter coefficient .stochastic tap - normalized lms ( st - nlms ) improves the performance on specific sparse system identification where large coefficients appear in clusters .it locates and tracks the non - zero coefficients by adjusting the filter length dynamically . however , its convergence performance largely depends on the span of clusters .if the span is too long or the system has multiple clusters , it shows no advantage compared with standard lms algorithm . and respectively .( d ) is a non - sparse system generated by gaussian distribution ., width=384 ] more recently , inspired by the research of cs reconstruction problem , a class of novel adaptive algorithms for sparse system identification have emerged based on the ( ) norm constraint .especially , zero - point attraction lms ( za - lms ) algorithm significantly improves the performance on exact sparse system identification by introducing a norm constraint on the cost function of standard lms , which exerts the same zero - point attraction force on all coefficients .however , for near sparse systems identification , the zero - point attractor can be a double - edged sword .though it increases the convergence rate because by the norm constraint , it also produces larger steady - state misalignment as it forces all coefficients to exact zero .thus , it possesses less advantage against standard lms algorithm when the system is near sparse . 
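for reference, the two updates discussed above can be written in a few lines of numpy; the symbol names (mu for the step size, rho for the zero-point attraction strength) are mine, and the za-lms form follows the standard statement of that algorithm, with the gradient step supplemented by a sign-based attractor acting on every coefficient.

    import numpy as np

    def lms_step(w, x, d, mu):
        # standard lms: w <- w + mu * e * x, with e = d - w^T x
        e = d - w @ x
        return w + mu * e * x, e

    def za_lms_step(w, x, d, mu, rho):
        # za-lms adds a zero-point attractor -rho*sgn(w) on every coefficient
        e = d - w @ x
        return w + mu * e * x - rho * np.sign(w), e

both functions return the updated coefficient vector together with the instantaneous error, so an identification run is simply a loop over input samples.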
in this paper, firstly generalized gaussian distribution ( ggd) is introduced to model the near sparse system .then two improvements on the za - lms algorithm is proposed .above all , by adding a window on the norm constraint , the steady - state misalignment is reduced without increasing the computational complexity .furthermore , the zero - point attractor is weighted to adjust the zero - point attraction by utilizing the estimation error . by combining the two improvements , the dynamic windowing za - lms ( dwza - lms ) algorithm is proposed which shows improved performance on the near sparse system identification .the rest of the paper is organized as follows : in section ii , za - lms algorithm based on norm constraint is reviewed , and the near sparse system is modeled .the new algorithm is proposed in section iii . in section iv, the mean square convergence performance of dwza - lms is analyzed .the performances of the new algorithm and other improved lms algorithms for sparse system identification are compared by simulation in section v , where the effectiveness of our analysis is verified as well .finally , section vi concludes the paper .let be a sample of the desired output signal where ^{\rm t} ] denotes the input vector , and is the observation noise assumed to be independent of .the estimation error between desired and output signal is defined as where are the filter coefficients and ^{\rm t} ] is the component - wise partial sign function , defined as =-f_{\rm wza}(t)=\left\ { \begin{array}{ll } { \rm sgn}(t ) & \mbox{;}\\ 0 & \mbox{elsewhere.}\label{new sgn } \end{array } \right.\ ] ] where and are both positive constant which denotes the lower and upper threshold of the attraction range , respectively . from fig .3 , it can be concluded that za - lms is the special case of wza - lms when reaches 0 and approaches infinity , respectively .besides , by investigating ( [ 2 ] ) , ( [ wzalms ] ) and ( [ new sgn ] ) , it can be seen that the computational complexity of the two algorithms is approximately the same . and by adopting the new zero - point attractor and properly setting the threshold , the coefficients , whether too small or too large , will not be attracted any more .thus , the steady - state misalignment is significant reduced especially for near sparse system .as mentioned above , the sparse constraint should be relaxed in order to reduce the steady - state misalignment when the updating procedure reaches the steady - state .inspired by the idea of variable step size methods of standard lms algorithm , the magnitude of estimation error , which denotes the depth of convergence , is introduced here to adjust the force of zero - point attraction dynamically .that is , at the beginning of iterations , large estimation error increases zero - point attraction force which also accelerates the convergence .when the algorithm is approaching the steady - state , the error decreases to a minor value accordingly .thus the influence of zero - point attraction force on small coefficients is reduced that produce smaller steady - state misalignment . 
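a sketch of the windowed (partial) sign function described above: coefficients whose magnitude lies below the lower threshold or above the upper threshold are left alone, and only those inside the window are attracted toward zero. the names a and b for the thresholds and the sign convention are my reading of the stripped definition, chosen so that the update reduces to za-lms as a tends to 0 and b tends to infinity, as stated in the text.

    import numpy as np

    def f_wza(w, a, b):
        # windowed zero-point attractor: sgn(w_i) when a <= |w_i| <= b, and 0 elsewhere
        window = (np.abs(w) >= a) & (np.abs(w) <= b)
        return np.sign(w) * window

    def wza_lms_step(w, x, d, mu, rho, a, b):
        # za-lms with attraction restricted to the window [a, b]
        e = d - w @ x
        return w + mu * e * x - rho * f_wza(w, a, b), e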
by implementation of this improvement on za - lms algorithm ,the algorithm is named as dynamic za - lms ( dza - lms ) .finally , by combining the two improvements , the final dynamic windowing za - lms ( dwza - lms ) algorithm can be drew .the new recursion of filter coefficients is as follows , \quad\forall 0\leq i < l.\label{6}\end{aligned}\ ] ] in addition , the new method can also improve the performance of za - nlms , which is known for its robustness .the recursion of dwza - nlms is \right\ } \quad\forall 0\leq i < l.\label{7}\ ] ] where is the regularization parameter .the mean square convergence analysis of dwza - lms algorithm is carried out in this section .the analysis is based on the following assumptions . 1 .the input signal is i.i.d zero - mean gaussian .the observation noise is zero - mean white .the tap - input vectors and the desired response follow the common independence assumption , which are generally used for performance analysis of lms algorithm .2 . the unknown near sparse filter tap follows ggd .as stated in section ii , this assumption is made because ggd is the suitable sparse distribution for near sparse system modeling .the steady state adaptive filter tap follows the same distribution with ( ) .this is a reasonable assumption in that the error between the coefficients of the identified and the unknown real systems are very small when the algorithm converges . under these assumptions , the mean square convergence condition and the steady - state mse of dwza - lms algorithmare derived .also , the choice of parameters is discussed in the end of this section .first of all , the misalignment vector is defined as and auto - covariance matrix of as combining ( [ 1 ] ) , ( [ 3 ] ) , ( [ 6 ] ) and ( [ 8 ] ) , one derives where , ] . with ( [ 8 ] ), one has combining ( [ 12 ] ) , ( [ 13 ] ) and ( [ 14 ] ) , one derives by taking trace on both sides of ( [ 15 ] ) , it can be concluded that the adaptive filter is stable if and only if which is simplified to this implies that the proposed dwza - lms algorithm has the same stability condition for the mean square convergence as the za - lms and standard lms algorithm . in this subsection ,the steady - state mean square error ( mse ) of dwza - lms algorithm is analyzed . by definition, mse is where , then ( [ 18 ] ) can be rewritten as thus , our work is to estimate . in ( [ 15 ] ) , let approach infinity , by observing the ( ) element of the matrix , one obtains with reference to ( [ 11 ] ) , it is obvious that to derive , by multiplying on the right of each item of ( [ 10 ] ) and taking the expectation value on both sides as well as letting approach infinity it yields thus , when or ( ) , it has when ( ) , it has .\label{24}\ ] ] according to assumption ( 3 ) , and combining ( [ 5 ] ) , we have \nonumber\\ & = & \left\{\theta\left[1/\beta,(\frac{|b|}{\lambda})^\beta\right]-\theta\left[1/\beta,(\frac{|a|}{\lambda})^\beta\right]\right\ } /\gamma(1/\beta),\end{aligned}\ ] ] where denotes the probability that the coefficients of adaptive filter will be attracted . on the other hand , the probability that they will not be attracted is . 
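for completeness, the combined recursion stated at the beginning of this section can be sketched as follows, reusing f_wza from the previous snippet: the windowed attractor is additionally weighted by the magnitude of the instantaneous error, and a normalized variant divides the update by the input energy. the exact normalized form and the small regularization constant delta are assumptions of mine, since the displayed recursions did not survive extraction.

    def dwza_lms_step(w, x, d, mu, rho, a, b):
        # dynamic windowing za-lms: error-magnitude-weighted, windowed zero-point attraction
        e = d - w @ x
        return w + mu * e * x - rho * abs(e) * f_wza(w, a, b), e

    def dwza_nlms_step(w, x, d, mu, rho, a, b, delta=1e-6):
        # one plausible normalized variant (delta is a small regularization constant)
        e = d - w @ x
        return w + (mu * e * x - rho * abs(e) * f_wza(w, a, b)) / (delta + x @ x), e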
by combining ( [ 23 ] ) and ( [ 24 ] ) and summing up all the diagonal items of matrix , it yields }{\mu\sigma_x^2\left[2-\mu\sigma_x^2(l+2)\right]},\label{25}\ ] ] combining ( [ 19 ] ) and ( [ 25 ] ) , finally one has +l\mu^2\sigma_x^2\sigma_v^2}{2\mu-\mu^2\sigma_x^2(l+2)-\rho^2{\rm p}_al\left({\displaystyle\frac{2}{\mu\sigma_x^2}}-1\right)}.\label{mse}\ ] ] if , equation ( [ mse ] ) is the same with mse of standard lms algorithm , the performance of the proposed algorithm is largely affected by the balancing parameter and the thresholds and .according to ( [ za cost function ] ) and ( [ 2 ] ) , it can be seen that the parameter determines the importance of the norm and the intensity of zero - point attraction . in a certain range , a larger , which indicates stronger attraction intensity ,will improve the convergence performance by forcing small coefficients toward zero with fewer iterations .however , according to ( [ mse ] ) , a larger also results in a larger steady - state misalignment .so the parameter can balance the tradeoff between adaptation speed and quality .moreover , the optimal parameter empirically satisfies . by analyzing steady - state mse in ( [ mse ] ) under such circumstance, it can be seen that according to ( [ compare ] ) , the influence of the last term in the denominator of ( [ mse ] ) can be ignored , which means that the steady - state mse of the proposed algorithm is approximately the same with standard lms for near sparse systems identification .the same conclusion can also be drawn from ( [ 6 ] ) intuitively : when the adaptation reaches steady - state , the small renders the value of trivial compared to , letting the relaxation of zero - point attraction constraint . on the other hand , with large in the beginning and the process of adaptation which indicates larger zero - point attraction force, the zero - point attractor adjusts the small taps more effectively than za - lms , forcing them to zero with fewer iterations , which accelerate the convergence rate significantly . the thresholds and determine the zero - point attraction range together .the parameter is set to avoid forcing all small coefficients to exact zero , it is suggested to be set as the mean amplitude of those near zero coefficients of the real system .specifically , for exact sparse systems , as most coefficients of the unknown system are exactly zero except some large ones , accordingly is set to force most small coefficients to exact zero . for exact sparse systems contaminated by small gaussian white noise, should be set as the standard deviation of the noise .for near sparse systems generated by ggd , as the mean amplitude of the small coefficient is hard to derive , we empirically choose for the proposed algorithm . as a small sparsity indicator in ggd usually means smaller mean amplitude of the small coefficient , we choose smaller when is smaller . 
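before moving on to the simulation parameters, one consequence of the visible fragment of the steady-state mse expression above is worth making explicit: its denominator retains the factor \(2\mu-\mu^2\sigma_x^2(L+2)\), and requiring this factor to be positive gives the familiar step-size condition

\[
  0 \;<\; \mu \;<\; \frac{2}{\sigma_x^2\,(L+2)},
\]

which is consistent with the remark in the analysis that dwza-lms shares the mean-square stability condition of za-lms and standard lms under the stated independence assumptions.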
according to the simulations , is chosen in the range to for ggd with varying from to .the parameter is chosen to reduce the unnecessary attraction of large coefficients in za - lms , therefore , empirically any constant , which is much larger than the deviation of small coefficients and much smaller than infinity , should be appropriate .various simulations demonstrate that the parameter can be set as a constant around 1 for most near sparse systems .this choice of is quite standard for most applications .in this section , first we demonstrate the convergence performance of our proposed algorithm on two near sparse systems and a exact sparse system in experiment 1 - 4 , respectively .second , experiment 5 - 7 are designed to verify the derivation and discussion in section iv . besides the proposed algorithm , standard nlms , za - lms , ipnlms and iipnlms are also simulated for comparison . to be noticed , the normalized variants of za - lms and the proposed algorithm are adopted to guarantee a fair comparison in all experiments except the fourth , where dwza - lms is simulated to verify the theoretical analysis result . the first experiment is to test the convergence and tracking performance of the proposed algorithm on near sparse system driven by gaussian white signal and correlated input , respectively .the unknown system is generated by ggd which has been shown in fig .[ sparsec ] .b with filter length , it is initialized randomly with and .for the white and correlated input , the system is regenerated following the same distribution after and iterations , respectively .for the white input , the signal is generated by white gaussian noise with power . for the correlated input , the signal is generated by white gaussian noise driving a first - order auto - regressive ( ar ) filter , , and is normalized . besides , the power of observation noise is for both input .the five algorithms are simulated 100 times respectively with parameter in both cases .the other parameters are as follows * ipnlms and iipnlms with white input : , , , , ; * ipnlms and iipnlms with correlated input : , , , , ; * za - nlms and dwza - nlms with white input : , , , .* za - nlms and dwza - nlms with correlated input : , , , ..comparison of computational complexity of ipnlms , iipnlms and dwza - nlms [ cols="^,^,^,^,^ " , ] where $ ] denotes the fraction of coefficients in the attracting range of dwza - nlms .+ all the parameters are particularly selected to keep their steady - state error in the same level .the msd of these algorithms for both white and correlated input are shown in fig .[ experiment_1_a ] and fig .[ experiment_1_b ] , respectively .the simulation results show that all algorithms converge more slowly in the color input driven scenario than in the white noise driven case .however , the ranks or their relative performances are similar and the proposed algorithm reaches the steady state first with both white and correlated input . on the other hand , the performance of za - nlms degenerate to standard nlms as the system is near sparse .furthermore , the computational complexity of the proposed algorithm is also smaller compared with improved pnlms algorithms ( table [ computational complexity ] ) . 
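a note on how the ggd test systems used in these experiments can be generated: the standard density with shape \(\beta\) and scale \(\lambda\) is \(p(x)=\beta/(2\lambda\Gamma(1/\beta))\,\exp(-(|x|/\lambda)^{\beta})\), which matches the incomplete-gamma terms appearing in the steady-state analysis even though the density itself was stripped from the text. a sample can be drawn from a gamma variate as below; the function name and parameter values are mine.

    import numpy as np

    def ggd_sample(beta, lam, size, rng=None):
        # if G ~ Gamma(1/beta, 1), then sign * lam * G**(1/beta) has the ggd density above
        rng = np.random.default_rng() if rng is None else rng
        g = rng.gamma(1.0 / beta, 1.0, size)
        return rng.choice((-1.0, 1.0), size) * lam * g ** (1.0 / beta)

    # e.g. a near sparse impulse response of length 100 with a small shape parameter
    h = ggd_sample(beta=0.5, lam=1.0, size=100)

smaller values of the shape parameter concentrate more coefficients near zero, which is why it serves as the sparsity indicator in the experiments.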
besides , when the system is changed abruptly , the proposed algorithm also reaches the steady - state first in both cases .the second experiment is to demonstrate the proposed algorithm on near - sparse systems other than ggd .the near - sparse system with 100 taps is generated in the following manner .first , 8 large coefficients following gaussian distribution are generated , where all their tap positions follow uniform distribution .second , white gaussian noise with variance is added to all taps , enforcing the system to be near - sparse .the signal is generated by white gaussian noise with power .five algorithms , the same as in experiment 1 , are simulated 100 times respectively with parameter .the parameters are set to the same values as in the white input case of experiment 1 except for the proposed algorithm . from fig .[ experiment_5 ] , we can conclude that the proposed algorithm reaches the steady - state first in such near sparse system .the third experiment demonstrates the effectiveness of the proposed modification on za - lms algorithm on near sparse system .the system and the signal are generated in the same way as in experiment 2 .the proposed dza - nlms , dwza - nlms are compared with za - nlms for the system with 100 simulations .the step length for all algorithms are set as .we particularly choose parameters and for dza - nlms and za - nlms to ensure their steady - state mean square error in the same level .we set for dwza - nlms algorithm for a fair comparison with dza - nlms .the parameter and in dwza - nlms are chosen as and , respectively . from fig .[ experiment_7 ] , we can see that with the dynamic zero - point attractor dza - nlms convergences faster than za - nlms . by adding another window constraint on the zero - point attractor , the dwza - nlmsnot only preserves the property of fast convergence of dza - nlms , but also shows smaller steady - state mean square error than both za - nlms and dza - nlms .the fourth experiment shows that the proposed improvement is still effective on the exact sparse system identification .the unknown system is shown in fig .[ sparsec ] .a , where filter length . besides , 8 large coefficients is uniformly distributed and generated by gaussian distribution , and all other tap coefficients are exactly zero .the input signal is generated by white gaussian noise with power , and the power of observation noise is .the proposed algorithm is compared with za - nlms and standard nlms , where each algorithm is simulated 100 times with 2000 iterations .the step size is set to for nlms , and for both the za - nlms and the proposed algortihms .the parameter is set to and .all the parameters are chosen to make sure that the steady - state error is the same for comparison . according to fig .[ experiment_4 ] , it can be seen that the convergence performance is also improved compared with za - nlms via the proposed method on exact sparse system , thus the proposed improvement on the algorithms are robust . the fifth experiment is to test the sensitivity to sparsity of the proposed algorithm .all conditions are the same with the first experiment except the sparsity .the parameter is selected as , and , respectively . besides , we also compared our algorithm when the system is non - sparse which is generated by gaussian distribution . 
for each , both the proposed algorithm and nlms are simulated times with iterations .the step size is for both algorithms , and , , for the proposed algorithm .the simulated msd curves are shown in fig .[ experiment_3 ] .the steady - state msd remains approximately the same for the proposed algorithm with varying parameter which denotes the sparsity of systems , meanwhile the convergence rate decreases as the sparsity decreases .for the non - sparse case , our algorithm degenerates and shows similar behavior with standard nlms .however , for each the proposed algorithm is never slower than standard nlms . it should be noticed that nlms is independent on the system sparsity and behaves similar when varies . the sixth experiment is to test the steady - state mse with different parameters .the coefficients of unknown system follows ggd with filter length , the sparsity and variance are chosen as and , respectively .the input is generated by white gaussian noise with normalized power , and the power of observation noise is . under such circumstance , the steady - state msd is tested . here and are set for each simulation .the step size is varied from 0 to for given . and is changed from 0 to for given .[ experiment_2_mu ] and fig .[ experiment_2_rho ] show that the analytical results accord with the simulated ones of different parameters for variable values . specifically , in fig .[ experiment_2_mu ] , the steady - state mse goes up as the step size increases , whose trend is the same with standard lms .[ experiment_2_rho ] shows that the analytical steady - state mse matches with simulated one as the parameter gets larger , which verifies the result in section v. , width=384 ] the seventh experiment is designed to test the behavior of the proposed algorithm with respect to different parameters of and .all conditions of the proposed algorithm are the same with the second experiment except the parameters and .first , we set and vary as , and .second , we set and vary as , and . from fig .[ experiment_6_a ] , we can see that the optimal is chosen as the variance of the small coefficients .smaller will result in larger misalignment , and larger will cause slow convergence . from fig .[ experiment_6_b ] , we conclude that shows the best performance .either too large or too small will result in slower convergence .in order to improve the performance of za - lms for near sparse system identification , an improved algorithm , dwza - lms algorithm , is proposed in this paper by adding a window to the zero - point attractor in za - lms algorithm and utilizing the magnitude of estimation error to weight the zero - point attractor .such improvement can adjust the zero - point attraction force dynamically to accelerate the convergence rate with no computational complexity increased .in addition , the mean square convergence condition , steady - state mse and parameter selection of the proposed algorithm are theoretically analyzed .finally , computer simulations demonstrate the improvement of the proposed algorithm and effectiveness of the analysis .j. jin , y. gu , s. mei , a stochastic gradient approach on compressive sensing signal reconstruction based on adaptive filtering framework , _ ieee journal of selected topics in signal processing _, vol . 4 , pp.409 - 420 , apr .2010 .k. sharifi and a. leon - garcia , estimation of shape parameter for generalized gaussian distributions in subband decompositions of video , _ ieee trans . on circuits syst .video technol ._ , vol . 5 , pp .52 - 56 , 1995 .
the newly proposed norm-constraint zero-point attraction least mean square algorithm (za-lms) demonstrates excellent performance on exact sparse system identification. however, za-lms has less advantage over standard lms when the system is near sparse. thus, in this paper, the near sparse system is first modeled by the generalized gaussian distribution, and its sparsity is defined accordingly. second, two modifications are made to the za-lms algorithm: the norm penalty in the cost function is replaced by a windowed (partial) penalty, which enhances robustness without increasing the computational complexity, and the zero-point attraction term is weighted by the magnitude of the estimation error, which adjusts the attraction force dynamically. by combining the two improvements, the dynamic windowing za-lms (dwza-lms) algorithm is proposed, which shows better performance on near sparse system identification. in addition, the mean square performance of the dwza-lms algorithm is analyzed. finally, computer simulations demonstrate the effectiveness of the proposed algorithm and verify the theoretical analysis. * keywords: * lms, sparse system identification, zero-point attraction, za-lms, generalized gaussian distribution
in this paper , letting be a bounded smooth domain , we consider the stokes equations subject to the non - homogeneous slip boundary condition as follows : where and are the velocity and pressure of the fluid respectively , and is a viscosity constant .moreover , represents the given body force , the prescribed outgoing flow on the boundary , and the prescribed traction vector on in the tangential direction , with being the cauchy stress tensor associated with the fluid . the outer unit normal to the boundary denoted by . the first term in ( [ eq : stokes slip bc]) is added in order to ensure coercivity of the problem without taking into account rigid body movements .we impose the compatibility condition between ( [ eq : stokes slip bc]) and ( [ eq : stokes slip bc]) which reads the slip boundary condition ( [ eq : stokes slip bc])([eq : stokes slip bc]) ( or its variant the navier boundary condition ) is now widely accepted as one of the standard boundary conditions for the navier - stokes equations .there are many applications of the slip boundary conditions to real flow problems ; here we only mention the coating problem and boundary conditions of high reynolds number flow . for more details on the application side of slip - type boundary conditions , we refer to stokes and carey and references therein ; see also john for generalization combined with leak - type boundary conditions . in the present paper ,our motivation to consider problem ( [ eq : stokes slip bc ] ) consists in dealing with some mathematical difficulties which are specific to its finite element approximation . as shown by solonnikov and adilov in ( see also beiro da veiga for a generalized non - homogeneous problem ) , proving the existence , uniqueness , and regularity of a solution of ( [ eq : stokes slip bc ] ) does not reveal essentially more difficulty compared with the case of dirichlet boundary conditions .then one is led to hope that its finite element approximation could also be treated analogously to the dirichlet case .however , it is known that a naive discretization of ( [ eq : stokes slip bc ] ) , especially when a smoothly curved domain is approximated by a polyhedral domain , leads to a variational crime in which we no longer obtain convergence of approximate solutions .let us describe this phenomenon assuming and considering piecewise linear approximation of velocity .in view of the weak formulation of the continuous problem , see ( [ eq : weak form with constraint ] ) below , a natural choice of the space to approximate velocity would be ( we adopt the notation of section [ sec : finite element approximation ] ) : where denotes the outer unit normal associated to .now suppose that and that any two adjacent edges that constitute are not parallel .then , one readily sees that above reduces to . as a result, the finite element solution computed using is nothing but the one satisfying the dirichlet boundary condition , which completely fails to approximate the slip boundary condition . 
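the reduction described above can be made explicit with one line of linear algebra: if a boundary vertex x of the polygonal boundary is shared by two edges whose unit outer normals n_1 and n_2 are not parallel, then for a piecewise linear v_h the two nodal constraints

\[
  v_h(x)\cdot n_1 = 0, \qquad v_h(x)\cdot n_2 = 0
\]

force v_h(x) = 0, because \{n_1, n_2\} spans the plane. applied at every boundary vertex, the discrete impermeability constraint therefore degenerates into a homogeneous dirichlet condition, which is exactly the variational crime just described; the analogous degeneracy occurs for d = 3 whenever three adjacent face normals are linearly independent.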
for or quadratic approximation ( or whatever else ), we may well expect similar undesirable reduction in the degrees of freedom that should have been left for the velocity components , which accounts for the variational crime .one way to overcome the variational crime is to replace the constraint in ( [ eq : vhn ] ) by for each boundary node .this strategy was employed by tabata and suzuki where is a spherical shell ; see also tabata .extension of the idea to the quadratic approximation was proposed by bnsch and deckelnick in using some abstract transformation introduced by lenoir . for of general shape , the exact values for or may not be available . in this regard, some average of s near the boundary node can be used as approximation of those unavailable values. this idea was numerically tested by bnsch and hhn in for and by dione , tibirna and urquiza for ( with penalty formulation ) , showing good convergence property .however , rigorous and systematic evaluation of those approximations is non - trivial and does not seem to be known in the literature . moreover , implementation of constraints like in a real finite element code is also non - trivial and requires special computational techniques ( see e.g. gresho and sani and ( * ? ? ?* section 5 ) ) , which are not necessary to treat the dirichlet boundary condition . in view of these situations , in the present paperwe would like to investigate a finite element scheme to ( [ eq : stokes slip bc ] ) such that : 1 ) rigorous error analysis can be performed ; 2 ) numerical implementation is as easy as for the dirichlet case . with this aimwe adopt a _ penalty approach _ proposed by dione and urquiza ( see also ) which , in the continuous setting , replaces the dirichlet condition ( [ eq : stokes slip bc]) by the robin - type one involving a very small number ( called the penalty parameter ) , i.e. , on . at the weak formulation level , this amounts to removing the constraint from the test function space and introducing a penalty term in the weak form .our scheme transfers this procedure to the discrete setting given on ; see ( [ eq : discrete problem with 2 variables ] ) below . since the test function space for velocity is taken as the whole involving no constraints , this scheme facilitates implementation , which serves purpose 2 ) mentioned above .it is indeed simple enough to be implemented by well - known finite element libraries such as ` freefem++ ` and ` fenics ` , as is presented in our numerical examples .let us turn our attention to the error analysis .the first error estimate was given by verfrth who derived in the energy norm for .the same author proposed the lagrange multiplier approach in for .later , knobloch derived optimal error estimates ( namely , for linear - type approximation and for quadratic - type approximation ) for and for various combinations of finite elements satisfying the lbb condition , assuming the existence of better approximation of than .the convergence ( without rate ) under minimal regularity assumptions was proved by the same author in .a different proof of -estimate for the p2/p1 element was given by for , assuming that is known .the technique using was then exploited to study the penalty scheme in , again for the p2/p1 element and for . 
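to illustrate the implementation point made above, here is a minimal legacy-fenics (dolfin) sketch of a penalized weak form in which the term \((1/\varepsilon)\int_{\Gamma_h}(u_h\cdot n_h - g)(v_h\cdot n_h)\,ds\) replaces the impermeability constraint, so that no constrained velocity space is needed. the mesh file name, the placeholder data values, the zeroth-order velocity term added for coercivity, the brezzi-pitkaranta-type pressure stabilization for the equal-order pair, and the simplified treatment of the tangential traction datum are all assumptions of mine and not the paper's exact scheme.

    from dolfin import *

    mesh = Mesh("domain.xml")            # hypothetical polyhedral approximation of the domain
    nu, eps, delta = Constant(1.0), Constant(1e-5), Constant(0.1)
    f   = Constant((0.0, 0.0))           # body force (placeholder data)
    g   = Constant(0.0)                  # prescribed normal flux on the boundary
    tau = Constant((0.0, 0.0))           # prescribed tangential traction (assumed tangential)

    V = VectorElement("P", mesh.ufl_cell(), 1)
    Q = FiniteElement("P", mesh.ufl_cell(), 1)
    W = FunctionSpace(mesh, MixedElement([V, Q]))
    (u, p) = TrialFunctions(W)
    (v, q) = TestFunctions(W)
    n, h = FacetNormal(mesh), CellDiameter(mesh)

    # dot(u, v) mimics the zeroth-order term added for coercivity; the h^2 grad(p).grad(q)
    # term is one common pressure stabilization for the p1/p1 pair (both are assumptions)
    a = (2*nu*inner(sym(grad(u)), sym(grad(v))) + dot(u, v) - p*div(v) - q*div(u))*dx \
        + (1/eps)*dot(u, n)*dot(v, n)*ds \
        + delta*h**2*dot(grad(p), grad(q))*dx
    L = dot(f, v)*dx + (1/eps)*g*dot(v, n)*ds + dot(tau, v)*ds

    w = Function(W)
    solve(a == L, w)
    u_h, p_h = w.split()

the only place where the slip condition enters is the pair of boundary integrals, which is what makes the penalty formulation no harder to code than a dirichlet problem.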
in the present paper ,we study the penalty scheme for the p1/p1 element combined with pressure stabilization and also for the p1b / p1 element .our method to establish the error estimate is quite different from those of the preceding works mentioned above .first , we address the non - homogeneous boundary conditions ( [ eq : stokes slip bc])([eq : stokes slip bc]) which were not considered previously .second , concerning the penalty scheme , we directly compare and , whereas dione and urquiza introduced a penalized problem in the continuous setting , dividing the error estimates into two stages .third , we define our error ( for velocity ) to be , where may be arbitrary smooth extension of .this differs from and in which the errors were defined as and , respectively . in view of practical computation ,our choice of the error fits what is usually done in the numerical verification of convergence when .compared with the method of knobloch who also employed as the error , the difference lies in the way the boundary element face on is mapped to a part of .in fact , he exploited the orthogonal projection from to , the image of which is localized to each .then he needed delicate arguments ( see ) to take into account the fact that it is not globally injective when ( this point seems to be overlooked in ) .we , to the contrary , rely on the orthogonal projection from to , which is globally bijective regardless of the space dimension , provided the mesh size is sufficiently small .this enables us to transfer the triangulation of to that on in a natural way , which is convenient to estimate surface integrals .complete proofs of the facts regarding used in this paper , which we could not find in the literature , are provided in appendices [ sec : transformation between gamma and gammah ] and [ sec : error of n and nh ] . finally , we comment on the rate of convergence we obtain in our main result ( theorem [ thm : error estimate ] ) which is not optimal . in our opinion , all the error estimates reported in the preceding works , which use to approximate , remain .verfrth ( * ? ? ?* theorem 5.1 ) claimed ; however , the estimate which was used to derive equation ( 5.12 ) there , seems non - trivial because is not smooth enough globally on ( e.g. it does not belong to ) .if on the right - hand side is replaced by , then one ends up with in the final estimate .dione and urquiza ( * ? ? ?* theorem 4 ) claimed ; however , in equation ( 4.13 ) there , they did not consider the contribution which should appear inside the infimum over even when ( see proposition 4.2 of layton ) .if this contribution is taken into account , one obtains for the final result . 
to overcome the sub - optimality , in section [ sec : reduced - order integration ] we investigate the penalty scheme in which reduced - order numerical integration is applied to the penalty term .this method was proposed in and was shown to be efficient by numerical experiments for .we give a rigorous justification for this observation in the sense that the error estimate improves to if .our numerical example shows that the reduced - order numerical integration gives better results also for , although this is not proved rigorously .in our numerical results presented in section [ sec : numerical examples ] , we not only provide numerical verification of convergence but also discuss how the penalty parameter affects the performance of linear solvers .we find that too small can lead to non - convergence of iterative methods such as gmres , whereas sparse direct solvers such as umfpack always manage to solve the linear system .we present our notation for the function spaces and bilinear forms that we employ in this paper .the boundary of is supposed to be at least -smooth .the standard lebesgue and sobolev(-slobodetski ) spaces are denoted by and respectively , for ] such that .consequently , the map is well - defined .a direct computation combined with the uniqueness of the decomposition ( [ eq : decomposition ] ) shows that is the inverse of .the continuity of , especially that of , follows from an argument similar to the proof of the implicit function theorem ( see e.g. ( * ? ? ?* theorem 3.2.1 ) ) .proposition [ prop : homeo ] enables us to define an_ exact triangulation of _ by in particular , we can subdivide into disjoint sets as .furthermore , for each we see that and admit the same domain of parametrization , which is important in the subsequent analysis .to describe this fact , we choose a local coordinate such that , and introduce the _ projection to the base set _ by .the domain of parametrization is then defined to be .we observe that the mappings & \phi : s ' \to \pi(s ) ; & & y_r ' \mapsto ( y_r ' , \varphi_{r}(y_r'))^t , \\ & \phi_h : s ' \to s ; & & y_r ' \mapsto \pi^*(y_r ' , \varphi_{r}(y_r ' ) ) = \phi(y_r ' ) + t^*(y_r')\ , n(\phi(y_r ' ) ) , \end{matrix*}\ ] ] are bijective and that is smooth on .if in addition , especially , is also smooth on , then and may be employed as smooth parametrizations for and respectively .the next proposition verifies that this is indeed the case . [ prop : estimate of phi ] under the setting above , we have where and are constants depending only on and .since the first relation is already obtained in proposition [ prop : homeo ] with , we focus on proving the second one . for notational simplicity , we omit the subscript and also use the abbreviation . the fact that is differentiable with respect to can be shown in a way similar to the proof of the implicit function theorem .thereby it remains to evaluate the supremum norm of in , which we address in the following .recall that is determined according to the equation where we have set .because , it follows that therefore , it suffices to prove that ; here and hereafter denotes various constants which depends only on and .applying to ( [ eq : tilde t ] ) gives where . 
by the same way as we estimated and in the proof of proposition [ prop : homeo ] , we obtain also we see that combining these observations with ( [ eq : nabla prime tilde t ] ) , we deduce the desired estimate .let .since is linear on , further differentiation of ( [ eq : nabla prime tilde t ] ) gives us ; in fact we have .this implies that is a -diffeomorphism between and .however , since is smooth only within , is not globally a diffeomorphism .now we give an error estimate for surface integrals on and .heuristically speaking , the result reads , which may be found in the literature ( see e.g. ) . here and hereafter, we denote the surface elements of and by and , respectively .[ thm : error estimate of surface integral ] let and be an integrable function on .then we have where is a constant depending only on and .let be a local coordinate that contains .we omit the subscript and use the abbreviation .we represent the surface integral using the parametrization as follows : where denotes the riemannian metric tensor given by ( dot means the inner product in ) .similarly , noting that , one obtains where is given by .then we assert that : to prove this , noting that , we compute each component of as follows : for , we notice that is a tangent vector so that . this yields which is estimated by thanks to proposition [ prop : estimate of phi ] . can be bounded in the same manner .to estimate , we observe that which is bounded by .similarly one gets , hence it follows that .therefore , , which proves the assertion .now we use the following crude estimate for perturbation of determinants ( cf .* equation ( 3.13 ) ) ) : if and are matrices such that and for all , then combining this with the assertion above and also with , we obtain in addition , note that .consequently , which proves the theorem .adding up the results of the theorem for all yields it also follows that .choosing in particular as the integrand gives for ] and $ ] .then , where is a constant depending only on . since , where denotes a tubular neighborhood of , it suffices to prove that to this end , using the notation in theorem [ thm : error estimate of surface integral ] , we estimate the left - hand side of ( [ eq : error of trace and transform on s ] ) by here , for fixed we have because , it follows that where we have used hlder s inequality .consequently , } \big| \nabla f\big ( \phi(y ' ) + tn(\phi(y ' ) ) \big ) \big|^p\,dy'dt.\ ] ] on the other hand , we observe that the -dimensional transformation \to \pi(s , \delta_1 ) ; \quad ( y ' , t ) \mapsto \phi(y ' ) + tn(\phi(y'))\ ] ] is bijective and smooth .application of this transformation to the right - hand side of ( [ eq : error of trace and transform on s ] ) leads to } \big| \nabla f\big ( \phi(y ' ) + tn(\phi(y ' ) ) \big ) \big|^p\ , |\mathrm{det}\ , j| \,dy'dt,\ ] ] where denotes the jacobi matrix of .letting , we find that ) } \le c\delta_1,\ ] ] because .this implies ) } \le c\delta_1,\ ] ] which combined with yields is sufficiently small . therefore , } \big| \nabla f\big ( \phi(y ' ) + tn(\phi(y ' ) ) \big ) \big|^p \,dy'dt.\ ] ] the desired estimate ( [ eq : error of trace and transform on s ] ) is now a consequence of ( [ eq : remove process ] ) and ( [ eq : restore process ] ) .this completes the proof .finally , we show that the -norm in a tubular neighborhood can be bounded in terms of its width .such estimate is stated e.g. in ( * ? ? ?* lemma 2.1 ) or in ( * ? ? 
?* equation ( 3.6 ) ) .however , since we could not find a full proof of this fact ( especially for ) in the literature , we present it here .[ thm : lp norm in tubular neighborhood ] under the same assumptions as in theorem [ thm : trace and transform ] , we have where is a constant depending only on .we adopt the same notation as in the proofs of theorems [ thm : error estimate of surface integral ] and [ thm : trace and transform ] .then it suffices to prove that to this end , using the transformation given in ( [ eq : psi ] ) we express the left - hand side as } |f(\psi(y ' , t))|^p|\mathrm{det}\,j(y',t)|\,dy'dt \\ & \le c \int_{s'\times [ -\delta_1,\delta_1 ] } \big ( |f(\psi(y ' , t ) ) - f(\phi(y'))|^p + |f(\phi(y'))|^p \big ) \,dy'dt \\ & = : i_1 + i_2 . \end{aligned}\ ] ] for , we see from the same argument as before that which yields } \big| \nabla f(\psi(y',s ) ) \big|^p |\mathrm{det}\,j(y',s)|\,dy'ds = c\delta_1^p \int_{\pi(s,\delta_1 ) } |\nabla f|^p\,dy.\ ] ] for , it follows that we have thus obtained the desired estimate , which completes the proof .let us prove that on and also that , when , it is improved to if the consideration is restricted to the midpoint of edges .[ lem : n and nh ] let and be the outer unit normals to and respectively. then there holds if in addition , , and denotes the midpoint of , then here , is a constant depending only on and .let be arbitrary and let be a local coordinate that contains .we omit the subscript in the following .one sees that and are represented as a direct computation gives this combined with the observation that proves ( [ eq : error between n and nh ] ) . when and , by using taylor expansion , we find that is a point of super - convergence such that . this improves ( [ eq : n - nh by graph representation ] ) to , and thus ( [ eq : error between n and nh , 2d ] ) is proved .in the case , it is known that the barycenter of a triangle is not a point of super - convergence for the derivative of linear interpolations ; see .for this reason , ( [ eq : error between n and nh , 2d ] ) holds only for .the authors thank professor fumio kikuchi , professor norikazu saito , and professor masahisa tabata for valuable comments to the results of this paper .they also thank professor xuefeng liu for giving them a program which converts matrices described by the petsc - format into those by the crs - format .the first author was supported by jst , crest .the second author was supported by jsps kakenhi grant number 24224004 , 26800089 .the third author was supported by jst , crest and by jsps kakenhi grant number 23340023 .
We consider P1/P1 and P1b/P1 finite element approximations to the Stokes equations in a bounded smooth domain subject to the slip boundary condition. A penalty method is applied to address the essential part of the boundary condition, which avoids a variational crime and simultaneously facilitates the numerical implementation. We give an error estimate for the velocity and pressure in the energy norm in terms of the discretization parameter and the penalty parameter. In the two-dimensional case, the estimate is improved by applying reduced-order numerical integration to the penalty term. The theoretical results are confirmed by numerical experiments.
outliers are observations that depart from the pattern of the majority of the data .identifying outliers is a major concern in data analysis because a few outliers , if left unchecked , can exert a disproportionate pull on the fitted parameters of any statistical model , preventing the analyst from uncovering the main structure in the data . to measure the robustness of an estimator to the presence of outliers in the data , introduced the notion of finite sample breakdown point . given a sample and an estimator , this is the smallest number of observations that need to be replaced by outliers to cause the fit to be arbitrarily far from the values it would have had on the original sample .remarkably , the finite sample breakdown point of an estimator can be derived without recourse to concepts of chance or randomness using geometrical features of a sample alone .recently , introduced the projection congruent subset ( pcs ) method .pcs computes an outlyingness index , as well as estimates of location and scatter derived from it .the objective of this paper is to establish the finite sample breakdown of these estimators and show that they are maximal .formally , we begin from the situation whereby the data matrix , is a collection of so called observations drawn from a -variate model with .however , we do not observe but an ( potentially ) corrupted data set that consists of observations from and arbitrary values , with , denoting the ( unknown ) rate of contamination .historically , the goal of many robust estimators has been to achieve high breakdown while obtaining reasonable efficiency .pcs belongs to a small group of robust estimators that have been designed to also have low bias ( see , and ) . in the context of robust estimation , a low bias estimator reliably finds a fit close to the one it would have found without the outliers , when with . to the best of our knowledge , pcs is the first member of this group of estimators to be supported by a fast and affine equivariant algorithm ( fastpcs ) enabling its use by practitioners .the rest of this paper unfolds as follows . in section [ s2 ], we detail the pcs estimator . in section [ s3 ] ,we formally detail the concept of finite sample breakdown point of an estimator and establish the notational conventions we will use throughout .finally , in section [ s4 ] , we prove the finite sample breakdown point of pcs .consider a potentially contaminated data set of vectors , with . given all possible -subsets , pcs looks for the one that is most _ congruent _ along many univariate projections .formally , given an -subset , we denote the set of all vectors normal to hyperplanes spanning a -subset of .more precisely , all directions define hyperplanes that contain observations of . for and , we can compute the squared orthogonal distance , , of to the hyperplane defined by as set of the observations with smallest is then defined as where denotes the - order statistic of a vector .we begin by considering the case in which . for a given subset and direction define the _ incongruence index _ of along as the conventions that .this index is always positive and will be smaller the more members of correspond with , or are similar to , the members of . 
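As a concrete illustration of the quantities just introduced, the sketch below computes, for one h-subset M and one direction a, the squared orthogonal distances of all observations to the corresponding hyperplane, the set A(a) of the h closest observations, and a ratio-type incongruence index. The exact functional form of the index is stripped from the text above (the original definition may, for instance, apply a logarithm to the ratio), so the plain ratio used here, together with the function name and its arguments, is an assumption made for illustration.

import numpy as np

def incongruence_index(X, M_idx, a, x0, h):
    # X      : (n, p) data matrix
    # M_idx  : indexes of the h-subset M
    # a      : unit normal to a hyperplane spanned by p observations of M
    # x0     : one point lying on that hyperplane (fixes its offset)
    # h      : subset size
    d2 = ((X - x0) @ a) ** 2            # squared orthogonal distances to the hyperplane
    A_idx = np.argsort(d2)[:h]          # A(a): the h observations closest to the hyperplane
    num = d2[M_idx].mean()              # average over the fixed subset M
    den = d2[A_idx].mean()              # average over the h closest observations
    if den == 0.0:
        return 0.0 if num == 0.0 else np.inf   # 0/0 convention chosen here, hypothetical
    return num / den                    # >= 1 by construction; smaller means more congruent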
to remove the dependency of equation on , we measure the incongruence of by considering the average over many directions as the optimal -subset , , is the one satisfying the pcs criterion : then , the _ pcs estimators of location and scatter _ are the sample mean and covariance of the observations with indexes in : finally , we have to account for the special case where . in this case , we enlarge to be the subset of all observations lying on .more precisely , if , then .+ to give additional insight into pcs and the characterization of a cloud of point in terms of congruence , we provide the following example .figure [ fig : dataplot ] depicts a data set of 100 observations , 30 of which come from a cluster of outliers on the right . for this data set , we draw two -subsets of 52 observations each .( ) are depicted as dark blue diamonds ( light orange circles).,scaledwidth=90.0% ] subset ( dark blue diamonds ) contains only genuine observations , while subset ( light orange circles ) contains 27 outliers and 25 genuine observations .finally , the 17 observations belonging to neither -subset are depicted as black triangles . for illustration s sake, we selected the members of so that their covariance has smaller determinant than -subsets formed of genuine observations . consequently, robust methods based on a characterization of -subsets in terms of density alone will always prefer the contaminated subset over any uncontaminated -subset ( and in particular ) . the outlyigness index computed by pcs differs from that of other robust estimators in two important ways .first , in pcs , the data is projected onto directions given by points drawn from the members of a given subset , , rather than indiscriminately from the entire data set .this choice is motivated by the fact that when and/or are high , the vast majority of random -subsets of will be contaminated .if the outliers are concentrated , this yields directions almost parallel to each other . in contrast , for an uncontaminated , our sampling strategy always ensures a wider spread of directions and this yields better results .the second feature of pcs is that the congruence index used to characterize an -subset depends on all the data points in the sample .we will illustrate this by considering all members of . for each , we compute the corresponding value of .then , we sort these and plot them in figure [ fig : icompare ] .we do the same for .we note in passing that . for and , shown as dark blue ( light orange ) lines.,scaledwidth=90.0% ]consider now in particular the values of the -index corresponding to and starting at around 1050 on the horizontal axis of figure [ fig : icompare ] .these higher values of correspond to members of that are aligned with the vertical axis ( i.e. they correspond to horizontal hyperplanes ) , and are much larger than the remaining values of .this is because , for the data configuration shown in figure [ fig : dataplot ] , the outliers do not stand out from the good data in terms of their orthogonal distances to hyperplanes defined by the vertical directions .as a result , this causes many outliers to enter the sets and this deflates the values of the corresponding to these directions .since the set is fixed , there is no corresponding effect on so that the outliers will influence the values of for some directions , even though itself is uncontaminated .this apparent weakness is an inherent feature of pcs . 
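The selection step described above can be summarized in a few lines, reusing the incongruence_index function sketched earlier. The sampling of candidate h-subsets and of directions is only indicated through a placeholder helper (directions_for), and the special case where the optimal subset is enlarged because some indices vanish is omitted; the published FastPCS algorithm handles both far more carefully, so treat this as an illustration of the criterion rather than as the actual algorithm.

import numpy as np

def pcs_fit(X, candidate_subsets, directions_for, h):
    # candidate_subsets : iterable of index arrays, one per candidate h-subset M
    # directions_for(M) : assumed helper returning (a, x0) pairs, i.e. unit normals to
    #                     hyperplanes spanned by p-subsets of M plus a point on each
    best_I, best_M = np.inf, None
    for M_idx in candidate_subsets:
        dirs = directions_for(M_idx)
        I_avg = np.mean([incongruence_index(X, M_idx, a, x0, h) for a, x0 in dirs])
        if I_avg < best_I:
            best_I, best_M = I_avg, M_idx
    X_opt = X[best_M]
    # PCS location and scatter: sample mean and covariance of the optimal subset
    return X_opt.mean(axis=0), np.cov(X_opt, rowvar=False)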
in the remainder of this note , we prove the following counter - intuitive fact : outliers influence the value of , even when is free of outliers , yet , so long as there are fewer than of them , their influence on the pcs fit will always remain bounded . in other words ,breakdown only occurs if ( see section [ s4 ] ) .to lighten notation and without loss of generality , we arrange the observed data matrix with rows so that the of contaminated observations are in the last rows and the uncontaminated observations in the first rows .then , will refer to the set of all corrupted data sets and is the set of all -subsets of , the set of all -subsets of with at least one contaminated observation , and the set of all uncontaminated -subsets of . the following assumptions ( as per , for example ) all pertain to the original , uncontaminated , data set . in the first part of this note , we will consider the case whereby the point cloud formed by lies in _ general position _ in .the following definition of _ general position _ is adapted from : definition 1 : _ general position in . is in general position in if no more than -points of lie in any -dimensional affine subspace . for -dimensional data , this means that there are no more than points of on any hyperplane , so that any points of always determine a -simplex with non - zero determinant . throughout, we will also assume that and that the genuine observations contain no duplicates : latexmath:[\[\begin{aligned } for any -subset and , we will denote the sample mean and covariance of the observations with indexes in as then , given , and an affine equivariant estimator of location , we define the bias of at as furthermore , given , genuine data and an affine equivariant estimator of scatter with positive definite ( denoted from now on by ) , we define the bias of at as where and ( ) denotes the largest ( smallest ) eigenvalue of a matrix .since pcs is affine equivariant ( see appendix 1 ) , w.l.o.g ., we can set so that the expression of bias reduces to furthermore , if that data is in general position and is affine equivariant then we can w.l.o.g .set ( is the rank identity matrix ) so that the expression of the bias reduces to the finite sample breakdown points of and are then defined as finally , for point clouds lying in general position in , gives a strict upper bound for the finite sample breakdown point for any affine equivariant location and scatter statistics , namely : establish the breakdown point of , we first introduce two lemmas describing properties of the -index . both deal with the case where lies in general position in .then , we discuss the case where does not lie in general position . in the first lemma, we show that the incongruence index of a clean -subset is bounded .let and lies in general position in .then for any fixed , positive scalar not depending on the outliers .consider first the numerator of .for a fixed , we can find for each , the observations of that lie furthest away from the hyperplane defined by .the average of their distances ( as given by equation ) to the hyperplane is finite and constitutes an upper bound on the average distance of any observations of to the hyperplane .as we have at most different directions and only uncontaminated subsets , the upper bound of the average distances stays finite for any positive , fixed , finite scalar not depending on the outliers .since the contaminated observations have no influence on the distance with for , we can say that consider now the denominator of . 
for any and , let denote the subset that consists of the indexes of the observations of the observed data matrix that lie closest to the hyperplane spanned by .as and , contains at least uncontaminated observations . in total , when is not fixed , there are at most different directions defined by a . for any , the smallest value of is attained if the contaminated observations of achieve .as the uncontaminated observations lie in general position , we know that the uncontaminated observations in can not lie within the same -dimensional subspace , i.e. as the number of uncontaminated observations is fixed , we have that for any fixed positive scalar not depending on the outliers .this inequality holds even if the outliers have the smallest average distance that is possible ( i.e. when for the contaminated observations ) .thus , inequality ( [ pcs : l1_aux2 ] ) holds for any -contaminated data set yielding using equation and the inequalities and , we get .the second lemma shows the unboundedness of the incongruence index of contaminated subsets .let and assume that lies in general position in .take a fixed h - subset .then for at least one .in other words , for a given set of indexes , there exists a data set with contaminated observations with indexes in such that is unbounded .consider first the numerator of .for a fixed , denote .since , as already mentioned in lemma 1 above , any -subset contains at least uncontaminated observations , i.e. . let be the set of all directions defining a hyperplane spanned by a -subset of . yields . as the uncontaminated observations lie in general position , the members of are , by definition , linearly independent . as a result, the outliers can belong to ( at most ) the subspace spanned by uncontaminated observations .hence , for every , there exists at least one member of , at least one and at least one such that consider now the denominator of : since the members of all pass through members of only , we have that using equation , and inequalities and , we get . with lemmas 1 and 2 ,we are now able to derive the finite sample breakdown point of the pcs of and . for and in general position ,the finite sample breakdown point of is consider first the situation where .then any -subset of contains at least members of .in particular , for the chosen -subset , denote with .the members of are in general position so that for any . but and so that which implies that .thus for breakdown to occur , the numerator of equation , , must become unbounded . now , suppose that breaks down .this means that for any , we will show that this leads to a contradiction . in appendix 2we show that by equations and , it follows that .then , by lemma 2 we have that with , the number of all directions . in particular , this is also true for , and by lemma 1 , , implying that , which is a contradiction to the definition of . since pcs is affine and shift equivariant , when , we have by equation that breaks down .equation and theorem 1 show that the breakdown point of is maximal .the following theorem shows that the breakdown point of is also maximal . for and in general position ,the finite sample breakdown point of is consider first the situation where . 
in theorem 1, we showed that under this condition .denote , where .then , we have that does not break down since , by homogeneity of the norm and the triangle inequality , for the case of , equation and affine equivariance imply that breaks down .we now relax the assumption that the members of lie in general position in and substitute it by the weaker condition that they all lie in general position on a common subspace in for some .then pcs has the so - called exact fit property . recall that are hyperplanes defined by points drawn from an -subset .if there are at least points lying on a subspace , then there exists an -subset of points from this subspace .let be this subset .then , for any , both the numerator and denominator of equation equal zero and so . thus , we have without loss of generality that . in summarythis means that if or more observations lie exactly on a subspace , the fit given by the observations in will coincide with this subspace , which is the defintion of the so - called _ exact fit _ property .of course , since , may contain outliers .given , one may proceed with the much simpler task of identifying the at most outliers in this smaller set of observations on a rank subspace spanned by the members of .* appendix 1 : proof of affine equivariance of * + recall that a location vector and a scatter matrix are affine equivariant if for any non - singular matrices and -vector it holds that : consider now affine transformations of : for any non - singular matrix and -vector .the directions ( ) are orthogonal to hyperplanes through -subsets of ( ) . since , we can disregard all duplicated rows of ( and their partner duplicates in ) , so that , w.l.o.g . all -subsets of yield a matrix with unique rows .let be any such -subset of , and and the hyperplanes through and .since equation describes an affine transformation , it preserves collinearity : and the ratio of lengths of intervals on univariate projections : where for readability we denote as .equation and imply equation holds for any -subset of .therefore , denoting all directions perpendicular to hyperplanes through elements of , and the same but for ) , it holds that and in particular for . since , we have that if the members of lie in g.p . in , hence , are affine equivariant .* appendix 2 : proof of equation [ eq : append1 ] * + here , we show that .the first eigenvalue of is defined as for .furthermore , hence , we have that we also have that using cauchy - schwartz , and .thus , +the authors wish to acknowledge the helpful comments from three anonymous referees and the editor for improving this paper .00 adrover , j. g. and yohai , v. j. ( 2002 ) .projection estimates of multivariate location .annals of statistics .30 , number 6 , 17601781 .adrover , j. g. and yohai , v. j. ( 2010 ) . a new projection estimate for multivariate location with minimax bias .journal of multivariate analysis , vol . 101 , issue 6 , 14001411 .davies , p. l. ( 1987 ) .asymptotic behavior of s - estimates of multivariate location parameters and dispersion matrices .annals of statististics .15 12691292 .donoho , d.l .breakdown properties of multivariate location estimators ph.d . qualifying paper harvard university .maronna , r. a. stahel , w. a. yohai , v. j. ( 1992 ) .bias - robust estimators of multivariate scatter based on projections .journal of multivariate analysis , vol .42 , issue 1 , 141161 .maronna , r. a. , martin r. d. and yohai v. j. ( 2006 ) .robust statistics : theory and methods .wiley , new york .rousseeuw , p.j . 
and leroy , a.m. ( 1987 ) .robust regression and outlier detection .wiley , new york .seber , g. a. f. ( 2008 ) .matrix handbook for statisticians .wiley series in probability and statistics .wiley , new york .tyler , d.e .finite sample breakdown points of projection based multivariate location and scatter statistics .the annals of statistics , vol .2 , pp . 10241044 vakili , k. and schmitt , e. ( 2014 ) . finding multivariate outliers with fastpcs .computational statistics & data analysis , vol .69 , 5466 .weisstein , e. w. ( 2002 ) .concise encyclopedia of mathematics ( 2nd edition ) .chapman & hall crc .
The projection congruent subset (PCS) is a new method for finding multivariate outliers. PCS returns an outlyingness index which can be used to construct affine equivariant estimates of multivariate location and scatter. In this note, we derive the finite sample breakdown point of these estimators. Keywords: breakdown point, robust estimation, multivariate statistics.
the fabrication of scalable hardware architectures for quantum annealers to solve discrete optimization problems has sparked interest in quantum annealing algorithms .current research studies focus on both fundamental and practical important questions , including the implementation of real - world applications , defining criteria for detecting quantum speedup and the computational role of quantum tunneling , proposals for error - supression schemes , benchmark studies comparing classical and quantum annealing , and using spin - glass perspectives into the hardness of computational problems studied .the next generation of quantum annealers will likely allow for the exploration of harder and more interesting problems instances .even in the case of a quantum processor with only qubits , one could already see the appearance of some hard to solve instances , for which the optimal solution was not found out of a few thousand annealing cycles .it is precisely for these hard instances that the methods developed in this paper are the most useful , since they allow us to extract the best programming settings enhancing the probability of finding the ground state , therefore reducing significantly the time to solution for both future benchmark studies and for real - world applications .the first step of solving a problem using a quantum annealer is to map the problem to the hardware architecture .the quantum hardware employed consists of 64 units of a recently characterized eight - qubit unit cell .post - fabrication characterization determined that only 509 qubits out of the 512 qubit array can be reliably used for computation ( fig .[ fig : chimera ] in appendix [ sec : progqa ] ) .the array of coupled superconducting flux qubits is , effectively , an artificial ising spin system with programmable spin - spin couplings and transverse magnetic fields .it is designed to solve instances of the following ( np - hard ) classical optimization problem : given a set of local longitudinal and an interaction matrix , find the assignment , that minimizes the objective function , where , , , and .finding the optimal is equivalent to finding the ground state of the corresponding ising hamiltonian , where are pauli matrices acting on the spin. physical realizations of quantum annealing come with certain degrees of freedom affecting the performance of the quantum annealing device . each realization of such degrees of freedom determine a unique _ hamiltonian specification _ or realization of eq .[ eq : eising ] .although there is some understanding of the several factors affecting the performance of quantum annealing devices , there is a need for concrete scalable strategies coping with the analog control error ( ace ) intrinsic to physical hardware implementations .for example , to the best of our knowledge there is no knownrule - of - thumb " in the selection of such parameters , thus motivating our study .( we rule out the only one wide spread in the community , in appendix [ app : thumbrule ] . 
) in the absence of noise or any miscalibration , the performance of the quantum device should be the same under any gauge realization .previous studies show that the current generation of d - wave devices with hundreds of qubits is very sensitive to this selection .some other degrees of freedom correspond to parameters we do not yet know _ a priori _ how to set .this is the case for penalty strength in the construction of quadratic unconstrained binary optimization ( qubo ) hamiltonians or penalties associated with the strength of the set of qubits defining a logical qubit in the qubo graph to hardware graph procedure . in sec .[ sec : tuningqa ] we present a strategy for tuning and optimizing a quantum annealing algorithm , finding the best parameters out of a pool of candidates and selecting the hamiltonian specifications with the best performance .it is in this section where most of the new results are presented . for accessibility to the readers, we divided this section into two main threads .readers interested only in gauge selection ( such as those researchers interested in benchmark studies , for example , on random spin - glass instances ) can find the procedures needed in sec .[ subsec : pielite ] - [ subsec : gaugeselect ] . for readers interested in more general real - world application instances , where parameters for embedding procedures and other penalties need to be set , we devote sec .[ subsec : tuneje ] to discussing the adjustments to the technique to deal with these additional challenges . in sec .[ sec : conclusions ] , we delineate some future directions and possible further applications of the present work .as previously discussed , there are many degrees of freedom at the time of programming a quantum annealing device to solve a specific problem instance .each realization of such degrees of freedom determine what we call a _ hamiltonian specification _ for the quantum annealing cycles .for the purpose of generality , we leave the discussion at a very high - level form and in the following sections we will present application examples in different common practical scenarios , e.g. , gauge selection and setting the strength of couplings among physical qubits representing a qubit from the original logical graph , also known as the embedding parameter setting problem . in this general frameworkpresented here , we only need to keep in mind that the performance of the device is determined by the programming degrees of freedom through the different hamiltonian specifications .the main question we discuss next : how do we select the hamiltonian realization that yields the best performance of the device ?it is the focus of this section to answer this question with a procedure requiring a minimum overhead , as described next .assume that for each hamiltonian specification you can easily request a total number of readouts from the quantum annealer . in some cases, these must be obtained in batches due to programming limitations .for example , in the current d - wave two processor hosted at nasa ames , programming the device with an annealing time per cycle of allows for a maximum number of readouts of 10,000 .if the device is operated at this maximum number is 50,000 , the reason why the maximum number of readouts at is while only at ] .therefore , while at a goal of can be obtained in one shot , at we need to request 5 repetitions of 10,000 each .let s denote the number of repetitions needed by and therefore the number of readouts in each repetition is . 
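To make the notion of a gauge concrete: in the quantum-annealing literature a gauge is usually a spin-reversal transformation that leaves the spectrum of the Ising objective unchanged while changing how the instance is programmed on the hardware. The sketch below applies such a transformation and undoes it on the readouts; the sign conventions are the standard ones and are assumed here rather than quoted from the text.

import random

def apply_gauge(h, J, g):
    # Spin-reversal gauge: h_i -> g_i h_i, J_ij -> g_i g_j J_ij, with g_i in {-1, +1}.
    # h : dict {i: h_i}, J : dict {(i, j): J_ij}, g : dict {i: +1 or -1}
    h_g = {i: g[i] * hi for i, hi in h.items()}
    J_g = {(i, j): g[i] * g[j] * Jij for (i, j), Jij in J.items()}
    return h_g, J_g

def undo_gauge(readout, g):
    # Map a hardware readout back to the original spin variables: s_i -> g_i s_i.
    return {i: g[i] * s for i, s in readout.items()}

def random_gauge(qubits, seed=None):
    # One candidate gauge drawn uniformly at random.
    rng = random.Random(seed)
    return {q: rng.choice((-1, 1)) for q in qubits}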
for each readout , is a corresponding ( eq . [ eq : eising ] ) .let s define by as the array containing the sorted energies , i.e. , such that for all .define as the negative of the mean value of the lowest percent of the energies in . since the array contains sorted energies from lowest to highest , then this expectation value is equivalent to calculating the mean value using the first values in .formally defined , since only a fixed percent of the lowest energy values are included in the calculation , we refer to this score function hereafter as the _elite mean_. this expression can be generalized to the case where several repetitions are used to collect the desirable total number of samples by defining the minus sign in the definition eq .[ eq : pielite ] gives the interpretation of a score function or a performance estimator ; the higher its value , the better the expected performance .suppose one has several quantum annealers or several hamiltonian specifications to choose from .if one is interested in assessing the performance of the device with a number of reads ( where is defined as the number of readouts needed to find the desired solution at least once with a 99% probability ) , we will show that serves as an effective score function or performance estimator that can be used to rank and to select the best of available quantum annealing specifications to solve the problem at hand .the intuition for this score function follows from what is expected of a quantum annealing device : when given a problem to be solved , the quantum annealers ( or the hamiltonian specifications ) that give the lower energy solutions are preferable , since a quantum annealer is designed to sample from the lowest energy configurations .therefore , the quantum annealer specification with the lower _ elite mean energy _ ( or higher elite - mean score ) , will give better performance . in quantum annealing ,the most natural gold standard for assessing performance is the probability of observing the ground state , since it translates into the probability of finding the optimal solution to the optimization problem studied . in more precise terms , let s define the success probability of our quantum annealing algorithms by , where corresponds to the number of observed ground states in the total number of requested readouts . given , the number of repetitions needed to observe the optimal solution at least once with a 99% probability , is given by , the instances that will benefit the most from our selection approach are those hard - to - solve instances with a very low probability of obtaining the ground state , say with an in the order of hundreds of millions or hundreds of millions like the example discussed in sec .[ subsec : tuneje ] ( ) .the purpose of this work is to show a correlation of the performance estimator and the real performance of the machine .but how do we define or assess real performance when the number of ground states is not reasonably attainable for all the hamiltonian configurations we explored ?take for example the instance in sec .[ subsec : tuneje ] .the default setting of the device does not provide even a single ground state solution after readouts ! to calculate for all gauges we would need to run for all 100 gauges being considered at least , which is beyond the scope of this work . 
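A direct transcription of the elite-mean score and of the repetitions-to-solution formula described above might look as follows. The default 2% fraction and the 99% target are taken from the text, while the handling of several repetitions (simply pooling all readout energies before sorting) and the function names are assumptions.

import numpy as np

def elite_mean_score(energies, q=0.02):
    # Pi_elite: minus the mean of the lowest q-fraction of the sorted energies.
    # energies : Ising energies, one per readout, with all repetitions pooled together.
    E = np.sort(np.asarray(energies, dtype=float))
    k = max(1, int(np.floor(q * len(E))))
    return -E[:k].mean()

def repetitions_to_solution(p_s, target=0.99):
    # Number of annealing cycles needed to observe the ground state at least once
    # with probability `target`, given the per-cycle success probability p_s > 0.
    return np.log(1.0 - target) / np.log(1.0 - p_s)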
in this work we explore two definitions of performance that allow us to rank different hamiltonian configurations even in the case where the ground state is not obtained after a significantly large .the first and natural criteria is a _ greedy - like performance rank_. this method gives a lower ( better ) rank to a hamiltonian specification with a lower energy . in the common case of ties , they are broken by looking at the frequency ( number of occurrences ) of their lowest energy state . in case these are the same , the next lowest energy is compared and if they are still the same , one compares their frequencies .the process continue until ties are broken , providing winners that are accordingly ranked lower .this method allows us to assign a unique ranking to any hamiltonian specification whether or not we measured any ground states . notice that in the particular case where the ground state is obtained for all the hamiltonian configurations explored , the performance rank will still assign lower ranks to hamiltonian specifications with larger values of , as desired , while breaking any ties that exist .benchmark studies assessing the presence or absence of speed - up of quantum annealers compared to classical processors resort to gauge selection as a way of obtaining reliable averaged results of the performance of the device .although it is known that gauge specification can significantly enhance the performance of the device , previous studies are limited to the scaling of the typical gauge since there is no _ a priori _ way to determine the optimal gauge .gauge specification is a particular example of hamiltonian specification discussed above .we present here how our performance estimator can be used to select the optimal gauges . to illustrate the procedure , we used a hard - instance out of a pool of random - spin glass instances similar to the ones reported elsewhere .this instance was provided by sergio boixo , who assessed that this particular instance had a simulated - annealing ( sa ) runtime of the order of hundred times longer than the median instance , from a pool of hundreds of thousands of instances within the same family ( instances with random couplings , with 509 qubits as shown in fig . [fig : chimera ] ) . as an abbreviation, we refer hereafter to this specific random - spin glass example as the rs instance . for qathis instance was shown to also be particularly hard after not obtaining any ground states after trying 16 gauges , with 10,000 readouts each , at 20 .[ fig : pielite_vs_performance](b ) corroborates this assessment ; the median gauge over a set of 100 random gauges has a , resulting in a expected number of repetitions to solution of annealing cycles . 
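The greedy tie-breaking rank described above can be implemented as a lexicographic sort key: compare the lowest observed energy first, then its frequency, then the next lowest energy, and so on. The sketch below builds such a key from a histogram of the observed energies; rounding energies before counting, and truncating the key to a fixed number of energy levels, are assumptions made so that ties are well defined.

from collections import Counter

def greedy_rank_key(energies, n_levels=10, decimals=9):
    # Sort key for the greedy-like performance rank: a lower key means a better rank.
    # Lower energies are better, and at equal energy a higher frequency is better,
    # so frequencies enter the key with a minus sign.
    counts = Counter(round(float(e), decimals) for e in energies)
    levels = sorted(counts)[:n_levels]
    key = []
    for e in levels:
        key.extend((e, -counts[e]))
    return tuple(key)

# ranking gauges, with runs[g] the list of energies observed under gauge g:
# ranked = sorted(range(len(runs)), key=lambda g: greedy_rank_key(runs[g]))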
since is about two hundred times greater than , applying our performance estimator to select the optimal gauge before engaging in lengthier runs is expected to significantly reduce the computational time .[ fig : pielite_vs_performance](a ) shows there is a strong correlation between the rank obtained with with and , compared to the greedy performance rank described above .the number of total readouts used to estimate the performance rank is million per gauge .the error bars correspond to the rank provided by the first and third quartile out of 40 different experiment each with for each of the 100 random gauges .the middle point corresponds to the median of the set of experiments .[ fig : pielite_vs_performance](b ) shows the same data set but with the raw values for and also serves the purpose of showing the count in the number of ground states , illustrating that gauge selection can have a significant impact in the device performance , an increase as much as one to two orders of magnitude ( see also fig .[ fig : pielite_vs_performance](d ) and fig .[ fig : consist_6f_20us](b ) reflecting how the gauge selection can influence . ) as expected , any performance estimator would be a noisy metric and not expected to have a 100% correlation with the real performance from an extensive number of readouts .the rs section of table [ table : fractions_greedyrank ] ( upper half ) addresses this issue .suppose one utilizes the following strategy .one decides to run 100 gauges with a fixed for each gauge . from this starting data set , andwhile processing the readouts in the search of an optimal solution , one can easily calculate .since , it is unlikely that this initial batch of calculations would contain the optimal solution .the refinement we propose here consists of using the information of the calculated _ on - the - fly _ to select , for example , the top 5 gauges ( gauges with the highest score ) out of the 100 random gauges .since the selected gauges are expected to have a better performance than the typical or average gauge , only the selected ones are used to continue with the remaining runs until the desired solution is found .given this strategy , table [ table : fractions_greedyrank ] answer the question : what is the probability that the absolute top gauge ( that is , ranked number 1 according to the performance rank in sec .[ subsec : ranking ] ) is contained in this set of predicted top 5 gauges ? what is the probability of one finding any of the top 2 gauges in the set of predicted top 5 gauges ? etc , etc . notice that at the level of which is times less than , one obtains a reasonably high % probability of obtaining the top 1 gauge in the set top 5 gauges predicted by .table [ table : fractions_greedyrank ] also addresses the question of the existence of an optimal value of for the performance estimator . 
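The screening strategy just described is essentially a two-stage loop: spend a small, fixed number of readouts per gauge, score every gauge with the elite mean, and reserve the remaining budget for the few highest-scoring gauges. In the sketch below, run_annealer is a hypothetical stand-in for whatever interface programs the device and returns readout energies, and score_fn would typically be the elite_mean_score function sketched earlier; all names are illustrative.

def screen_gauges(gauges, run_annealer, score_fn, n_screen=20000, n_top=5):
    # Stage 1: score every candidate gauge with a small number of readouts.
    scores = []
    for g in gauges:
        energies = run_annealer(gauge=g, num_reads=n_screen)
        scores.append(score_fn(energies))
    # Stage 2: keep only the n_top highest-scoring gauges for the extensive runs.
    order = sorted(range(len(gauges)), key=lambda i: scores[i], reverse=True)
    return [gauges[i] for i in order[:n_top]], scores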
in all the examples considered hereit seems to be the case that a value of % or % is optimal , a non - trivial result , since one might think incorrectly that the greater the number of low energy states included in the calculation of the better .this table also shows the expected increase in the probability of choosing the top gauges as becomes larger .note also the inclusion in the table of a greedy " column for each case .this new metric was included because the use of the greedy method for the _ performance rank _ described in section b begs the question of why one could not use an even simpler performance estimator consisting of the same _ greedy approach _ applied here to the case of a small number instead of .however , the table clearly shows that is consistently always as good as , if not much better than , the greedy performance estimator .this is not surprising since as expected the should be a more robust metric than the greedy approach and to be less sensitive to rare - event occurrences .a clear advantage of the is that it provides a score function that can be used for other purposes as will be shown elsewhere .the greedy approach as a score function is expected to be much flatter , due to this ranking relying heavily on the breaking of ties whenever there are only a few low energy states that many gauges reach ) . [ cols="^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^",options="header " , ] when solving real - world applications there are several additional subtleties to take into consideration . while in the case of the random spin - glass instances there is only one objective function appearing in the quantum annealing implementation , in eq .[ eq : eising ] , in instances derived from real - world applications is obtained from another cost energy function , a quadratic binary expression containing the logical qubits , before these are embedded into the hardware qubits , appearing in . from a practical application perspective, we are interested in the possibility of using our performance estimator to select the best hamiltonian specification with the smallest , therefore reducing the computational time . once the embedding problem is solved and one has a mapping of each logical qubit into a subset of , the first decision to be made is to select the strength of the coupling needed to keep the hardware spins representing a logical spin in alignment with each other . enforcing the embedding are equal .further fine tuning can be done by optimizing each parameter but this is beyond the scope of this work . ]for each value of , and after requesting , we can calculate the fractions of the that do not violate any of these embedding constraints , i.e. , with all the physical spins representing logical spins being properly aligned .these solutions are said to satisfy the _ strict embedding _ ( se ) requirement .the right -axis in fig .[ fig : cycle_needle_20us](a ) shows the fraction of solutions passing this requirement out of the total number of readouts , denoted as .intuitively the magnitude of can not be too weak since it will not achieve the goal of keeping the spins properly aligned ( same readout value for each variable representing the same logical qubit ) . 
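The fraction of strictly embedded solutions used above to locate the region of interest can be computed directly from the raw readouts. In the sketch below the chains (the sets of physical qubits representing each logical qubit) are assumed to be given as lists of qubit indices, and the variable names are illustrative.

def strict_embedding_fraction(readouts, chains):
    # f_SE: fraction of readouts in which every chain of physical qubits representing
    # a single logical qubit is internally aligned.
    # readouts : list of dicts {physical qubit: +1 or -1}
    # chains   : dict {logical qubit: list of physical qubits}
    ok = 0
    for r in readouts:
        if all(len({r[q] for q in chain}) == 1 for chain in chains.values()):
            ok += 1
    return ok / len(readouts)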
having a large value will certainly help in increasing the probability of not having any misaligned spins but it can not be too strong either , , since after dividing everything by to make all and , the original values and will be well below the precision level and the performance will be significantly affected by noise .therefore , a sweet spot with an optimal value of is expected . from our experience, serves as a guide for selecting the region of interest , denoted here as , with and corresponding to the onset of the plateou region with in the plot vs. .the value can be easily obtained experimentally in one - shot by setting , and one can use that value to search for . for the purpose of our discussion we selected two instances from the fault diagnosis application published elsewhere .the first instance , referred hereafter as 300k - dmf , was selected because despite its implementation with only 81 hardware qubits , it is unusually hard out of set of 100 random gauges ] when compared with other benchmark studies , yet has a success probability just high enough to allow for a sizable number of ground states even in the worst hamiltonian specification , ] within a reasonable number of readouts set to per gauge or considered . as shown in fig .[ fig : consist_6f_20us ] , a finite number of ground states for every single gauge and every value of allows to rank each of these hamiltonian specifications by their gold - standard performance rank and to compare with the rank predicted from our performance estimator . to satisfy the condition , the per gauges was calculated by using only .[ fig : consist_6f_20us ] , shows that even with so few readouts , there is a strong correlation between our score function and the number of ground states observed after .compared to the harder instances where , here we used an of 5% instead of 2% , since the latter would amount to computing the elite mean with only the two lowest values out of .this makes the estimator too noisy and flat , analogous to the greedy " performance estimator that only uses the lowest value to rank gauges . as shown in fig .[ fig : consist_6f_20us ] , calculating the elite mean over the five lowest energies already gives a good correlation with .the second instance , referred hereafter as 50m - dmf instance , serves the purpose of showing how our performance estimator can be used in a practical situation , for instances with probabilities much smaller .these are the instances we expect to surface in the next generation of quantum annealers .the instance 50m - dmf has the property of having a unique optimal solution , making it the most difficult to solve among the family of problem instances with six - faults to be diagnosed .more specifically , although the number of hardware qubits ( 96 qubits ) required to implement this instance is not unusually large , this instance turned out to be extremely difficult for qa ; not even a single - ground state was measured after annealing cycles , even after optimizing for the optimal but under the default no - gauge ! this instance was in large the motivation for defining a quick strategy to find the optimal hamiltonian specifications ( best and best gauge ) capable of finding the ground state ) .[ fig : cycle_needle_20us ] describes the suggested iterative approach used to optimize both the value of and to select the optimal gauges in instances requiring a direct embedding approach . 
starting withthe no - gauge one can scan for the candidate values of and select the value of with the highest score .in contrast with the case of the rs instance above , here we can not use to calculate the score function since , for example , the lowest energies of will be different for every value of ( because of the energy renormalization to fit all programmable values and within the dynamical range and ) . to circumvent this issuewe compute after error - correcting the solutions with majority voting when going from , and then sorting the states according to before selecting the 2% of the lowest energies used in the computation of .the next step in the procedure is to perform a _ gauge scan _ at the selected value of from the _scans_. considering that we are dealing with instances with , it is not a significant usage of computational resources of perform calculations with a number of gauges on the order of about 100 .notice that there is really no overhead while doing the gauge scans , since for every gauge considered , one needs to post - process all the solution readouts ( e.g. , with majority voting ) while searching for the states with the optimal solutions anyways .since the energies of every single solution needs to be calculated , the only overhead in calculating comes from a cheap sorting of these energies before calculating the elite mean . for np - complete problems ,we can always tell if we have found the desired answer .also , for a large family of problems such as those np - hard problems where the np - complete version is still interesting , one can still stop the search if the desired solution is obtained ( e.g. , we can ask whether there exists a solution with an energy lower than a reference energy , with the latter being for example the best solution attainable with a state of the art classical solver ) .therefore , trying gauges in the search of an optimal gauge is not an unfeasible idea . after calculating for the complete pool of gauge candidates, one can proceed to another set of -scans by using the best gauge with the highest score . as shown in fig .[ fig : consist_6f_20us](c ) and from our experience with other problem instances where the procedure was even applied at different annealing times , in most of the cases the second optimal matched the same from the first scan under the no - gauge setting .even in the cases where moved to a new value , the change was in the neighborhood of the first optimal value . as shown in fig . [fig : consist_6f_20us](a ) and ( c ) , as long as one is near the optimal value of the performance is not significantly affected .the gauge selection , fig .[ fig : consist_6f_20us](b ) , seems to have a much larger impact in the performance . since the first -scan does the job of taking us to the neighborhood of optimality ( out of the set of candidates considered ) , it is reasonable to conclude that a second is not necessary and it is better to focus on the top gauges obtained from a gauge scan of 50 - 100 gauges . for easy instancesa large gauge set would be unnecessary since the optimal solution will likely appear before one finishes going through the target number of 100 or so gauges . as shown in fig .[ fig : pielite_vs_performance ] , the performance estimator proposed here is a noisy metric . 
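The post-processing used inside these scans, majority voting from physical to logical spins followed by evaluation of the problem energy so that scores obtained at different values of the chain-coupling strength remain comparable, can be sketched as follows. Breaking exact ties towards +1 is an assumption, as is the convention that the problem energy is the Ising objective defined earlier evaluated on the logical fields and couplings.

def majority_vote(readout, chains):
    # Decode one hardware readout into logical spins by majority vote within each chain.
    logical = {}
    for lq, chain in chains.items():
        total = sum(readout[q] for q in chain)
        logical[lq] = 1 if total >= 0 else -1   # ties broken towards +1 (assumption)
    return logical

def problem_energy(spins, h, J):
    # Ising objective of a logical configuration for the problem Hamiltonian (h, J).
    E = sum(h[i] * spins[i] for i in h)
    E += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return E

# score at a given chain strength, reusing elite_mean_score from the earlier sketch:
# elite_mean_score([problem_energy(majority_vote(r, chains), h, J) for r in readouts])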
for example , there is no guarantee that the top gauge is the same one as the one predicted by .therefore , instead of selecting only the gauge predicted as top 1 , it is advisable to select a handful of the predicted as top gauges as indicated in fig [ fig : cycle_needle_20us](b ) .it is with this selected set that one performs the extensive runs , but where now has been significantly reduced given that we are running with a set that includes the optimal gauge from the random set .table [ table : fractions_greedyrank ] shows that selecting the predicted top five gauges has a high probability of containing the top 1 gauge yielding the largest number of ground states . predicting any of the top 2 gaugeshad a probability .this is very remarkable considering that in this particular 50m - dmf problem running a low - performing gauge would lead to a significantly large time to solution . as mentioned above , the default no - gauge did not find the solution after 50 millions reads , therefore yielding a millions , while any of the top 3 gauges require millions , providing at least an order of magnitude improvement in this hard - to - solve instance for the qa processor .in the case of real - world applications that use ancilla variables in the construction of their post - processing strategies are also possible . in these casesit is more efficient to process the solution , for example , evaluating the problem energy , with only the relevant variables defining the problem . more specifically , and without loss of generality, we can express the set of resulting logical qubits in the expression as = , where corresponds to the set of qubits or binary variables that define completely the problem description and that can be extracted to evaluate the energy of the problem , . in these cases , and with the intention of increasing the chances of finding the optimal solution , it is more efficient to process the solution readouts with and not with .this postprocessing strategy can only help in finding the optimal solution , since for every readout , therefore allowing for the possibiliy of finding optimal solutions in solutions that had been penalized by the ancilla constrains .our preliminary results indicate that for these problem instances , it is advisable to look also at the top 5 gauges obtained from the greedy approach ( same approach described in table [ table : fractions_greedyrank ] but now using instead of ) along with the top 5 from the score function , also calculated with instead of .the strategy proposed here consists of taking as the selected top gauges " the union set of these two sets of top 5 gauges , and perform with these gauges the extensive runs with .we defined a score function intended to estimate the performance of quantum annealers whose applicability does not rely on obtaining ground states corresponding to the desired solution . 
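For instances with ancilla variables, the selection rule described above, taking the union of the five gauges with the best greedy key and the five with the highest elite-mean score (both computed on the problem energies of the relevant variables only), amounts to a few lines. Gauge identifiers are assumed to be hashable labels, and the function name is illustrative.

def combined_top_gauges(elite_scores, greedy_keys, k=5):
    # elite_scores : dict {gauge label: Pi_elite score}, higher is better
    # greedy_keys  : dict {gauge label: greedy sort key}, lower is better
    top_elite = sorted(elite_scores, key=elite_scores.get, reverse=True)[:k]
    top_greedy = sorted(greedy_keys, key=greedy_keys.get)[:k]
    return set(top_elite) | set(top_greedy)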
we observed a strong correlation of our performance estimator with the performance of the device even in the case where the number of readouts used to calculate it was several orders of magnitude less than the number of readouts required to find the desired solutions .the score function is based on a tail conditional expectation value , corresponding to the _ elite mean _ over a small percent representing the readouts with the lowest energies .we showed it can be used to efficiently select the optimal gauges from a large pool of random gauges and in setting hamiltonian parameters appearing in the implementation of real - world applications .although it has been previously shown that the decisions in programming quantum annealing devices can significantly impact the performance of the device , thus far comparison of performance of quantum annealers to algorithms on conventional classical processors was limited to average performance over the selection of parameters explored .this study opens the possibility of revisiting such scaling studies , now with the opportunity to select in advance the best configuration of the device . having the possibility of selecting the specifications ( best gauges or other optimal parameters ) will be indispensable once we start solving instances intrinsically harder with the new generation of quantum annealers .the overhead incurred to apply the selection procedure presented here is constant and it does not scale with the size of the system .we showed that even in cases where , the method still works with a large probability of selecting the top gauges . in the case of real - world applications , the iterative strategy proposed in sec .[ subsec : tuneje ] requires essentially no overhead in calculating the performance estimator and used to rank the random gauges .since the data needs to be processed anyway ( e.g. , calculation of and majority voting while testing whether or not the desired solution has been found ) , the only overhead incurred is the time needed to sort the solution before calculating the elite mean . 
in the case of other parameter settings such as the one used in the embedding problem , our performance estimator provides a very efficient approach by pinning down the region where the device has its best performance .although our strategy allows us to select the best hamiltonian specification in quantum annealers , we do not expect that it will be enough to change the complexity class seen in scaling studies .certainly , it could easily provide a speed - up of an order of magnitude from the default methods , as seen in some of the examples presented here , and it might be to - date the only feasible way to obtain solutions to hard - to - solve computational problems ( either random spin - glass benchmarks or real - world applications ) in the next generations of quantum annealers .this work was supported in part by the office of the director of national intelligence ( odni ) , intelligence advanced research projects activity ( iarpa ) , via iaa 145483 .we want to acknowledge the support of nasa advanced exploration systems program and nasa ames research center .the authors thank sergio boixo for providing the 509-qubit rs problem instance , and bryan ogorman , eleanor rieffel , and davide venturelli for helpful discussions .a.p - o , j.f , r.b and v.n.s contributed to the ideas presented in the paper .a.p - o and j.f designed and ran the experiments and wrote the manuscript .all the authors revised the manuscript .unit cells that consist of 8 quantum bits each . within a unit cell ,each of the 4 qubits in the left - hand partition ( lhp ) connects to all 4 qubits in the right - hand partition ( rhp ) , and vice versa .a qubit in the lhp ( rhp ) also connects to the corresponding qubit in the lhp ( rhp ) of the units cells above and below ( to the left and right of ) it .edges between qubits represent couplers with programmable coupling strengths .blue qubits indicate the 509 usable qubits , while grey qubits indicate the three unavailable ones out of the 512 qubit array.,scaledwidth=45.0% ]in the case of gauge selection , a commonly used `` rule - of - thumb '' that had persisted in the community is that the gauge maximizing the number of antiferromagnetic couplings , is preferred .the physical motivation behind this rule " is that the precision in the specification of a ( antiferromagnetic coupling ) is more robust than its negative ( ferromagnetic ) counterpart . a more detailed analysis including 100 gauges for several problem applications considered ( see fig .[ fig : jpositives ] ) shows that such rule - of - thumb does not hold in any of the hard instances considered here .notice there is no correlation between the number of positive couplers and the performance of the specified gauge .we did no see any correlation either in any other quantity similar to : parameters studied include the number of ( for the case of real - world applications with direct embedding ) , the number of and the number of that are non- . for all those cases ,still no correlation was found . resulting from a specified gauge and the performance in the device , ruling out the common belief that the larger the number of , the better .shown here are examples from three different application domains . , scaledwidth=48.0% ]
With the advent of large-scale quantum annealing devices, several challenges have emerged. For example, it has been shown that the performance of a device can be significantly affected by several degrees of freedom in how it is programmed, a common example being gauge selection. To date, no experimentally tested strategy exists for selecting the best programming specifications. We developed a score function that can be calculated from a number of readouts much smaller than the number required to find the desired solution. We show how this performance estimator can be used to guide, for example, the selection of optimal gauges out of a pool of random gauge candidates, and how to select the values of parameters for which there is no _a priori_ knowledge of the optimal setting. For the latter, we illustrate the concept by applying the score function to set the strength of the parameter intended to enforce the embedding of the logical graph into the hardware architecture, a challenge frequently encountered in the implementation of real-world problem instances. Since the harder the problem instance, the more useful the strategies proposed in this work become, we expect them to significantly reduce the time of future benchmark studies and to help find solutions of hard-to-solve real-world applications implemented on the next generation of quantum annealing devices.
Kardar, Parisi and Zhang (KPZ) proposed on physical grounds that a wide variety of irreversible stochastically growing interfaces should be governed by a single stochastic PDE (with two model-dependent parameters). Namely, let h(x,t) be the height function at time t and position x; then the KPZ equation is \partial_t h = \nu\,\partial_x^2 h + \tfrac{\lambda}{2}(\partial_x h)^2 + \sqrt{D}\,\xi, where \xi is a local noise term modeled by space-time white noise. Since then, it has been of significant interest to make mathematical sense of this SPDE (which is ill-posed due to the non-linearity) and to find its solutions for large growth time. Significant progress has been made towards understanding this equation in the one-dimensional case. Specifically, it is believed that the dynamical scaling exponent is z = 3/2. This should mean that for any growth model (also polymer models) in the same universality class as the KPZ equation (i.e., the KPZ universality class), after centering by its asymptotic value and rescaling with the growth time regarded as the large parameter (in the literature the equivalent choice of a small parameter tending to zero is also used), the limit of the rescaled height function should exist and be independent of the model. Moreover, the limit should, regardless of microscopic differences in the original models, converge to the same space-time process. Most of the rigorous work done in studying the statistics associated with this fixed point (multi-time limit results have been derived, first for the problem of a tagged particle in the TASEP and then in more general models, but the different times there are restricted to lie in an interval of vanishing macroscopic width) has dealt with the spatial process (obtained as the asymptotic statistics of the rescaled height as a process in the spatial variable, as time goes to infinity) and not with how the spatial process evolves in time. The exact form of these statistics depends only on the type of initial geometry of the growth process (e.g., the Airy_1 process for non-random flat geometries and the Airy_2 process for wedge geometries; see the review). Computations of exact statistics require a level of solvability and thus have only been carried out in the context of certain solvable discrete growth models or polymer models in the KPZ universality class.
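The space-time scaling alluded to above is often written as follows; the constants c_1, c_2 and b below are model-dependent, and the precise normalization is a conventional choice rather than one fixed by the text:

\[
h_T(\theta, X) \;:=\; \frac{h\bigl(b\,X\,T^{2/3},\,\theta T\bigr) \;-\; c_1\,\theta T}{c_2\,T^{1/3}},\qquad T\to\infty,
\]

the conjecture being that, for each fixed \theta, the process X \mapsto h_T(\theta,X) converges to a universal limit, and that the full space-time process in (\theta, X) is universal as well.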
the partially / totally asymmetric simple exclusion process ( p / tasep ), last passage percolation ( lpp ) with exponential or geometric weights , the corner growth model , and polynuclear growth ( png ) model constitute those models for which rigorous spatial fluctuation results have been proved .recently , progress was made on analyzing the solution of the kpz equation itself , though this still relied on the approximation of the kpz equation by a solvable discrete model .the slow decorrelation phenomenon provides one of the strongest pieces of evidence that the above scaling is correct .indeed , slow decorrelation means that converges to zero in probability for any .fix times of the form ( for and ) .then , as long as , the height function fluctuations , scaled by and considered in a spatial scale of , will be asympotically ( as ) the same as those at time .specifically , we introduce a generalized lpp model which encompasses several kpz class models .then we give sufficient conditions under which such lpp models display slow decorrelation .these conditions ( the existence of a limit shape and one - point fluctuation result ) are very elementary and hold for all the solvable models already mentioned , and are believed to hold for all kpz class models .the proof that slow decorrelation follows from these two conditions is very simple it relies on the superadditivity property of lpp and on the simple observation that if and both and converge in law to the same random variable , then converges in probability to zero ( see lemma [ bac_lemma ] ) .previously , the slow decorrelation phenomenon was proved for the png model .therein the proof is based on very sharp estimates known in the literature only for the png .apart from the png , the only other model for which slow decorrelation has been proved is tasep under the assumption of stationary initial distribution . besides being of conceptual interest , the slow decorrelation phenomenon is an important technical tool that allows one to , for instance : ( a ) easily translate limit process results between different related observables ( e.g. , total current , height function representation , particle positions in tasep ; see ) , and more importantly , ( b ) prove limit theorems beyond the situations where the correlation functions are known ( see section [ corner_growth_model_sec ] ) .a further application is in extending known process limit results to prove similar results for more general initial conditions / boundary conditions . in section [ gen_theory_sec ]we introduce the general framework for lpp models in which we prove a set of criteria for slow decorrelation ( theorem [ growth_thm ] ) . in the rest of the paper, we apply theorem [ growth_thm ] to various models in the kpz class , which can be related in some way with a lpp model : the corner growth model , point to point and point to line lpp models , tasep , pasep ( which requires a slightly different argument since it can not be directly mapped to a lpp problem ) and png models .finally we note extensions of the theorem to first passage percolation and directed polymers , provided that ( as conjectured ) the same criteria are satisfied .the authors wish to thank jinho baik for early discussions about this and related problems .i. 
corwin wishes to thank the organizers of the `` random maps and graphs on surfaces '' conference at the institut henri poincar , as much of this work was done during that stay .travel to that conference was provided through the pire grant oise-07 - 30136 for which thanks goes to charles newman and grard ben arous for arranging for this funding .i. corwin is funded by the nsf graduate research fellowship .s. pch would like to thank herv guiol for useful discussions on tasep and her work is partially supported by the agence nationale de la recherche grant .the authors are very grateful to the anonymous referee for careful reading and a number of constructive remarks .in this section we consider a general class of last passage percolation models ( or equivalently growth models ) . given the existence of a law of large numbers ( lln ) and central limit theorem ( clt ) for last passage time ( or for the associated height function ) , we prove that such models display slow decorrelation along their specific , model dependent , `` characteristic '' directions .we consider growth models in for which may be lattice based or driven by poisson point processes .we define a directed lpp model to be an almost surely sigma - finite random non - negative measure on . for example we could take to be a collection of delta masses at every point of with weights given by random variables ( which need not be independent or identically distributed ) .alternatively we could have a poisson point process such as in the lpp realization of the png model .we will focus on a statistic we call the _ directed half - line to point last passage time_. we choose to study this since , by specifying different distributions on the random measure one can recover statistics for a variety of kpz class models . in order to define this passage time we introduce the half - line where is the coordinate of the point .[ c] [ l] [ l] [ l] [ c] and the space and time axis are label .a directed path from to the point is shown.,title="fig:",height=170 ] it is convenient for us to define a second coordinate system which we call the space - time coordinate system as follows : let be the rotation matrix which takes to . then the space - time coordinate system is applied to the standard basis .the line ( which contains ) is the inverse image of and we call it the -axis ( for `` time '' ) , see figure [ slow_dec_space_time ] for an illustration .the other space - time axes are labeled through ( these are considered to be `` space '' axes ) .call a curve in a directed path if is a function of and is 1-lipschitz .two points are called `` time - like '' if they can be connected by such a path .otherwise they are called `` space - like '' . to a directed pathwe assign a passage time which is the measure , under the random measure of the curve .now we define the last passage time from the half - line to a point as where we understand the supremum as being over all directed paths starting from the half - line and going to . one may also consider point to point last passage time between and which we write as .this is the special case of on . in section [ apps ]we show how , by specifying the random measure differently , this model encompasses a wide variety of lpp models and related processes ( such as tasep and png ) . just to illustrate though , take and let be composed of only delta masses at points in with mass exponentially distributed with rate 1. 
then is the last passage time for the usual lpp in a corner ( or equivalently the corner growth model considered in section [ corner_growth_model_sec ] ) .we present our result in this more general framework to allow for non - lattice models such as the png model . we can now state a result showing that _ slow decorrelation occurs _ in any model which can be phrased in terms of this type of last passage percolation model _ provided both a lln and a clt hold_. [ growth_thm ] fix a last passage model in dimension with by specifying the distributions of the random variables which make up the environment . consider a point and a time - like direction . if there exist constants ( depending on , , and the model ) : and non - negative ; ; ; distributions , ; and scaling constants such that then we have slow decorrelation of the half - line to point last passage time at , in the direction and with scaling exponent , which is to say that for all , [ non_const_remark ] there are many generalizations of this result whose proofs are very similar . for instancethe fixed ( macroscopic ) point and the fixed direction can , in fact , vary with as long as they converge as .one may also think of the random lpp measure ( and the associated probability space ) as depending on .thus for each the lpp environment is given by defined on the space .the probability will therefore also depend on , however an inspection of the proof below shows that the whole theorem still holds with replaced by .recall the super - additivity property : which holds provided the last passage times are defined on the same probability space .this follows from the fact that , by restricting the set of paths which contribute to to only those which go through the point , one can only decrease the last passage time .the following lemma plays a central role in our proof .[ bac_lemma ] consider two sequences of random variables and such that for each , and are defined on the same probability space .if and as well as then converges to zero in probability .conversely if and converges to zero in probability then as well . from now on , we assume that the different last passage times and are realized on the same probability space . also ,by absorbing the constants and into the distributions , we may fix them to be equal to one .using super - additivity we may write where is a ( compensator ) random variable . rewriting the above equation in terms of the random variables , and and dividing by we are left with by assumption on , and hence we know that the distribution of and separately of converge to the same distribution . however , since is always non - negative , we also know that .therefore , by lemma [ bac_lemma ] their difference , , converges to zero in probability .thus converges to zero in probability . since the theorem immediately follows .the aim of this section is to make a non - exhaustive review of the possible fields of applications of theorem [ growth_thm ] .we introduce a few standard models and explain , briefly , how they fit into the framework of half - line to point lpp and what the consequences of theorem [ growth_thm ] are for these models .consider a set in with initial condition and evolving under the following dynamics : from each _ outer corner _ of a -box is fill at rate one ( i.e. 
, after exponentially distributed waiting time of mean ) .see figure [ corner_fig ] for an illustration of this growth rule where the model has been rotated by .[ cb] [ c] [ c] [ c] [ c] [ c] [ c] [ c] [ c] are the random , exponentially distributed times it takes to fill an outer corner .the black line is the height function at time .there are two outer corners at that time.,title="fig:",height=188 ] this random variable is well known to define a last passage time in a related directed percolation model , as we now recall .let be the waiting time for the outer corner to be filled , once it appears .a path from to is called directed if it moves either up or to the right along lattice edges from to .to each such path , one associates a passage time .then i.e. , is the last passage times from to .alternatively one can keep track of in terms of a height function defined by the relationship ( together with linear interpolation for non - integer values of ) .note that for given , for large enough .thus the corner growth process is equivalent to the stochastic evolution of a height function with and growing according to the rule that local valleys ( ) are replaced by local hills ( ) at rate one .we will speak mostly about the height function , but when it comes to computing and proving theorems it is often easier to deal with the last passage picture .if we more generally consider the height function arising from a lpp model with an ergodic measure ( e.g. , iid random lattice weights ) , then super - additivity and the kingman s ergodic theorem implies the existence of a ( possibly infinity ) limit growth evolution since lpp is a variation problem , the limiting profile is also given by the solution to a variational problem which , when translated into the limiting height function means that satisfies a hamilton - jacobi pde for a model dependent flux function . such pdes may have multiple solutions , and corresponds to the unique weak solutions subject to entropy conditions .such pdes can be solved via the method of characteristics .characteristics are lines of slope along which initial data for is transported . in our present caseif we set then satisfies the burgers equation and the characteristic lines are of constant velocity emanating out of the origin .it is the fluctuations around this macroscopic profile which are believed to be universal .johansson proved that asymptotic one - point fluctuations are given by where is the tracy - widom distribution defined in . 
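In the last passage formulation with exponential weights of rate one, Johansson's one-point fluctuation theorem referred to above takes the following standard form, reproduced here for convenience:

\[
\lim_{N\to\infty} \mathbb{P}\!\left( \frac{G(N,N) - 4N}{2^{4/3}\,N^{1/3}} \le s \right) \;=\; F_{\mathrm{GUE}}(s),
\]

where G(N,N) is the point-to-point last passage time to (N,N) and F_{GUE} is the Tracy-Widom GUE distribution.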
unlike the traditional clt the fluctuations here are in the order of and the limiting distribution is not gaussian .likewise we may consider the fluctuations at multiple spatial locations by fixing here and measures the fluctuations with respect to the limit shape behavior .then , in the large time limit , the joint - distributions of the fluctuations are governed by the so - called process , denoted by .this process was introduced by prhofer and spohn in the context of the png model ( see also ) .a complete definition of the airy process is recalled in .more precisely , it holds that where , and are real numbers .of course , ( [ eq8 ] ) is the special case of ( [ theta_zero_thm ] ) for and .we now consider how fluctuations in the height function are carried through time .for instance , if the fluctuation of the height function above the origin is known at time ( large ) for how long can we expect to see this fluctuation persist ( in the scale ) ?the answer is non - trivial and given by applying theorem [ growth_thm ] : there exists a single direction in space - time along which the height function fluctuations are carried over time scales of order , while for all other directions only at space - time distances of order .indeed , given a fixed velocity , any exponent , and any real number , -[h\big(vt , t\big ) - t\bar{h}(v ) ] \right| \geq m t^{1/3}\right ) = 0,\ ] ] for any .thus , the height fluctuations at time at position and at time and at position differs only of .these fixed velocity space - time lines are the characteristic of the burgers equation above .thus , the right space - time scaling limit to consider is that given in equation ( [ fixedpt ] ) , with and now taken to be the tasep height function and asymptotic shape , rather than that of the kpz equation .as noted below equation ( [ fixedpt ] ) , the value of the velocity should not affect the law of the limiting space - time process .as evidence for this , equation ( [ theta_zero_thm ] ) shows that we encounter the process as a scaling limit regardless of the value of .this amounts to saying that the fixed marginals of the full space - time limit process of equation ( [ fixedpt ] ) are independent of .up to now only the `` spatial - like behavior '' of the space - time process ( [ fixedpt ] ) , i.e. , the process in the variable for fixed ( which one can set to w.l.o.g . ) has been obtained , while the process in the variable remains to be unraveled .a consequence of ( [ eq11 ] ) is that if we look at the fluctuations at two moments of time and with with , it corresponds to taking in the r.h.s . of ( [ fixedpt ] ) .then in the limit , the limit process is identical to the process for fixed .so , if we determine the limit process for any space - time cut such that in the limit , , then , thanks to slow decorrelation , one can extend the result to any other space - time cut with the same property .in the following we refer to this property as _ the process limit extends to general dimensional space - time directions _ , meaning that we have dimension with spatial - like behavior and dimensions in the orthogonal direction .as indicated in the introduction , slow decorrelation also allows for instance ( a ) to translate the limit of different related observables and ( b ) to extend results on fluctuation statistics to space - time regions where correlation functions are unknown .we illustrate these features in the context of the corner growth model . 
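For reference, the slow decorrelation statement just used can be written, along the characteristic ray of fixed velocity v and up to model-dependent constants, as: for any exponent \nu < 1 and any m > 0,

\[
\lim_{t\to\infty} \mathbb{P}\Bigl( \bigl| \bigl[h\bigl(v(t+t^{\nu}),\,t+t^{\nu}\bigr) - (t+t^{\nu})\,\bar h(v)\bigr] - \bigl[h(vt,\,t) - t\,\bar h(v)\bigr] \bigr| \;\ge\; m\,t^{1/3} \Bigr) \;=\; 0 .
\]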
for simplicity, we consider the case where .fix , , real numbers and .then set \(a ) we first show that one can recover ( [ theta_zero_thm ] ) from an analoguous statement in the corresponding lpp model using ( [ height_lpp ] ) and slow decorrelation .we start from a result in lpp .consider the fixed slice of space - time .this is obtained by setting in ( [ eq13 ] ) , for which using the schur process , it is proven that for , , to get the result on the height function ( [ theta_zero_thm ] ) using ( [ height_lpp ] ) , one would need to make the choice : .then we have thus , to obtain ( [ theta_zero_thm ] ) ( for ) from ( [ lpp_limit_thm_eqn ] ) it is actually sufficient to project in ( [ eq16 ] ) on the line along the characteristic line passing through , see figure [ figuredppext ] for an illustration .one finds that this projection gives the scaling ( [ eq14 ] ) but with replaced by some as .the reason is that the characteristics for have slope slightly different from .finally , slow decorrelation ( theorem [ growth_thm ] , see also ( [ eq11 ] ) ) and the union bound imply then that [ cb] [ cb] [ c] for some away from the line . then, the fluctuations of the passage time at the locations of the black dots are , on the scale , the same as those of their projection along the critical direction to the line , the white dots.,title="fig:",height=188 ] \(b ) the results for ( [ theta_zero_thm ] ) and ( [ lpp_limit_thm_eqn ] ) are derived by using the knowledge of ( determinantal ) correlation functions .the techniques used for these models are however restricted to _ space - like paths _ ( in the best case , see ) , i.e. , for sequences of points such that and ( which can not be connected by directed paths ) .now , choose in ( [ eq13 ] ) for some real .then , it means that we look at the height fluctuations at times , with thus , one can cover much more than only the space - like regions . as before , the projection along the characteristic line of on leads to ( [ eq18 ] ) with and with a slightly modified ( i.e. , ) .then , using slow decorrelation , one can extend ( [ theta_zero_thm ] ) to the following result : fix , , real numbers and .set then we have this type of computations can be readily adapted to the other kpz models considered in the sequel .consider the following random measure on : where and are non - negative random variables .one may consider directed paths to be restricted to follow the lattice edges .this is just standard ( point - to - point ) last passage percolation ( as considered for instance in ) .we will restrict ourselves to the case where , i.e. , lpp in the 2-dimensional corner .here we write for weights . the conditions for our slow decorrelation theorem to hold amount to the existence of a lln and clt .presently , for point to point lpp , this is only rigorously know for the two solvable classes of weight distributions exponential and geometric . for general weight distributions , the existence of a lln follows from superadditivity ( via the kingman subadditive ergodic theorem ) , though the exact value of the lln is not known beyond the solvable cases .none the less , universality is expected at the level of the clt for a very wide class of underlying weight distributions .that is to say that , after centering by the lln , and under scaling , the fluctuations of lpp should always be given by the distribution . 
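In the lattice setting just described, the point-to-point last passage time can be written explicitly. With i.i.d. weights w_{i,j} attached to the sites of the positive quadrant,

\[
G(M,N) \;=\; \max_{\pi\,:\,(1,1)\nearrow(M,N)} \;\sum_{(i,j)\in\pi} w_{i,j},
\]

where the maximum is over up/right lattice paths from (1,1) to (M,N); the expected universality statement is that, after centering by the law of large numbers and dividing by a model-dependent constant times N^{1/3}, G(M,N) converges to the Tracy-Widom GUE distribution for a wide class of weight distributions.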
in the results we now state we will restrict attention to exponential weights , as geometric weights lead to analogous results .define lpp with _ two - sided boundary conditions _ as the model with independent exponential weights such that , for positive parameters , recall that an exponential of rate has mean .this class of models was introduced in ( and for geometric weights in ) and includes the one considered in .a full description of the one - point fluctuation limit theorems for was conjectured in ( see conjecture 7.1 ) and a complete proof was given in .these limit theorems show that the hypotheses for slow decorrelation are satisfied and hence theorem [ growth_thm ] applies .we present an adaptation of theorem 1.3 of stated in such a way that theorem [ growth_thm ] is immediately applicable . as such, we also state the outcome of applying theorem [ growth_thm ] ( see figure [ slow_dec_2_sided_lpp_characteristics ] for an illustration ) . the characteristic direction as well as the exponents and limiting distributions for the fluctuations depend on the location of the point as well as the values of and . due to the radial scaling of it suffices to consider of the form for . in the following theorem , , is a constant depending on the direction , is a constant depending on the direction , and is .we also refer the reader to for the definitions of the distribution functions , and which arise here . 1 . if ( which implies that ) then , , and is either ( in the case of strict inequality ) or ( in the case of either , but not both , equalities ) or ( in the case where all three terms in the inequality are equal ) . then , there is slow decorrelation in the direction for all .2 . if and then , , and is the standard gaussian distribution .then there is slow decorrelation in the direction for all .3 . if and then , , and .then there is slow decorrelation in the direction for all .4 . if and then , , and .then there is slow decorrelation in the direction for all .5 . if and then , , and .then there is slow decorrelation in the direction for all .6 . if and then , , and is distributed as the maximum of two independent gaussian distributions ._ then there is no slow decorrelation ._ this last passage percolation model is related to a tasep model with two - sided initial conditions ( which we discuss in subsection [ asep ] ) . as explained before the characteristics are those for the burgers equation .the first three cases above correspond with a situation that is known of as a _ rarefaction fan _ , while the last three correspond with the _shockwave_. the above result is illustrated in figure [ slow_dec_2_sided_lpp_characteristics ] .the left case displays the rarefaction fan ( the fanning of the characteristic lines from the origin ) and the right case displays a shockwave ( the joining together of characteristic lines coming from different directions ) .[ c](a ) [ c](b ) [ c]case 1 [ c]case 2 [ c]case 3 [ c]case 4 [ c]case 5 [ c] [ l] [ c] ( actually shown ) ; ( b ) ( actually shown ) . 
as ( the height along the dashed line ) varies, the case of fluctuation theorem changes , as does the direction of slow decorrelation ( given by the direction of the thin lines).,title="fig:",height=226 ] in addition to one - point fluctuation limits , the above two - sided boundary condition lpp model has a fully classified limit process description .the description was given in for ( known as the stationary case ) and in for all other ( non - equilibrium ) boundary conditions .these process limits are obtained using determinantal expressions for the joint distribution of the last passage times for points along fixed directions .thus , initially , one only gets process limits along fixed lines . as explained in section [ corner_growth_model_sec ] and in slow decorrelation ,however , implies that the appropriately rescaled fluctuations at the points which are off of this line ( to order for ) have the same joint distribution as their projection along characteristics to the line ( see figure 1 of for an illustration of this ) .a completely analogous situation arises in the case of geometric , rather than exponential weights ( this model is often called discrete png ) .such a model is described in and the one - point limiting fluctuation statistics are identified .the spatial process limit is characterized in .these results are only proved in a fixed space - time direction , though applying theorem [ growth_thm ] we can extend this process limit away from this fixed direction just as with the exponential weights . a slightly different model with boundary conditionswas introduced in and involves _thick one - sided boundary conditions_. fix a , parameters , and set for .just as above , we define independent random weights on , this time with exponential random variables of rate ( mean ) .section 6 of explains how results they obtain for perturbed wishart ensembles translate into a complete fluctuation limit theorem description for this model . just as in the two - sided boundary case ,those limit theorems show that the hypotheses of theorem [ growth_thm ] are satisfied and therefore there is slow decorrelation .the exponent depends on the point and the strength of the boundary parameters and can either be , with random matrix type fluctuations , or with gaussian type ( more generally for some ) fluctuations ( see theorem 1.1 ) . the exponent and the limiting distribution is .the direction of the slow decorrelation depends on the parameters and the point ( we do not write out a general parametrization of this direction as there are many cases to consider depending on the ) .the fluctuation process limit theorem has not been proved for this model , though the method of would certainly yield such a theorem .also , analogous results for the geometric case have not been proved either but should be deducible from the schur process .tasep is a markov process in continuous time with state space ( think of 1s as particles and 0s as holes ) .particles jump to their right neighboring site at rate , provided the site is empty .the waiting time for a jump is exponentially distributed with mean ( discrete - time versions have geometrically distributed waiting times ) . 
see for a rigorous construction of this process .tasep with different initial conditions can be readily translated into lpp with specific measures and hence theorem [ growth_thm ] may be applied .slow decorrelation can thus be used to show that fluctuation limit processes can be extended from fixed space - time directions to general dimensional space - time directions .an observable of interest for tasep is the integrated current of particles defined as the number of particles which jumped from to during the time interval ] as the random initial conditions in which particles initially occupy sites to the left of the origin with probability ( independently of other sites ) and likewise to the right of the origin with probability .it was proven in that two - sided tasep can be mapped to the lpp with two - sided boundary conditions model ( [ eq31 ] ) with and . using this connection and slow decorrelation, one can show that all the results stated for the lpp model ( [ eq31 ] ) can be translated into their counterpart for two - sided tasep .this is made in detail in ( which uses some arguments of this paper ) , where we prove a complete fluctuation process limit for which complements the recent result of for .the characteristic line leaving position has slope . on top of this, the entropy condition ensures that if , there will be a rarefaction fan from the origin which will fill the void between lines of slope and . the rankine - hugoniot condition applies to the case where and introduces shockwaves with specified velocities when characteristic lines would cross .these two types of characteristics are illustrated side - by - side in figure [ slow_dec_2_sided_bc ] .another variation is tasep with _slow particles or slow start - up times _ , which is considered in .it may likewise be connected to the lpp with thick one - sided boundary conditions model which we previously introduced . as a resultwe may similarly conclude slow decorrelation .not all initial conditions correspond to lpp with weights restricted to .for example , tasep with _flat ( or periodic ) initial conditions _ corresponds to the case where only the sites of , for are initially occupied . for simplicity ,we focus on the case .then , the height function at time is a saw - tooth , see figure [ slow_dec_flat_tasep](a ) ( though asymptotically flat , from which the name ) .rotating by , it is the growth interface for half - line to point lpp where the measure is supported on points in such that and given by delta masses with independent exponential weights of rate .fluctuation theorems and limit process have been proved for several periodic initial conditions ( in was in discrete time , i.e. 
, with geometric weights ) .similarly tasep with _half - flat initial conditions _ is defined by letting particles start at .the corresponding last passage percolation model has non - zero weights for points such that and .the limit process for this model was identified in .theorem [ growth_thm ] applies to both of these model and proves slow decorrelation .this implies that the fluctuation process limits extend to general dimensional space - time directions .the characteristics lines are shown in figure [ slow_dec_flat_tasep](b ) .a variant of half flat initial conditions has particles starting at plus a few particles at positive even integers , with a different speed .this is known as _ two speed _ tasep and gives a complete description and proof of the process limit for these initial conditions .as with all of the other examples , this can be coupled with a lpp model and hence theorem [ growth_thm ] applies and prove slow decorrelation and enables us to extend these process limit results as well . the pasep is a generalization of tasep where , particles jump to the right - neighboring site with rate and to the left - neighboring site with rate ( always provided that the destination sites are empty ) . an important tool to study pasepis the _ basic coupling _ . through a graphical construction, one can realize and hence couple together every pasep ( with different initial conditions ) on the same probability space .even though pasep can not be mapped to a lpp model , it still has the same super - additivity properties necessary to prove a version of theorem [ growth_thm ] .the property comes in the form of _attractiveness_. that pasep is attractive means that if you start with two initial conditions corresponding to height functions for all , then for any future time , for all .we now briefly review this graphical construction . above every integer draw a half - infinite time ladder .fix ( and hence ) and for each ladder place right and left horizontal arrows independently at poisson points with rates and respectively .this is the common environment in which all initial conditions may be coupled .particles move upwards in time until they encounter an arrow leaving their ladder .they try to follow this ladder , and hence hop one step , yet this move is excluded if there is already another particle on the neighboring ladder . that this graphical construction leads to attractiveness for the pasepis shown , for instance , in . in a series of three papers tracy and widom show that for step initial conditions with positive drift , pasep behaves asymptotically the same as tasep ( when speeded - up by ) . just as in tasepthe current or height function is of central interest . is defined as the number of particles which jumped from to minus the ones from to during ] .we write for the height function for this specified model , and for the height function for the pasep with step initial conditions .note that the generalizations of remark [ non_const_remark ] apply in this case too .[ asep_thm ] consider a velocity and a second variable .if there exist constants ( depending on and and the model ) : and non - negative ; ; ; and distributions and such that then we have slow decorrelation of the pasep height function at speed , in the direction given by and with scaling exponent , i.e. , for all , rather than the height function we focus on the current which is related via equation ( [ height_current ] ) . 
is equal to the current up to time , plus the current of particles which cross the space - time line from at time to at time .we consider a coupled system starting at time reset so as to appear to be in step initial conditions centered at position . by attractiveness of the basic coupling , the current across the space - time line from at time to at time for this `` step '' system will exceed that for the original system .denote by the current associated to the coupled `` step '' system and observe that , it is distributed as the current of an independent step initial condition pasep .thus , where . from this point onthe proof follows exactly as in the proof of theorem [ growth_thm ] . using the fluctuation results proved in , reviewed in , we find that the above hypotheses are satisfied for pasep with step initial conditions and also for step bernoulli initial conditions with and .the slow decorrelation directions are given by the characteristics just as in the case of tasep .these two sets of initial conditions are the only ones for which fluctuations theorems are presently known for pasep , but limit process theorems are not yet proven . as mentioned before , slow decorrelation for the ( continuous time , poisson point ) png model was proved previously in in the case of the _ png droplet _ and _ stationary png_. theorem [ growth_thm ] ( along with the necessary preexisting fluctuation theorems )gives an alternative proof of these results as well as the analogous result for _ flat png_. because of the minimality of the hypotheses of our theorem we may further prove slow decorrelation for the model of _ png with two ( constant ) external sources _ considered in .the way that png fits into the framework of our half - line to point lpp model is that one takes to be a poisson point process of specified intensity on some domain . for the png droplet , stationary png and png with two external sources ,we restrict the point process to and ( in the second and third cases ) augment the measure with additional one dimensional point process along the boundaries . for flat pngthe support of the point process is .the limit process for the png droplet for fixed time was proved in and for flat png was proved in for space - like paths .it was explained in that slow decorrelation implies that these limit processes extend to general dimensional space - time directions ( with time scaling for ) . 
as opposed to lpp onecan look to the minimum value of .this then goes by the name of directed _ first _ passage percolation and for simplicity we consider this only when we restrict our measure to being supported on a lattice .one may also consider undirected first passage percolation .theorem [ growth_thm ] can be adapted in a straightforward way for both of these models .the statement of the theorem remains identical up to replacing the last passage time variable with the first passage time .for the proof the only change is that the compensator now satisfies rather than .unfortunately no fluctuation theorems have been proved for first passage percolation , so all that we get is a criterion for slow decorrelation .we now briefly consider a lattice - based directed polymer models in dimension and note that just as in lpp , slow decorrelation can arise in these models .unfortunately , just as in first passage percolation , there are no fluctuation theorems proved for such polymers .recently , however , the order of fluctuations for a particular specialization of this model was proved in .it should be noted that while we focus on just one model , the methods used can be applied to other polymer models and in more than dimension ( for example line to point polymers ) .the model we consider is the point to point directed polymer . in this modelwe consider any directed , lattice path from to a point and assign it a gibbs weight where is known as the inverse temperature and where is the sum of weights ( which are independent ) along the path ( is the energy of the path ) . we define the partition function and free energy for a polymer from a point to as : it is expected that the free energy satisfies similar fluctuation theorems to those of lpp ( which is the limit of ) .[ polymer_thm ] consider a directed polymer model and consider a point and a direction .if there exist constants ( depending on and and the model weight distributions ) : and non - negative ; ; ; and distributions , such that then we have slow decorrelation of the point to point polymer at , in the direction and with scaling exponent , which is to say that for all , the direction for a given should correspond to the characteristic through that point .the proof of this criterion for slow decorrelation is identical to the proof for theorem [ growth_thm ] and follows from the computation below ( a result of super - additivity yet again ) : here and the argument is analogous to ( [ growth_compensator ] ) .
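For concreteness, the partition function and free energy used above can be written as follows; this is one common convention (paths starting at (1,1), free energy normalized by the inverse temperature), not a normalization fixed by the text:

\[
Z_{(M,N)}(\beta) \;=\; \sum_{\pi\,:\,(1,1)\nearrow(M,N)} e^{\beta E(\pi)},
\qquad
F_{(M,N)}(\beta) \;=\; \frac{1}{\beta}\,\log Z_{(M,N)}(\beta),
\qquad
E(\pi) \;=\; \sum_{(i,j)\in\pi} w_{i,j},
\]

so that F_{(M,N)}(\beta) \to G(M,N) as \beta\to\infty, recovering last passage percolation in the zero-temperature limit.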
There has been much success in describing the limiting spatial fluctuations of growth models in the Kardar-Parisi-Zhang (KPZ) universality class. A proper rescaling of time should introduce a non-trivial temporal dimension to these limiting fluctuations. In one dimension, the KPZ class has the dynamical scaling exponent z = 3/2; this means one should find a universal space-time limiting process under a scaling in which time is of order t, space of order t^{2/3} and fluctuations of order t^{1/3}, as t goes to infinity. In this paper we provide evidence for this belief. We prove that under certain hypotheses, growth models display temporal slow decorrelation: in the scalings above, the limiting spatial processes for times t and t + t^{\nu} are identical, for any \nu < 1. The hypotheses are known to be satisfied for certain last passage percolation models, the polynuclear growth model, and the totally/partially asymmetric simple exclusion process. Using slow decorrelation we may extend known fluctuation limit results to space-time regions where correlation functions are unknown. The approach we develop requires the minimal expected hypotheses for slow decorrelation to hold and provides a simple and intuitive proof which applies to a wide variety of models.
social learning has been a topic of central concern in economics during the last decades , as it is central to a wide range of socio - economic phenomena .consumers who want to choose among a given set of available products may seek the opinion of people they trust , in addition to the information they gather from prices and/or advertisement . andvoters who have to decide what candidate to support in an election , or citizens who have to take a stand on some issue of social relevance may rely on their contacts to form their opinion .ultimately , whether our societies take the right course of action on any given issue ( e.g. on climate change ) will hinge upon our ability to aggregate individual information that is largely disperse .thus , in particular , it must depend on the information diffusion mechanism by which agents learn from each other , and therefore on the underlying social network in which they are embedded .the significance of the conceptual challenges raised by these issues is made even more compelling by the booming advance in information and communication technologies , with its impact on the patterns of influence and communication , and on the way and speed in which we communicate .these key issues have attracted the interest of researchers in several fields .for example , the celebrated voter model " is a prototype of those simple mechanistic models that are very parsimonious in the description of individual behavior but allow for a full characterization of the collective behavior induced .the voter model embodies a situation where each agent switches to the opinion / state held by one of the randomly selected neighbors at some given rate , and raises the question of whether the population is able to reach consensus , i.e. a situation where all agents display the same state .the literature on consensus formation , as reviewed e.g. in refs . , has focused , in particular , on the role played by the structure of the underlying network in shaping the asymptotic behavior .one of the main insights obtained is that the higher is the effective dimensionality of the network the harder it is for conformity to obtain .consensus formation in social systems is closely related to the phenomenon of social learning .indeed , the latter can be regarded as a particular case of the former , when consensus is reached on some true " ( or objective ) state of the world , for example , given by an external signal impinging on the social dynamics . at the opposite end of the spectrum ,economists have stressed the _ micro - motives _ that underlie individual behavior and the assumption of rationality .they have also emphasized the importance of going beyond models of global interaction and/or bilateral random matching , accounting for some local structure ( modeled as a social network ) in the pattern of influence or communication among agents . this literature ( see ref . for an early survey ) has considered a number of quite different scenarios , ranging from those where agents just gather and refine information to contexts where , in addition , there is genuine strategic interaction among agents . despite the wide range of specific models considered , the literature largely conveys a striking conclusion : full social conformity is attained ( although not necessarily correct learning ) , irrespectively of the network architecture . 
on the other hand , to attain correct learning , one must require not only that the population be large but , in the limit , that no individual retain too much influence .the model studied in this paper displays some similarities to , as well as crucial differences with , those outlined above . to fix ideas , the model could be regarded as reflecting a situation where , despite the fact that new information keeps arriving throughout , the consequences of any decisioncan only be observed in the future .even more concretely , this could apply , for example , to the performance of a political candidate , the health consequences of consuming a particular good , or the severity of the problem of climate change , on all of which a flow of fresh information may be generated that is largely independent of agents evolving position on the issue .so , as in ref . , the agents receive an external signal ; however , the signal is noisy and it is confronted with the behavior displayed by neighbors . as in refs . , while the agents make and adjust their choices , they keep receiving noisy signals on what is the best action .in contrast , however , these signals are not associated to experimentation . in this respect, we share with ref . the assumption that agents arrival of information is not tailored to current choices . the problem , of course ,would become trivially uninteresting if agents either have unbounded memory or store information that is a sufficient statistic for the whole past ( e.g. updated beliefs in a bayesian setup ) .for , in this case , agents could eventually learn the best action by relying on their own information alone .this leads us to making the stylized assumption that the particular action currently adopted by each individual is the only trace " she ( and others ) keep of her past experience .thus her ensuing behavior can only be affected by the signal she receives and the range of behavior she observes ( i.e. , her own as well as her neighbors ) . under these conditions, it is natural to posit that if an agent receives a signal that suggests changing her current action , she will look for evidence supporting this change in the behavior she observes on the part of her neighbors . and then , only if a high enough fraction of these are adopting the alternative action , she will undertake the change .this , indeed , is the specific formulation of individual learning studied in the present paper , which is in the spirit of the many threshold models studied in the literature , such as . in the setup outlined, it is intuitive that the acceptance threshold " that agents require to abandon the status quo should play a key role in the overall dynamics . and , indeed , we find that its effect is very sharp .first , note the obvious fact that if the threshold is either very high or very low , social learning ( or even behavioral convergence ) can not possibly occur .for , in the first case ( a very high threshold ) , the initial social configuration must remain frozen , while in the second case ( a very low threshold ) , the social process would enter into a state of persistent flux where agents keep changing their actions . in both of these polar situations , therefore , the fraction of agents choosing the good action would center around the probability with which the signal favors that action . outside of these two polar situations , we find that there is always an intermediate region where social learning does occur . 
and , within this region , learning emerges abruptly .specifically , there are upper and lower bounds ( dependent on ) such that , if the threshold lies within these bounds , _ all _ agents learn to play the good action while _ no _ learning _ at all _ occurs if the threshold is outside that range . thus the three aforementioned regions are separated by sharp boundaries .a similar abruptness in learning arises as one considers changes in . in this case, there is a lower bound on ( which depends on the threshold ) such that , again , we have a binary situation ( i.e. , no learning or a complete one ) if the informativeness of the signal is respectively below or above that bound . in a sense , these stark conclusions highlight the importance of the social dimension in the learning process .they show that , when matters / parameters are right , " the process of social learning builds upon itself to produce the sharp changes just outlined .as it turns out , this same qualitative behavior is encountered in a wide variety of different network contexts . to understand the essential features at work , we start our analysis by studying the simple case of a complete graph , where every agent is linked to any other agent .this context allows one to get a clear theoretical grasp of the phenomenon .in particular , it allows us to characterize analytically the three different regimes of social learning indicated : correct learning , frozen behavior , or persistent flux .we then show that this characterization also provides a good qualitative description of the situation when the interaction among agents is mediated via a sparse complex network .we consider , in particular , three paradigmatic classes of networks : regular two - dimensional lattices , poisson random networks , and barabsi - albert scale free networks .for all these cases , we conduct numerical simulations and find a pattern analogous to the one observed for the complete graph .the interesting additional observation is that local interaction _ enlarges _ ( in contrast to global interaction ) the region where social learning occurs .in fact , this positive effect is mitigated as the average degree of the network grows , suggesting a positive role for relatively limited / local connectivity in furthering social learning .there is large population of agents , , placed on a given undirected network , where we write if there is link between nodes and in .let time step be indexed discretely .each agent displays , at any time step , one of two alternative actions , which are not equivalent .one of them , say action , induces a higher ( expected ) payoff , but the agents do not know this . at each time step ,one randomly chosen agent receives a signal on the relative payoff of the two actions .this signal , which is independent across time and agents , is only partially informative .specifically , it provides the correct information ( i.e. , action 1 is best " ) with probability , while it delivers the opposite information with the complementary probability .if agent s previous action does not coincide with the action suggested as best , she considers whether changing to the latter .we assume that she chooses ( thus making ) if , and only if , the fraction of neighbors in who chose at exceeds a certain threshold . let this ( common ) threshold be denoted by ] stand for the fraction of agents choosing action at time . 
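A minimal simulation sketch of this updating rule is given below, assuming an undirected network supplied as an adjacency list. The parameter names (p for the probability that the signal points to the good action, T for the acceptance threshold) follow the model description; everything else (initialization, seeding, return value) is an illustrative choice.

import random

def simulate(neighbors, p, T, steps, seed=0):
    # neighbors: list of lists, neighbors[i] = indices of agent i's neighbors
    # p: probability that the signal designates the good action (action 1)
    # T: threshold on the fraction of neighbors needed to abandon the status quo
    rng = random.Random(seed)
    n = len(neighbors)
    # At t = 0 each agent adopts the action suggested by her first signal.
    state = [1 if rng.random() < p else 0 for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)                       # one randomly chosen agent
        signal = 1 if rng.random() < p else 0      # noisy signal on the best action
        if signal != state[i] and neighbors[i]:
            frac = sum(state[j] == signal for j in neighbors[i]) / len(neighbors[i])
            if frac > T:                           # switch only with enough support
                state[i] = signal
    return sum(state) / n                          # fraction choosing the good action

Running this on a complete graph with the threshold in the intermediate region identified below should drive the returned fraction towards 1, while very low or very high thresholds leave it near p.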
in the limit of infinite population size ( ), the dynamics is given by : if while if .this equation is derived by considering the change in the fraction occurring in a time interval of time steps . for , for any finite , this increment converges , by the law of large numbers , to a constant given by the right hand side of eq .( [ mfeq ] ) times .the first term accounts for the number of agents initially with the right signal ( ) who receive the wrong signal ( with probability ) and adopt it , as the fraction of agents also adopting it is larger than the threshold ( ) .the second accounts for the opposite subprocess , whereby agents who receive the correct signal ( with probability ) switch to the correct action when the population supports it ( ) .we assume that , at time , each agent receives a signal and adopts the corresponding action .hence the initial condition for the dynamics above is .it is useful to divide the analysis into two cases : case i : : : in this case , it is straightforward to check that , it follows that correct social learning occurs iff .case ii : : : in this case , we find : , therefore , correct social learning occurs iff ., in a fully connected network of system size , and ( _ a _ ) ; ( _ b _ ) ; . ] combining both cases , we can simply state that , in the global interaction case , correct social learning , occurs if , and only if , that is , the threshold is within an intermediate region whose size grows with the probability , which captures the informativeness of the signal . however , there are other two phases : if , the system reaches the stationary solution ; while if , we have for all times , which means that the system stays in the initial condition .in fully connected networks for different system sizes for and .the continuous line corresponds to an exponential fit of the form , being a constant . ]now we explore whether the insights obtained from the infinite size limit of the global interaction case carry over to setups with a finite but large population , where agents are genuinely connected through a social network .first , we consider the benchmark case of global interaction ( i.e. , a completely connected network ). then , we turn to the case of local interaction and focus on three paradigmatic network setups : lattice networks , erds - rnyi ( poisson ) networks , and barabsi - albert ( scale - free ) networks . the results obtained on the completely connected network ( i.e. , the network where every pair of nodes is linked ) are in line with the theory presented in the previous section .the essential conclusions can be summarized through the phase diagram in the -space of parameters depicted in figure 1 . therewe represent the fraction of agents choosing action in the steady state for each parameter configuration , with the red color standing for a homogeneous situation with ( i.e. , all agents choosing action ) while the blue color codes for a situation where and therefore the two actions are equally present in the population .intermediate situations appear as a continuous color grading between these two polar configurations .we find that , depending on the quality of the external signal and the threshold , the system reaches configurations where either complete learning occurs ( ) or not ( ) . 
indeed , the observed asymptotic behavior is exactly as predicted by the analysis of the previous section and it displays the following three phases : * phase i : .the system reaches a stationary aggregate configuration where the nodes are continuously changing their state but the average fraction of those choosing action gravitates around the frequency with some fluctuations ( see figure 2_a _ ) .the magnitude of these fluctuations decreases with system size . *phase ii : .the system reaches the absorbing state where everyone adopts action .this is a situation where the whole population eventually learns that the correct choice is action ( see figure 2_b _ ) . *phase iii : .the system freezes in the initial state , so the fraction of agents choosing the correct action coincides with the fraction of those that received the corresponding signal at the start of the process ( see figure 2_c _ ) .it is worth noting that , while in phase i the theory predicts , any finite - size system must eventually reach an absorbing homogenous state due to fluctuations .thus , to understand the nature of the dynamics , we determine the average time that the system requires to reach such an absorbing state .as shown in figure 3 , grows exponentially with .this means that grows very fast with system size , and thus the coexistence predicted by the theory in phase i can be regarded as a good account of the situation even when is just moderately large .now assume that all nodes are placed on a _ regular boundariless lattice _ of dimension , endowed with the distance function given by .the social network is then constructed by establishing a link between every pair of agents lying at a lattice distance no higher than some pre - specified level .this defines the neighborhood of any agent , as given by . in this network , the degree ( i.e. the number of neighbors ) of any node is related to ; for instance , if we have .( ) .the colors represent the fraction of agents choosing action 1action ( from red , , to blue , ) .system size ; average over realizations . ]the behavior of the system is qualitatively similar to the case of a fully connected network .again we find three phases . in two of them, both actions coexist with respective frequencies and ( one phase is frozen and the other continuously fluctuating ) , while in another one the whole population converges to action . a global picture of the situation for the entire range of parameter valuesis shown in figure 4 , with the black diagonal lines in it defining the boundaries of the full - convergence region under global interaction . in comparison with the situation depicted in figure 1, we observe that the region in the -space where behavioral convergence obtains in the lattice network is broader than in the completely connected network .this indicates that restricted ( or local ) interaction facilitate social learning , in the sense of enlarging the range of conditions under which the behavior of the population converges to action . 
as a useful complement to the previous discussion, figure 5 illustrates the evolution of the spatial configuration for a typical simulation of the model in a lattice network , with different values of and .panels , and show the configurations of the system for a low value of at three different time steps : , and respectively .the evolution of the system displays a configuration analogous to the initial condition , both actions coexisting and evenly spreading throughout the network .this is a situation that leads to dynamics of the sort encountered in _phase i _ above .in contrast , panels , and correspond to a context with a high , which induces the same performance as in _phase iii_. it is worth emphasizing that although panels , and display a similar spatial pattern , they reflect very different dynamics , i.e. , continuous turnover in the first case , while static ( frozen initial conditions ) in the second case .finally , panels , and illustrate the dynamics for an intermediate value of , which leads to a behavior of the kind displayed in _phase ii_. specifically , these panels show that , as the system moves across the three time steps : , and , the system evolves , very quickly , toward a state where all agents converge to action . for different values of and .panels ( _ a - c _ ) : and time steps ( _ a _ ) , ( _ b _ ) and ( _ c _ ) . panels ( _ d - f _ ) : and time steps ( _ d _ ) , ( _ e _ ) and ( _ f _ ) . panels ( _ g - i _ ) : and time steps ( _ g _ ) , ( _ h _ ) and ( _ i _ ) .black color represents an agent using action , while white color represents action .the system size is . ]a lattice network is the simplest possible context where local interaction can be studied .it is , in particular , a regular network where every agent faces exactly symmetric conditions .it is therefore interesting to explore whether any deviation from this rigid framework can affect our former conclusions .this we do here by focusing on two of the canonical models studied in the network literature : the early model of erds and rnyi ( er ) and the more recent scale - free model introduced by barabsi and albert ( ba ) . both of them abandon the regularity displayed by the lattice network and contemplate a non - degenerate distribution of node degrees .the er random graph is characterized by a parameter , which is the connection probability of agents .it is assumed , specifically , that each possible link is established in a stochastically independent manner with probability .consequently , for any given node , its degree distribution determining the probability that its degree is is binomial , i.e. , , with an expected degree given by . in the simulations reported below , we have focused on networks with and . on the other hand , to build a ba network , we follow the procedure described in ref . . at each time step , a new node is added to the network and establishes links to existing nodes .the newcomer selects its neighbors randomly , with the probability of attaching to each of the existing nodes being proportional to their degree .it is well known that this procedure generates networks whose degree distribution follows a power law of the form , with . for our simulations , we have constructed ba networks using this procedure and a value of , leading to an average degree . .the colors represent the fraction of agents choosing action 1 ( from red , , to blue ) .system size , average over realizations . 
]the networks are constructed , therefore , so that they have the same average degree in both the er and ba contexts .it is important to emphasize , however , that the degree distributions obtained in each case are markedly different . while in the former case, the degree distribution induces an exponentially decaying probability for high - degree nodes , in the latter case it leads to fat tails " , i.e. associates significant probability to high - degree nodes .the results are illustrated in figure 6 .for the two alternative network topologies , the system displays qualitatively the same behavior found in the lattice network .that is , there are three distinct phases yielding distinct kinds of dynamic performance : convergence to action , frozen behavior , and persistent turnover .however , it is interesting to note that , compared with the case of global interaction , the convergence region ( which we labeled as phase ii before ) is significantly larger .this suggests that local ( i.e. limited ) connectivity facilitates social learning . why does limited connectivity extend the learning region ? intuitively , the reason is that it enhances the positive role in learning played by random fluctuations .such fluctuations are neglected , by construction , in the mean - field approximation and are also minimized when the whole population interacts globally .but , when interaction is local , those fluctuations will tend to de - stabilize the situation in both the constant flux and in the frozen phases at first , locally , but then also globally . to gain a more refined understanding of this issue ,let us try to assess the effect of local interaction on the likelihood that , at some random initial conditions , any given node faces a set of neighbors who favors a change of actions .this , of course , is just equal to the probability that the fraction of neighbors who display opposite behavior is higher than , the required threshold for change .thus , more generally , we want to focus on the conditional distribution densities and that specify , for an agent displaying actions and respectively , the probability density of finding a fraction of neighbors who adopt actions and , respectively .of course , these distributions must depend on the degree distribution of the network and , in particular , on its average degree . specifically , when the average degree of the network is large relative to population size ( thus we approximate a situation of global interaction ) those distributions must be highly concentrated around and respectively . instead , under lower connectivity ( and genuine local interaction ) , the distributions and will tend to be quite disperse . next , let us understand what are the implications of each situation .in the first case , when the connectivity is high , the situation is essentially captured by a mean - field approximation , and thus the induced dynamics must be well described by the global interaction case ( in particular , as it concerns the size of the convergence region ) .in contrast , when the connectivity is low and the distributions and are disperse , a significant deviation from the mean - field theory is introduced . in fact , the nature of this deviation is different depending on the level of the threshold .if it is low , and thus action turnover high , it mitigates such turnover by increasing the probability that the fraction of neighbors with opposite behavior lie below . 
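The connectivity argument above can be checked numerically: for a random initial condition in which a fraction p of the agents holds the correct action, the dispersion of the fraction of opposite-action neighbors seen by each node shrinks as the average degree grows. The sketch below does this for Erdős–Rényi and Barabási–Albert graphs of matched mean degree; all parameter values are illustrative and the construction is ours, not the authors' simulation code.

```python
# Sketch relating connectivity to the dispersion of the neighbor-fraction
# distributions discussed above. For a random initial condition in which a
# fraction p of the agents starts on the correct action, compute, for every
# node, the fraction of its neighbors holding the opposite action, and compare
# Erdos-Renyi and Barabasi-Albert graphs of matched mean degree.
import numpy as np
import networkx as nx

def opposite_neighbor_fractions(G, p=0.7, seed=0):
    rng = np.random.default_rng(seed)
    state = {v: int(rng.random() < p) for v in G}
    fracs = []
    for v in G:
        nb = list(G.neighbors(v))
        if nb:
            fracs.append(np.mean([state[u] != state[v] for u in nb]))
    return np.array(fracs)

n = 2000
for k_mean in (4, 16, 256):                       # local -> (almost) global
    G_er = nx.erdos_renyi_graph(n, k_mean / (n - 1), seed=1)
    G_ba = nx.barabasi_albert_graph(n, max(1, k_mean // 2), seed=1)
    for name, G in (("ER", G_er), ("BA", G_ba)):
        f = opposite_neighbor_fractions(G)
        print(f"{name}, <k>~{k_mean}: mean={f.mean():.2f}, std={f.std():.2f}")
# The spread shrinks as the degree grows, recovering the concentrated,
# mean-field-like distributions; sparse graphs give the broad distributions
# that allow local nucleation of the correct action.
```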
instead, if is high and action change is difficult , it renders it easier by increasing the probability that the fraction of neighbors with opposite behavior lies above .thus , in both cases it works against the forces that hamper social learning and thus improves the chances that it occurs .more precisely , the above considerations are illustrated in figure 7 for a lattice network . therewe plot the distributions for different levels of connectivity and parameter values and recall that these values correspond to phase i ( with high turnover ) in a _ fully connected _ network .consider first the situation that arises for values of i.e. low connectivity relative to the size of the system .then we find that , among the nodes that are adopting action , attributes a significant probability mass to those agents whose fraction of neighbors choosing action is below the threshold required to change ( as marked by the vertical dashed line ) .such nodes , therefore , will not change their action . and ,as explained , this has the beneficial effect of limiting the extent of action turnover as compared with the global interaction setup . on the other hand, the inset of figure 7 shows that , among the nodes that are adopting action , the distribution associates a large probability mass to those agents whose fraction of neighbors choosing the opposite action is above .this ensures that there is a large enough flow from action to action . in conjunction , these two considerations lead to a situation that allows , first , for some limited nucleation around action to take place , followed by the ensuing spread of this action across the whole system ( figure 7(_b - d _ ) ) .let us now reconsider the former line of reasoning when is large in particular , take the case depicted in figure 7 .then , the corresponding distribution is highly concentrated around , essentially all its probability mass associated to values that lie above .this means that the induced dynamics must be similar to that resulting from the complete - network setups , and thus too - fast turnover in action choice prevents the attainment of social learning .clearly , social learning would also fail to occur for such high value of if the threshold were large . in this case , however , the problem would be that the highly concentrated distributions and would have most of their probability mass lying below the threshold .this , in turn , would lead to the freezing of the initial conditions , which again is the behavior encountered for a complete network . that a node using action has a fraction of neighbor nodes with action , computed on a two - dimensional lattice for , , , and a completely connected network ( from the broadest to the narrowest probability density distribution ) .[ _ inset _ : ( black , continuous ) and ( red , dotted ) for . ] time evolution of the probability densities ( black ) and ( red ) in a two - dimensional lattice with for ( _ b _ ) , ( _ c _ ) 5 and ( _ d _ ) 10 . for all panels ,the dashed line indicates the threshold ; parameter values : system size is , , and . ]the paper has studied a simple model of social learning with the following features .recurrently , agents receive an external ( informative ) signal on the relative merits of two actions . 
and , in that event , they switch to the action supported by the signal if , and only if , they find support for it among their peers - specifically , iff the fraction of these choosing that action lies above a certain threshold .given the quality of the signal , correct social learning occurs iff the threshold is within some intermediate region , i.e. neither too high nor too low . for , if it is too high , the situation freezes at the configuration shaped at the beginning of the process ; and if it is too low , the social dynamics enters into a process of continuous action turnover .a key conclusion is that social learning is a dichotomic phenomenon , i.e. it either occurs completely or not at all , depending on whether the threshold lies within or outside the aforementioned region .these same qualitative conclusions are obtained , analytically , in the case of global interaction which corresponds to a mean - field version of the model as well as , numerically , in a wide range of social networks : complete graphs , regular lattices , poisson random networks , and barabsi - albert scale - free networks .however , the size of the parameter region where social learning occurs depends on the pattern of social interaction . in general ,an interesting finding is that learning is enhanced ( i.e. the size of the region enlarged ) the less widespread is such interaction .this happens because genuinely local interaction favors a process of spatial nucleation and consolidation around the correct action , which can then spread to the whole population . in sum ,a central point that transpires from our work is that , in contrast to what most of the received socio - economic literature suggests , social learning is hardly a forgone conclusion .this , of course , is in line with the common wisdom that , paraphrasing a usual phrase , crowds are not always wise . in our threshold framework, this insight is robust to the topology or density of social interaction .furthermore , our results highlight the importance of identifying the information diffusion mechanism , and the local sampling of the population provided by the social network .but future research should explore whether it is also robust to a number of important extensions .just to mention a few , these should include ( a ) interagent heterogeneity e.g. in their individual thresholds for change ; ( b ) different behavioral rules e.g. payoff - based imitation ; or ( c ) the possibility that agents adjust their links , so that learning co - evolves with the social network .
social learning is defined as the ability of a population to aggregate information, a process which must crucially depend on the mechanisms of social interaction. consumers choosing which product to buy, or voters deciding which option to take with respect to an important issue, typically weigh external signals against the information gathered from their contacts. received economic models typically predict that correct social learning occurs in large populations unless some individuals display unbounded influence. we challenge this conclusion by showing that an intuitive threshold process of individual adjustment does not always lead to such social learning. we find, specifically, that three generic regimes exist, and only in one of them, where the threshold lies within a suitable intermediate range, does the population learn the correct information. in the other two, where the threshold is either too high or too low, the system either freezes or enters into persistent flux, respectively. these regimes are generally observed in different social networks (both complex and regular), but limited interaction is found to promote correct learning by enlarging the parameter region where it occurs.
clock synchronization in wireless sensor networks ( wsn ) is a critical component in data fusion and duty cycling , and has gained widespread interest in recent years .most of the current methods consider sensor networks exchanging time stamps based on the time at their respective clocks . in a _ two - way _ timing exchange process ,adjacent nodes aim to achieve pairwise synchronization by communicating their timing information with each other . after a round of messages ,each node tries to estimate its own clock parameters .a representative protocol of this class is the timing - sync protocol for sensor networks ( tpsns ) which uses this strategy in two phases to synchronize clocks in a network .the clock synchronization problem in a wsn offers a natural statistical signal processing framework .assuming an exponential delay distribution , several estimators were proposed in .it was argued that when the propagation delay is unknown , the maximum likelihood ( ml ) estimator for the clock offset is not unique .however , it was shown in that the ml estimator of does exist uniquely for the case of unknown .the performance of these estimators was compared with benchmark estimation bounds in .a common theme in the aforementioned contributions is that the effect of possible time variations in clock offset , arising from imperfect oscillators , is not incorporated and hence , they entail frequent re - synchronization requirements . in this work , assuming an exponential distribution for the network delays , a closed form solution to clock offset estimation is presented by considering the clock offset as a random gauss - markov process .bayesian inference is performed using factor graphs and the max - product algorithm .by assuming that the respective clocks of a sender node and a receiver node are related by at real time , the two - way timing message exchange model at the instant can be represented as where represents the propagation delay , assumed symmetric in both directions , and is offset of the clock at node relative to the clock at node .the network delays , and , are the independent exponential random variables . by further defining and , the model incan be written as for .the imperfections introduced by environmental conditions in the quartz oscillator in sensor nodes result in a time - varying clock offset between nodes in a wsn . in order to sufficiently capture these temporal variations , the parameters and assumed to evolve through a gauss - markov process given by where and are such that .the goal is to determine precise estimates of and in the bayesian framework using observations .an estimate of can , in turn , be obtained as posterior pdf can be expressed as where uniform priors and are assumed .define , , , , where the likelihood functions are given by the factor graph representation of the posterior pdf is shown in fig .[ fig : fact ] .the factor graph is cycle - free and inference by message passing is indeed optimal .in addition , the two factor graphs shown in fig . [ fig : fact ] have a similar structure and hence , message computations will only be shown for the estimate .clearly , similar expressions will apply to .the message updates in factor graph using max - product can be computed as follows which can be rearranged as where let be the unconstrained maximizer of the exponent in the objective function above , i.e. , this implies that if , then the estimation problem is solved , since . 
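Before continuing with the message-passing derivation, a small sketch of the synthetic observation model may help. The exact symbols are garbled above, so the code assumes the standard two-way exchange form U_k = d + theta_k + X_k and V_k = d - theta_k + Y_k with i.i.d. exponential delays X_k, Y_k and a Gauss-Markov offset theta_k; the parameter values and the naive per-exchange estimate are purely illustrative and are not the paper's estimator.

```python
# Sketch of synthetic data for the two-way timing exchange with a time-varying
# clock offset. Assumed observation model (standard in the cited literature;
# exact notation may differ): U_k = d + theta_k + X_k and V_k = d - theta_k + Y_k,
# with X_k, Y_k i.i.d. exponential network delays and theta_k a Gauss-Markov
# process theta_k = theta_{k-1} + w_k, w_k ~ N(0, sigma_gm^2).
import numpy as np

rng = np.random.default_rng(0)
N, d, lam, sigma_gm = 50, 1.0, 2.0, 0.05   # illustrative parameters
theta = np.cumsum(np.concatenate(([0.3], sigma_gm * rng.standard_normal(N - 1))))
X = rng.exponential(1 / lam, N)
Y = rng.exponential(1 / lam, N)
U = d + theta + X
V = d - theta + Y

# A naive per-exchange estimate that ignores the random delay statistics:
theta_naive = (U - V) / 2
print("true theta_k (first 5):   ", np.round(theta[:5], 3))
print("naive estimate (first 5): ", np.round(theta_naive[:5], 3))
# The factor-graph / max-product estimator derived below sharpens this by
# exploiting the exponential delay distribution and the Gauss-Markov prior.
```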
however , if , the solution is .therefore , in general , we can write notice that depends on , which is undetermined at this stage .hence , we need to further traverse the chain backwards . assuming that , from can be plugged back in which after some simplification yields similarly the message from the factor to the variable node can be expressed as the message above can be compactly represented as where proceeding as before , the unconstrained maximizer of the objective function above is given by and the solution to the maximization problem is expressed as again, depends on and therefore , the solution demands another traversal backwards on the factor graph representation in fig .[ fig : fact ] . by plugging back in, it follows that which has a form similar to .it is clear that one can keep traversing back in the graph yielding messages similar to and . in general , for , we can write and using and with , it follows that similarly , by observing the form of and , it follows that the estimate can be obtained by maximizing . the estimate in can now be substituted in to yield , which can then be used to solve for .clearly , this chain of calculations can be continued using recursions and .+ define for real [ lem:1]numbers and , the function defined in satisfies proof : the constants , and are defined in and .the proof follows by noting that which implies that is a monotonically increasing function . + using the notation , it follows that where where follows from lemma [ lem:1 ] .the estimate can be expressed as hence , one can keep estimating at each stage using this strategy .note that the estimator only depends on functions of data and can be readily evaluated . for , define the estimate can , therefore , be compactly represented as by a similar reasoning , the estimate can be analogously expressed as and the factor graph based clock offset estimate ( fge ) is given by it only remains to calculate the functions of data in the expressions for and to determine the fge estimate . with the constants defined in, it follows that similarly it can be shown that and so on . using the constants defined in for , it can be shown that .this implies that . plugging this in yields similarly ,the estimate is given by and the estimate can be obtained using as as the gauss - markov system noise , yields which is the ml estimator proposed in . and .with and , fig .[ bayes_theta_eps ] shows the mse performance of and , compared with the bayesian chapman - robbins bound ( bchrb ) .it is clear that exhibits a better performance than by incorporating the effects of time variations in clock offset . as the variance of the gauss - markov model accumulates with the addition of more samples , the mse of gets worse .[ bayes_theta_vs_sigma_gm ] depicts the mse of in with .the horizontal line represents the mse of the ml estimator .it can be observed that the mse obtained by using the fge for estimating approaches the mse of the ml as .vs . ]the estimation of a possibly time - varying clock offset is studied using factor graphs . a closed form solution to the clock offset estimation problemis presented using a novel message passing strategy based on the max - product algorithm .this estimator shows a performance superior to the ml estimator proposed in by capturing the effects of time variations in the clock offset efficiently .1 i. f. akyildiz , w. su , y. sankarasubramanium , and e. cayirci , `` a survey on sensor networks , '' _ ieee commun . mag .40 , no . 8 , pp .102 - 114 , aug . 2002 .b. sadler and a. 
swami , `` synchronization in sensor networks : an overview , '' in _ proc .ieee military commun .( milcom 2006 ) _ , pp . 1 - 6 , octs. ganeriwal , r. kumar , and m.b .srivastava , `` timing - sync protocol for sensor networks , '' in _ proc .los angeles , ca , pp . 138 - 149 , nov . 2003 . y .-chaudhari , and e. serpedin , `` clock synchronization of wireless sensor networks , '' _ ieee signal process . mag .124 - 138 , jan . 2011 .h. s. abdel - ghaffar , `` analysis of synchronization algorithm with time - out control over networks with exponentially symmetric delays , '' _ ieee trans .1652 - 1661 , oct .d. r. jeske , `` on the maximum likelihood estimation of clock offset , '' _ ieee trans .1 , pp . 53 - 54 , jan . 2005 .noh , q. m. chaudhari , e. serpedin , and b. suter , `` novel clock phase offset and skew estimation using two - way timing message exchanges for wireless sensor networks , '' _ ieee trans .766 - 777 , apr .
the problem of clock offset estimation in a two - way timing exchange regime is considered when the likelihood function of the observation time stamps is exponentially distributed . in order to capture the imperfections in node oscillators , which render a time - varying nature to the clock offset , a novel bayesian approach to the clock offset estimation is proposed using a factor graph representation of the posterior density . message passing using the max - product algorithm yields a closed form expression for the bayesian inference problem . clock offset , factor graphs , message passing , max - product algorithm
an appealing physical idea of the weak measurement tomography johansen , lundeen1,lundeen2,elefante , salvail offers a possibility of reconstruction of the wave function in a single experimental setup that involves a specific system - pointer coupling ; the so - called direct state tomography ( dst ) . basically , the scheme consists in successive measurements of two complementary observables of the system , where only the first one is weakly coupled to the measurement apparatus lundeen1,lundeen2,elefante , salvail .recently , this procedure was generalized to arbitrary coupling strengths , and it was argued that the strong measurements expectably outperform the weak ones both in precision and accuracy . since in the framework of weak measurementsthe efficiency is traded for accuracy , the error estimation analysis becomes vital .typically , the experimental performance of dst in the case of weak measurements , is compared either with results of strong ( projective ) tomography .an alternative method of estimation of the fidelity of a reconstructed state was introduced in . however , non of the above - mentioned approaches analyzed the global intrinsic error estimation helstrom , englert , checos . in this letterwe show that conveniently reformulating the approach vallone as a mutually unbiased bases ( mub)-like reconstruction scheme in non - orthogonal bases one can carry out the mean square error ( mse ) analysis of dst , including the weak measurement limit , in the framework of measurement statistics .in particular , we exemplify on the single qubit case that non - orthogonal bases appear as effective projective states , in such a way that a weak coupling corresponds to projection into near - parallel bases .this allows us to reformulate the accuracy analysis in terms of measured probabilities . andthus , estimate the intrinsic statistical errors finding the minimum mse using the crmer - rao lower bound .following the general idea of dst we consider an unknown state of the system ( one qubit ) interacting with a pointer ( another qubit ) initially prepared in the eigenstate state of the pauli operator , according to ] and is the fisher matrix per trail , the likelihood . after straightforward but lengthy calculations ( see appendix ) we find that the lower bound for the estimation error per trial in terms of measured probabilities is given by .\label{error2}\end{aligned}\]]it is easy to see that at ( corresponding to ) the mean hilbert - schmidt distance for mubs is recovered .the lower bound ( [ error2 ] ) can still be averaged over the space of quantum states .we will consider pure and mixed states separately .let us first consider an arbitrary pure state with projections and on the basis , that can be taken as due to invariance of the averaging procedure under unitary transformations .it is straightforward to check that , the averaged , over the space of pure states , mse takes the form the double brackets mean averaging both over the sample and over the space of states . for , corresponding to the standard mub tomography , checos , while in the limit the lower bound of the mse diverges as , which qualitatively coincides with results of . 
in order to average over mixed states we use the eigenvalue distribution based on the bures metric , use of the spectral decomposition , where the eigenstates can be parametrized as ] we perform integration of ( [ error2 ] ) with the measure .the result of such integration can be found analytically in terms of special functions and studied in the limit cases . due to its cumbersome formwe do not present the explicit expression , but instead plot it in fig .[ error1qb ] . over a sample of random statesas a fucntion of : ( red ) dashed line for pure states , ( blue ) continuous line for mixed states .average error for sic - povm does not depend on and is represented as a constant ( red ) dashed line for pure states and ( blue ) continuous line for random mixed states . the vertical line at shows the bound where the .,width=302 ] in fig .[ error1qb ] we plot , where the average is taken for a sample of pure and mixed random states following the routine introduced in . for pure states, the plot of the square root of equation ( [ erroran ] ) perfectly coincides with the numerical results .the mixed states are produced according to the bures metric .as it is expected , the best estimation is obtained for mub tomography , with for pure states , and for mixed states .one can also clearly see that the stronger the measurements are , the smaller the estimation errors are .the performance of dst can be also compared with a tomographic scheme based on symmetric informationally complete positive operator valued measure ( sic - pomv ) measurements . for a single qubita set of projectors such that and span the density matrix the probabilities are the outcomes associated with measurement of the operator , .the corresponding mse lower bound has the form ( [ mse ] ) , where the components of the matrix are , , and the fisher matrix elements per trail are which leads to for pure states . in fig.[error1qb ] we plot for sic - povm tomography as ( red ) dashed constant line for pure states and as a ( blue ) continuous constant line for mixed states , produced according to the bures metric , obtaining in this case by averaging over randomly generated states .one can observe that dst outperforms sic - povm qubit tomography for , which is indicated in fig .[ error1qb ] as a vertical ( magenta ) dotted - dashed line .we have shown that the performance of the dst protocol can be analyzed in a similar way as in the standard projection - based reconstruction schemes . in the framework of our approachwe have been able to determine the estimation error for any measurement strength , including the weak measurement case . in addition , an explicit analytic form for the minimum square error have been found in the pure and mixed states .the proposed scheme can be extended to higher dimensions and composite many - particle systems .in this appendix we briefly deduce eq .( [ error2 ] ) . 
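Before the appendix, here is a sketch of the two samplers needed to reproduce the averages over random states reported above: Haar-random pure qubit states and mixed states drawn from the Bures measure. The Bures construction used below (a Ginibre matrix G and a Haar-random unitary U, with rho proportional to (I+U) G G† (I+U)†) is a standard recipe and is only assumed to match the routine cited in the text; the Hilbert-Schmidt distance is included since the error analysis is phrased in terms of it.

```python
# Sketch: sampling random single-qubit states for averaging the estimation
# error, i.e. Haar-random pure states and Bures-distributed mixed states.
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d=2):
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def random_pure_state(d=2):
    psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def random_bures_state(d=2):
    g = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    a = (np.eye(d) + haar_unitary(d)) @ g          # (I + U) G
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def hs_distance_sq(r1, r2):
    diff = r1 - r2
    return np.trace(diff @ diff.conj().T).real

print(hs_distance_sq(random_pure_state(), random_bures_state()))
```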
taking into account the overlaps , into ( [ e ] ) and ( reconstruccion ) one arrives to the fisher matrix ( per trial ) ( [ fisher ] )is obtained directly form the likelihood in particular , one has the two first terms correspond to the mub tomography , the third term appears due to dependence of the sum of probabilities in the non - orthogonal bases on , and the last term comes form the normalization factor .the main difference with the mub case consists in appearing elements in outside of the main diagonal , which is a consequence of the relation ( [ s ] ) : the non - orthogonal bases , the elements are similar to the mub case , normalized by the factor : the explicit forms of and into ( [ mse ] ) one obtains ( [ error2 ] ) .j. erhart , _ et al _ , nat* 8 * , 185 ( 2012 ) ; l.a .rozema , _ et al _ , phys .. lett . * 109 * , 100404 ( 2012 ) ; s .- y .baek , _ et al _ , sci . rep . * 3 * , 2221 ( 2013 ) ; m. ringbauer , _ et al _ , phys .lett . * 112 * , 020401 ( 2014 ) .miszczak , int . j. modc * 22 * , 897 - 918 ( 2011 ) ; j.a .miszczak , z. puchala , and p. gawron , _ qi : quantum information package for mathematica _ , http://zksi.iitis.pl/wiki/projects:mathematica-qi ( 2010 ) .
we show that reformulating the direct state tomography ( dst ) protocol in terms of projections into a set of non - orthogonal bases one can perform an accuracy analysis of dst in a similar way as in the standard projection - based reconstruction schemes . i.e. in terms of the hilbert - schmidt distance between estimated and true states . this allows us to determine the estimation error for any measurement strength , including the weak measurement case , and to obtain an explicit analytic form for the average minimum square errors .
metabolomics is the comprehensive study of all small organic compounds in a biological specimen .metabolite concentrations in body fluids such as urine , serum , and plasma have proven valuable in predicting disease onset and progression .metabolomic data can be generated by a variety of methods of which mass spectrometry and nmr spectroscopy are the most common . while its high sensitivity makes mass spectrometry the preferred method for discovery projects , the high reproducibility of nmr data is ideal for applications in precision medicine . nmr allows for the simultaneous detection of all proton - containing metabolites present at sufficient concentrations in biological specimens .furthermore , nmr signal volume scales linearly with concentration .the complete set of nmr signals acquired from a given specimen is called its metabolite `` fingerprint '' . due to differences in ph ,salt concentration , and/or temperature , relative displacement of signal positions occurs across spectra .binning schemes can compensate for this effect by splitting spectra into segments called bins and summing up signal volumes contained therein .equal - sized bins are commonly used , albeit other schemes such as adaptive binning have been suggested . binned fingerprint data is the typical starting point for subsequent multivariate data analysis including biomarker discovery and specimen classification using methods such as support vector machines , ridge , and lasso regression .metabolite fingerprints need to be scaled to a common unit .typical examples include mmol metabolite per ml plasma , mmol metabolite per mmol creatinine in urine , or the relative contribution of a bucket to the total spectral intensity of an nmr spectrum . in practicethis is achieved by dividing each spectrum by the unit defining quantity : the intensity of an nmr reference such as tsp , the intensity of creatinine , or the total spectral intensity .this scaling of the raw data defines the measurement , but it also serves a second purpose : scaling corrects for unwanted experimental and physiological variability in the raw spectra . correcting unwanted variability is usually called normalization .instrument performance can vary and so can patient - specific parameters like the fluid balance , which is affected by drinking , respiration , defecation , perspiration , or medication , thus altering urine metabolite concentrations without reflecting disease state .all scales have their pros and cons : the nmr reference can normalize for spectrometer performance but not for unwanted variability in urine density .choosing creatinine as a standard for urine assumes the absence of inter - individual differences in the production and renal excretion of creatinine .in fact , creatinine production and excretion is affected by sex , age , muscle mass , diet , pregnancy , and , most importantly , renal pathology .normalization to a constant total spectral intensity or spectral mapping to a reference spectrum assume that the total amount of metabolites is constant over time and across patients and that spectra are not contaminated by signals that do not represent metabolites .however , this is not always the case .in fact , for urinary specimens of patients suffering from proteinuria the additional protein signals greatly increase total spectral area , and scaling to a constant total intensity systematically underestimates metabolite abundances. 
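As a concrete illustration of the equal-width binning described above, the following sketch splits a toy spectrum into fixed-width buckets and sums the intensities falling into each bucket; the bin width, ppm range, and synthetic peaks are illustrative only.

```python
# Minimal sketch of equal-width binning ("bucketing") of a 1D NMR spectrum:
# the ppm axis is split into bins of fixed width and the signal intensities
# falling into each bin are summed. Width and range are illustrative.
import numpy as np

def bin_spectrum(ppm, intensity, lo=0.5, hi=9.5, width=0.01):
    edges = np.arange(lo, hi + width, width)
    binned, _ = np.histogram(ppm, bins=edges, weights=intensity)
    return edges, binned

# toy spectrum: three Lorentzian-like peaks on a fine ppm grid
ppm = np.arange(0.5, 9.5, 0.001)
intensity = sum(1.0 / (1 + ((ppm - c) / 0.01) ** 2) for c in (1.33, 3.03, 8.45))
edges, buckets = bin_spectrum(ppm, intensity)
print(buckets.shape, buckets.max())
```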
similarly , excessive glucose uptake , for example by a glucose infusion , leads to high total spectral intensities that are dominated by glucose and its metabolites .while the high values for these metabolites correctly reflect the metabolic state for these patients , their influence on the total spectral intensity leads to systematic underestimation of metabolites not related to glucose metabolism . in general , the preferred scaling protocol depends on the specific data set to be investigated .however , for some data sets it is not possible to use the same scale for all patients in the cohort .we will describe such data sets below . in this casedifferent protocols need to be used for different patients , introducing new challenges to data analysis . in this contribution, we first studied how the choice of scale affects statistical analysis , the selection of biomarkers , and patients diagnosis by these biomarkers .we report on two supervised metabolomics data analysis scenarios , namely urine and plasma biomarker discovery in 1d nmr metabolite fingerprints for the early detection of acute kidney injury onset after cardiac surgery . in both applications we tested metabolites for differential abundance using alternative scaling protocols .we observed pronounced disagreements between the lists of significantly differential metabolites depending on how the same data was scaled .more importantly , the different scalings led to inconsistencies in the classification of individual patients . in view of these observations ,reproducibility of metabolic studies is only possible if the exact same scaling protocols are used . to overcome this problem, we extended zero - sum regression , which has recently been demonstrated to be invariant under any rescaling of data , to logistic zero - sum regression and compared it to two standard methods for constructing multivariate signatures : lasso logistic regression and support vector machines in combination with _t_-score based feature filtering .unlike the latter methods , logistic zero - sum regression always identifies the same biomarkers regardless of the scaling method .consequently , prior data normalization may be omitted completely .we make logistic zero - sum regression available as an _ r _ package and as a high - performance computing software that can be downloaded at https://github.com/rehbergt/zerosum .the first data set comprised 1d nmr fingerprints of = 106 urine specimens , which had been collected from patients 24 h after cardiac surgery with cardiopulmonary bypass ( cpb ) use at the university clinic of erlangen . of these 106 ,34 were diagnosed with acute kidney injury ( aki ) 48 h after surgery .the challenge in this data is to define a urinary biomarker signature that allows for the early detection of aki onset ( acute kidney injury network ( akin ) stages 1 to 3 ) . the second data set consisted of 85 edta - plasma specimens , which had been collected 24 h post - op from a subcohort of the original cohort of 106 patients undergoing cardiac surgery with cpb use and which had been subjected to 10 kda cutoff filtration . 
in total , 33 patients out of these 85 patients were diagnosed with postoperative aki .again our goal is to detect biomarkers for an earlier detection of aki .a total of 400 of urine or edta - plasma ultrafiltrate was mixed with 200 of phosphate buffer , ph 7.4 , and 50 of 0.75% ( w ) 3-trimethylsilyl-2,2,3,3-tetradeuteropropionate ( tsp ) dissolved in deuterium oxide as the internal standard ( sigma - aldrich , taufkirchen , germany ) .nmr experiments were carried out on a 600 mhz bruker avance iii ( bruker biospin gmbh , rheinstetten , germany ) employing a triple resonance ( , , , lock ) cryogenic probe equipped with -gradients and an automatic cooled sample changer .for each sample , a 1d nmr spectrum was acquired employing a 1d nuclear overhauser enhancement spectroscopy ( noesy ) pulse sequence with solvent signal suppression by presaturation during relaxation and mixing time following established protocols .nmr signals were identified by comparison with reference spectra of pure compounds acquired under equal experimental conditions .the spectral region from 9.5 to .5 ppm of the 1d spectra was exported as even bins of 0.001 ppm width employing amix 3.9.13 ( bruker biospin ) .the data matrix was imported into the statistical analysis software _r _ version 3.3.2 . for the urinary spectra , the region 6.5 4.5 ppm , which contains the broad urea and water signals ,was excluded prior to further analysis . for plasma spectra , the region 6.2 4.6 ppm , containing the urea and remaining water signals ,was removed prior to analysis .in addition , the regions 3.82 3.76 ppm , 3.68 3.52 ppm , 3.23 3.2 ppm , and 0.75 0.72 ppm , corresponding to filter residues and free edta , were excluded prior to classification for the plasma specimens .we compare our scaling- and normalization - invariant approach to standard analysis strategies that were applied to data preprocessed by four state - of - the - art normalization protocols .for creatinine normalization of urinary data we divided each bucket intensity by the summed intensities of the creatinine reference region ranging from 3.055 to 3.013 ppm of the corresponding nmr spectrum . when scaling to the total spectral intensity we summed over all signal intensities from 9.5 to 0.5 ppm after exclusion of the water and urea signals and divided each bucket intensity by this sum .when scaling to the signal intensity of the nmr reference compound , in our case tsp , we summed up the intensities of the tsp buckets ranging from .025 to 0.025 ppm and divided each bucket intensity by this sum .this normalization method corrects for differences in spectrometer performance , but not for differences in fluid intake .finally , we also evaluated probabilistic quotient normalization ( pqn ) .pqn follows the rationale that changes in concentration of one or a few metabolites affect only small segments of the spectra , whereas specimen dilution , for example due to differences in fluid intake in case of urinary specimens , influences all spectral signals simultaneously .following we first normalized each spectrum to its total spectral intensity .then , we took the median across all these spectra as the reference spectrum and calculated the ratio of all bucket intensities between raw and reference spectra .the median of these ratios in a sample was our final division factor . for subsequent analysis , only the spectral region between 9.5 and 0.5ppm was taken into account . 
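The four division factors described above translate directly into code. The sketch below assumes a bucket table X (specimens in rows, buckets in columns) with bucket centers in ppm, that the excluded water/urea (and plasma filter/EDTA) regions have already been removed, and that the TSP region runs from -0.025 to 0.025 ppm (the sign of the lower limit is garbled in the text).

```python
# Sketch of the four normalization protocols described above, applied to a
# bucket table X (rows = specimens, columns = buckets) with centers in `ppm`.
import numpy as np

def region_sum(X, ppm, lo, hi):
    mask = (ppm >= lo) & (ppm <= hi)
    return X[:, mask].sum(axis=1)

def normalize(X, ppm, method="pqn"):
    if method == "creatinine":
        factor = region_sum(X, ppm, 3.013, 3.055)       # creatinine reference region
    elif method == "tsp":
        factor = region_sum(X, ppm, -0.025, 0.025)      # assumed TSP region
    elif method == "total":
        factor = region_sum(X, ppm, 0.5, 9.5)           # excluded regions already removed
    elif method == "pqn":
        Xt = X / region_sum(X, ppm, 0.5, 9.5)[:, None]  # step 1: total-area scaling
        ref = np.median(Xt, axis=0)                     # step 2: median reference spectrum
        ratios = Xt[:, ref > 0] / ref[ref > 0]          # step 3: bucket-wise ratios
        return Xt / np.median(ratios, axis=1)[:, None]  # step 4: divide by the median ratio
    else:
        raise ValueError(method)
    return X / factor[:, None]

# usage (illustrative): X_creat = normalize(X, ppm, method="creatinine")
```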
to compensate for slight shifts in signal positions across spectra due to small variations in sample ph , salt concentration , and/or temperature , bucket intensities across ten bucketswere fused together in one bucket of 0.01 ppm width by summing up the individual bucket intensities .the resulting intensities were transformed to correct for heteroscedasticity , and all subsequent data analysis was performed with these values .we compared logistic zero - sum regression to two classification algorithms for normalized data , namely a support vector machine employing a linear kernel function combined with univariate feature filtering ( f - svm ) and the least absolute shrinkage and selection operator ( lasso ) logistic regression . in the lasso logistic regression and the zero - sum models , the penalizing parameter that balancesthe bias variance trade - off was optimized in an internal cross - validation .lasso models were trained utilizing the _ r _ package _ glmnet _ .the zero - sum models were trained by employing our _ zerosum _ software , which is implemented in two versions : first , a version optimized for high - performance computing ( hpc ) , and second , an _ r _package for ordinary desktop computers .we utilized the _ zerosum _ hpc implementation throughout the article .the parameter induces sparseness of the models and circumvents over - fitting for lasso and zero - sum models . similarly , we optimized the parameters of f - svm .this was done in a two - fold nested cross - validation , i.e. , in each cross - validation step we screened for the best feature threshold , allowing for included features , and the matching best cost parameter , which was screened in . for training the svm we utilized the _r _ package _metabolic biomarkers can be metabolites or just spectral features .moreover , they can be identified as stand - alone predictors or as part of a multivariate biomarker signature . before we focus on signatures we study the effect of scales on stand - alone biomarkers .more precisely , we focus on spectral buckets with differential intensities between two classes of samples , i.e. , from patients who developed acute kidney injury after cardiac surgery versus from those that did not .bin intensities were normalized to ( a ) a constant total spectral intensity , ( b ) a constant creatinine intensity , ( c ) a median reference spectrum employing the most probable quotient and ( d ) a constant intensity of the nmr reference .for scoring normalized bin intensities as potential biomarkers we used benjamini - hochberg ( b / h ) corrected -values from the moderated _ t_-statistics implemented in the _ r _ package limma and fold changes between the classes .figure [ t - test ] shows plots of spectral bin positions against (-values ) ( a - d ) and fold changes ( e - h ) between aki and non - aki urine specimens . from left to right the plots correspond to ( a , e ) normalization by total spectral intensity , ( b , f ) normalization to creatinine , ( c , g ) pqn , and ( d , h ) normalization to tsp .the red line in plots ( a - d ) marks a significance level of 0.01 , corresponding to a false discovery rate ( fdr ) below 1 .significant buckets are plotted in red in all eight plots . 
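The analyses above were run with R packages (limma, glmnet, e1071, and the authors' zerosum package); purely as an illustration of the nested cross-validation structure, here is a hedged Python analog for the lasso models, with an inner 10-fold CV selecting the regularization strength and an outer leave-one-out loop producing out-of-sample class probabilities. The hyperparameter grid and solver choices are ours, not the published pipeline.

```python
# Hedged Python analog of the nested cross-validation used for the lasso
# models. X: log-transformed, normalized bucket table; y: AKI labels (0/1).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, LeaveOneOut, StratifiedKFold

def nested_cv_lasso(X, y):
    probs = np.zeros(len(y))
    for train, test in LeaveOneOut().split(X):
        inner = GridSearchCV(
            LogisticRegression(penalty="l1", solver="liblinear", max_iter=5000),
            {"C": np.logspace(-3, 2, 20)},            # illustrative grid
            cv=StratifiedKFold(10, shuffle=True, random_state=0),
            scoring="neg_log_loss",
        )
        inner.fit(X[train], y[train])
        probs[test] = inner.predict_proba(X[test])[:, 1]
    return probs   # out-of-sample AKI probabilities, e.g. for a ROC curve

# usage (illustrative): probs = nested_cv_lasso(X_norm_log, y)
```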
*normalization to total spectral intensity : * on this scale we observed almost exclusively negative fold changes indicating lower bucket intensities for aki in comparison to non - aki .242 features were significant with features corresponding to carnitine , 2-oxoglutaric acid , and glutamine ranking highest .supplementary table s1 ( a ) lists raw and b / h - adjusted -values as well as metabolite assignments of the top ten significant buckets .technical artifacts of the scale become apparent too : the spectral region from 4 to 3.5 ppm , highlighted as a blue band , hardly comprised any significant features .this region is dominated by signals from sugars such as d - mannitol , which had been used as a pre - filling material for the tubes of the cpb machine . as d - mannitol exhibits a rather large number of nmr signals ,as highlighted in an exemplary urine spectrum shown in figure [ aki_urin ] , it comprised between and of the total spectral areas . on a scale that makes the total spectral area the same across all samplesthese signals can not serve as biomarkers . however , all patients take up d - mannitol during surgery , and the amount of d - mannitol still found in urine 24 h past surgery is modulated by actual kidney function .d - mannitol entangles kidney function with the total spectral area .therefore , the spectral area can not be recommended for scaling biomarkers , at least not in this context .* normalization to creatinine : * for this scale we observed different analysis results . fold changes were predominantly positive indicating higher metabolite levels in aki patients .204 buckets were significant , and the top ten are listed in supplementary table s1 ( b ) .the biomarker ranked highest was tranexamic acid , which is given in cardiac surgery to prevent excessive blood loss .strikingly , the region between 4 and 3.5 ppm ( blue band ) containing the d - mannitol signals was highly significant on this scale. however , this scale confounds with aspects of kidney function . in response to kidney diseaseurinary creatinine excretion rates can be affected .thus , normalization to creatinine can also obscure other metabolite biomarkers if their excretion correlates with that of creatinine .* probabilistic quotient normalization : * on this scale we observed both positive and negative fold changes in almost equal numbers .only 89 features were now significant , and the top ten are listed in table s1 ( c ) , with carnitine and glutamine as the leading biomarkers .again , the blue shaded region now covers significant features , although the number is much lower than after creatinine scaling . due to the strong conceptual similarity between pqn and normalization to total spectral intensity, the d - mannitol artifact also compromises the use of pqn .* tsp normalization : * on this scale we did not detect any significant biomarkers .the scaling method corrects only for differences in spectrometer performance , not for changes in global metabolite concentration . due to large variability in urine density throughout the cohort, it can not be used in this context .we observed only one bucket at 3.715 ppm , identified as an overlap of propofol - glucuronide , broad protein signals , and tentatively d - glucuronic acid , which obtained a significant -value on three scales ( figure [ venn_aki_classif]a ) .the plasma data biomarker discovery was not consistent across scales either ( supplementary table s2 , figure s1 , and figure [ venn_aki_classif]b ) . 
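For completeness, the bucket-wise differential testing summarized above can be approximated as follows; the sketch uses a plain Welch t-test with Benjamini-Hochberg correction as a stand-in for limma's moderated t-statistics (which additionally shrink the per-bucket variances), so results will differ slightly from the published analysis.

```python
# Stand-in for the bucket-wise differential analysis: Welch t-test per bucket
# plus Benjamini-Hochberg adjustment. X: log-transformed, normalized buckets;
# y: AKI / non-AKI labels (1/0).
import numpy as np
from scipy import stats

def differential_buckets(X, y, alpha=0.01):
    p = np.array([stats.ttest_ind(X[y == 1, j], X[y == 0, j],
                                  equal_var=False).pvalue
                  for j in range(X.shape[1])])
    # Benjamini-Hochberg step-up adjustment
    order = np.argsort(p)
    ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    p_adj = np.empty_like(p)
    p_adj[order] = np.minimum(adj, 1.0)
    delta = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)  # mean difference on the log scale
    return p_adj, delta, np.where(p_adj < alpha)[0]

# usage (illustrative): p_adj, delta, significant = differential_buckets(X_log, y)
```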
scaling to total spectral area and scaling to tsp predominantly identified metabolites that accumulate in the blood of patients developing aki. one might explain this observation by a reduced glomerular filtration in these patients. however, the pqn data immediately challenged this interpretation, as it identified a large set of down-regulated metabolites in aki. creatinine normalization is not common for plasma metabolomics and was not investigated here. [figure caption: exemplary 1d nmr spectrum of a urine specimen collected 24 h after surgery from an aki patient; the lower histogram illustrates the common binning procedure.] [figure caption: number of significant buckets and their overlap between the different scaling methods for (a) the aki urine and (b) the aki plasma data set.] biomarkers can be combined into biomarker signatures in multivariate analysis. machine-learning algorithms are used to learn these signatures from training data. we tested two of these algorithms, namely a linear svm with t-score based feature filtering (f-svm) and standard logistic lasso regression. both methods implement linear signatures built as weighted sums over the normalized bucket intensities $x_{ij}/\gamma_i$, where $x_{ij}$ is the raw signal from bucket $j$ in patient $i$, $w_j$ the weight of this feature, and $\gamma_i$ the scaling factor used to normalize sample $i$. both methods were used to learn signatures from data normalized in four different ways, namely scaling by total spectral area, scaling to creatinine, pqn, and scaling to tsp. the performance of the algorithms was tested in cross-validation. figure [roc_aki]a shows roc curves of signatures that aim to predict aki from urinary fingerprints using f-svm signatures. the areas under the roc curves (auc-roc) are summarized in table [roc_table]. the higher this area, the better the prediction of patient outcome. performances ranged from an auc of 0.72 for tsp-scaled data to an auc of 0.83 for pqn data. in line with the results for stand-alone biomarkers, the f-svm algorithm picked different features across scaling methods. not a single bucket was chosen simultaneously for all four scales (figure [roc_aki]a, bottom row). figure [roc_aki]b and table [roc_table] show the corresponding results for standard lasso logistic regression. the lasso signatures performed slightly better and were more consistent. nevertheless, there is still a dependence of the chosen biomarkers on the scale. similar results were observed for the plasma data set (supplementary figure s2). [figure caption: the urinary aki data set: receiver operating characteristic (roc) curves for two classification approaches, (a) svm in combination with t-test based feature filtering and (b) lasso, after application of four different normalization strategies: scaling to total spectral area (red solid line), scaling to creatinine (blue dashed line), probabilistic quotient normalization (pqn) (green dotted line), and scaling to tsp (yellow dashed-dotted line); the bottom row shows the number of features included in the respective classification models in venn diagrams; the corresponding models were built by averaging over all models of the outer cv loop.] from a clinical perspective, varying biomarkers constitute only a minor problem, as long as they agree in their predictions of outcome.
however , they usually do not agree .for the urinary data set , lasso signatures yielded conflicting predictions in 16% of patients . for f - svm signatures ,the percentage increased to 30% .figure [ classification_summary_aki ] summarizes the predictions of aki onset after cardiac surgery patient by patient for the urinary data set .the row `` patient outcome '' shows patients in blue who did not develop aki ( akin stage 0 ) and in red those that developed aki .furthermore , the yellow dashed lines highlight patients with akin stage 2 and 3 , a more severe manifestation of kidney injury .the latter patients constitute the high - risk group where early detection of aki onset can save lives .the true outcomes 48 h after surgery are contrasted to the predictions 24 h earlier . shownare predictions of f - svm and lasso signatures for the four scaling methods .we observed that predictions frequently changed with scale and that signatures learned on data that was scaled to , e.g. , creatinine did not properly identify the highest risk group . for f -svm only the signature on pqn data identified all high - risk patients correctly .the lasso identified high - risk patients for total spectral area and pqn data correctly .we discuss some misclassifications in more detail .patients aki-35 and aki-106 developed severe aki after 48 h but were not predicted to do so by two of the f - svm signatures , namely those for creatinine and total spectral area scaled data .a key observation of this paper is that these misclassifications could be traced back to data normalization . in theory , the predictions could have been saved by readjusting the scaling of these fingerprints ( however , the necessary scaling factor is not evident _ a priori _ ) .figure [ prob_vs_gamma ] shows the prediction probabilities of these signatures on the -axis .if this probability was above 0.5 we predicted an onset of aki , otherwise we did not .the -axis shows possible multiplicative scale adjustments .a value of indicates the actual scale used for prediction .values of correspond to a down - scaling of all buckets by the factor , while values of correspond to an up - scaling by .the plot thus shows the probability of developing aki as a function of the scale - readjustment factor .the dashed lines correspond to f - svm and the solid lines to lasso signatures .colors indicate the underlying scaling methods of the signatures : total spectral area ( red ) , creatinine ( blue ) , pqn ( green ) , and tsp ( yellow ) . on the original scale ( )patient aki-35 shows prediction probabilities below 0.5 for f - svm - creatinine ( blue dashed line ) and f - svm - total - area ( red dashed line ) .let be the ratio of the creatinine signal ( from 3.055 to 3.013 ppm ) and the remaining spectral area after exclusion of the d - mannitol region ( 4.0 to 3.5 ppm ) . for aki-35 we observed , while the median over all aki predicted samples was . in line with this, up - scaling the fingerprint rescues the prediction for this patient ( blue dashed line in figure [ prob_vs_gamma]a ) .similarly , , the ratio between d - mannitol and the remaining spectral area , was , while the median over all aki predicted samples was 2.42 , which explains why the f - svm - total - area prediction could be rescued by a down - scaling of the fingerprint ( red dashed line in figure [ prob_vs_gamma]a ) .similar problems can be observed for patient aki-106 , and for the lasso - creatinine signature in patients aki-70 and aki-4 . 
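The scale-readjustment effect just described has a simple algebraic core: on the log scale, multiplying a whole fingerprint by a factor gamma shifts every feature by log(gamma), so the logit of a standard linear signature moves by (sum of the weights) * log(gamma). The sketch below illustrates this with synthetic weights and a synthetic fingerprint; it is not the trained f-SVM or lasso model.

```python
# Sketch of the scale-readjustment experiment: rescaling an entire fingerprint
# by gamma shifts every log-intensity by log(gamma), so the logit of a logistic
# signature changes by sum(w) * log(gamma). Weights and the example fingerprint
# are synthetic, for illustration only.
import numpy as np

def predict_prob(log_x, w, w0=0.0):
    return 1.0 / (1.0 + np.exp(-(w0 + log_x @ w)))

rng = np.random.default_rng(3)
w = rng.normal(0, 0.3, size=50)          # a generic (non-zero-sum) signature
log_x = rng.normal(0, 1, size=50)        # log-transformed bucket intensities

for gamma in (0.25, 0.5, 1.0, 2.0, 4.0):
    p = predict_prob(log_x + np.log(gamma), w)
    print(f"gamma = {gamma:>4}: predicted AKI probability = {p:.3f}")
# With sum(w) != 0 the prediction drifts with gamma and can cross the 0.5
# decision boundary, which is exactly the rescue/misclassification effect
# described for patients aki-35 and aki-106.
```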
for the plasma data, we observed inconsistent predictions for 19% (6%) of the patients for f-svm (lasso) (supplementary figure s3). here, no method identified the high-risk group completely, with best identification shown for lasso on pqn data. in summary, we observed inconsistencies in the selection of biomarkers and, more importantly, in the prediction of patient outcome across the four scaling methods. these inconsistencies were most pronounced for the filter-based feature selection in f-svm and affected urinary fingerprints more than plasma fingerprints. still, they could be observed for all analysis scenarios. for several patients a simple scale adjustment would have saved the prediction. the next section will show that zero-sum logistic regression resolves these inconsistencies fully and leads to more accurate predictions. [figure caption: the probability of aki prediction as a function of the scale readjustment factor (see text); the dashed lines correspond to f-svm signatures and the solid lines to lasso signatures; colors indicate the underlying scaling methods of the signatures: total spectral area (red), creatinine (blue), pqn (green), and tsp (yellow); the black dotted line gives the zero-sum predictions, which are independent of the readjustment factor.] zero-sum regression is a novel machine-learning algorithm that is insensitive to rescaling the data. it allows for a selection of biomarkers that does not depend on the units chosen. the classifications of patients that result from these signatures do not depend on any scaling of the data either. in fact, patients can be classified with spectral data that were not normalized at all.
for the reader s convenience we review the concept here in a nutshell : let be metabolomics data , where is the logarithm of the intensity for bucket in sample and the corresponding clinical response of patient .in regression analysis the data sets need to be normalized to a common unit .note that the data are on a logarithmic scale , therefore the scaling to a common unit becomes a shifting of the spectrum by some sample specific value .thus for normalized data the regression equation reads now note that the equation becomes independent of the normalization if and only if the regression coefficients sum up to zero , i.e. , this is the idea of zero - sum regression . in the machine - learning context of high - content data ,zero - sum regression can be combined with lasso or elastic - net regularization and shows predictive performances that were not compromised by the zero - sum constraint . inmany biomarker discovery challenges the response is not continuous but binary .we thus need to extend the concept of zero - sum regression from linear regression to classification .we do this by introducing logistic zero - sum regression . in standard logistic regressionthe log - likelihood of normalized data reads where we abbreviated . in this generalized linear modelthe log - likelihood again becomes independent of the normalization if the regression coefficients add up to zero .this statement also holds if we add the penalizing term , which corresponds to the elastic - net regularization penalty .the parameter is calibrated in cross - validation , while is usually fixed to a specific value in $ ] .here , corresponds to ridge regression and to the lasso .thus , logistic zero - sum regression amounts to finding coefficients that minimize for all applications throughout the article , we have chosen .as proposed by we use the quadratic approximation of eq .( [ loglilogist ] ) locally at the current parameters of the optimization algorithm : with is independent of and can thus be neglected .an efficient coordinate - descent algorithm for zero - sum regression can be constructed by incorporating the zero - sum constraint into the log - likelihood by substituting with . calculating the partial derivative of the resulting log - likelihood with respect to ( ) and solving for yields an update scheme for and : the update value for given by .after each update the approximation has to be renewed .since we update and simultaneously the number of updates scales quadratically with the number of features .therefore , the minimization problem ( [ loglizsum ] ) will be computationally demanding if becomes high - dimensional .we performed a nested cross - validation ( cv ) , where the inner cv uses 10 folds to determine a suitable regularization strength and the outer cv is a leave - one - out cv to evaluate the prediction accuracy of the resulting models .we are evaluating two different data sets , each within a leave - one - out cv , one data set with 4 and the other with 3 different normalizations , and therefore we have to do 107 + 86=686 independent inner cvs .each of these cvs is performed on an approximated regularization path of length 500 , however , the algorithm stops when overfitting occurs . in totalwe performed model fits .this number presents an obvious computational challenge . to avoid numerical uncertainties in the feature selection we used the very precise convergence criterion of , which additionally increases the computational effort . 
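As a complement to the formal description above, the sketch below fits a logistic model under the zero-sum constraint by eliminating one coefficient and handing the resulting smooth, ridge-penalized objective (i.e. alpha = 0) to a generic optimizer. This is only a minimal illustration: the analyses in this paper use alpha = 1 (lasso) together with the dedicated coordinate-descent algorithm, and the nested cross-validation of the regularization strength is omitted here.

```python
import numpy as np
from scipy.optimize import minimize

def fit_zero_sum_logistic(X, y, lam=1.0):
    """Ridge-penalized logistic regression with sum(beta) = 0.

    The constraint is enforced by parameterizing the last coefficient as minus
    the sum of the others.  X holds log-transformed bucket intensities, y holds
    0/1 outcomes.  A simplified sketch, not the elastic-net coordinate-descent
    algorithm described in the text.
    """
    n, p = X.shape

    def unpack(theta):
        beta0 = theta[0]
        free = theta[1:]
        beta = np.append(free, -free.sum())          # zero-sum by construction
        return beta0, beta

    def objective(theta):
        beta0, beta = unpack(theta)
        z = beta0 + X @ beta
        # negative log-likelihood of the logistic model plus ridge penalty
        nll = np.sum(np.logaddexp(0.0, z) - y * z)
        return nll + 0.5 * lam * np.sum(beta ** 2)

    theta0 = np.zeros(p)                             # intercept + (p-1) free coefficients
    res = minimize(objective, theta0, method="BFGS")
    return unpack(res.x)

# toy example: scores are unchanged when every sample is shifted by a constant
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 6))
y = (X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=40) > 0).astype(float)
beta0, beta = fit_zero_sum_logistic(X, y, lam=0.5)
shift = np.log(3.0)                                  # e.g. a different normalization
print(np.allclose(X @ beta, (X + shift) @ beta))     # True: shift-invariant scores
print(f"sum of coefficients: {beta.sum():.1e}")
```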
on a standard server( 2 intel xeon x5650 processors with 6 cores each ) , the complete calculation took about two days .we were able to bring the compute time down to 12 minutes by developing an hpc implementation of _ zerosum _ and executing it on 174 nodes of the supercomputer qpace 3 operated by the computational particle physics group .qpace 3 currently comprises 352 intel xeon phi 7210 ( `` knights landing '' ) processors , connected by an omnipath network .each processor contains 64 compute cores , and each of these compute cores contains two 512-bit wide vector units . to run efficiently on this machine ,the _ zerosum _ c code was extended to use avx512 vector intrinsics for the calculation of the coordinate - descent updates .openmp is used to parallelize the inner cv so that the data has to be stored in memory only once and can be accessed from all folds . additionally , we provide a new _ zerosum _ _ r _ package , which is a wrapper around the hpc version and can easily be used within _r _ package also includes functions for exporting and importing all necessary files for / from the hpc implementation . for users without access to hpc facilities, our package can be run on a regular workstation with reduced convergence precision and a shorter regularization path in less than one hour .receiver operating characteristic ( roc ) curves for logistic zero - sum regression , after application of four different normalization strategies , scaling to total spectral area ( red solid line ) , scaling to creatinine ( blue dashed line ) , probabilistic quotient normalization ( pqn ) ( green dotted line ) , and scaling to tsp ( yellow dashed - dotted line ) for ( a ) the urinary aki and ( b ) the plasma aki data set .the right column shows the number of features included in the respective classification models in venn diagrams .the corresponding models were built by averaging over all models of the outer cv loop ., title="fig : " ] receiver operating characteristic ( roc ) curves for logistic zero - sum regression , after application of four different normalization strategies , scaling to total spectral area ( red solid line ) , scaling to creatinine ( blue dashed line ) , probabilistic quotient normalization ( pqn ) ( green dotted line ) , and scaling to tsp ( yellow dashed - dotted line ) for ( a ) the urinary aki and ( b ) the plasma aki data set . the right column shows the number of features included in the respective classification models in venn diagrams .the corresponding models were built by averaging over all models of the outer cv loop ., title="fig : " ] logistic zero - sum regression is not affected by changes in scale . actually , it will always yield the same result regardless of the scaling method used .figure [ roc_zerosum ] gives the corresponding roc curves and venn diagrams for the four normalization strategies investigated .since zero - sum models are scale independent , this classification approach always identified the same biomarkers and assigned the same weights to them independently of the normalization method chosen .supplementary file 1 ( supplementary file 2 ) lists all nmr buckets and corresponding metabolites for the urinary ( plasma ) data set with regression weights not equal to zero in at least one classification - normalization approach . 
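For readers who want to reproduce the evaluation protocol, the following generic sketch shows the outer leave-one-out loop and the ROC/AUC computation on held-out predictions. A standard scikit-learn logistic regression stands in for the zero-sum model, the data are synthetic, and the inner 10-fold cross-validation of the regularization strength is omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_curve, roc_auc_score

def loo_probabilities(X, y, make_model):
    """Outer leave-one-out loop: refit on n-1 samples, predict the held-out one."""
    probs = np.empty(len(y))
    for train, test in LeaveOneOut().split(X):
        model = make_model()
        model.fit(X[train], y[train])
        probs[test] = model.predict_proba(X[test])[:, 1]
    return probs

# toy data standing in for one normalization of the fingerprint matrix
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=60) > 0).astype(int)

probs = loo_probabilities(X, y, lambda: LogisticRegression(C=1.0, max_iter=1000))
fpr, tpr, _ = roc_curve(y, probs)
print("ROC points:", len(fpr))
print("LOO AUC   :", round(roc_auc_score(y, probs), 3))
```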
againconsider the urinary aki data set .metabolites with large absolute regression weights ( ) in the zero - sum models included both endo- as well as exogenous compounds .the largest positive regression weight was assigned to a bucket at 3.285 ppm , which was identified as a superposition of myo - inositol , taurine , 4-hydroxy - propofol-4-oh - d - glucuronide , and tentatively to d - glucuronic acid as well as an unknown metabolite .other important buckets with large positive regression weights comprised propofol - glucuronide , 4-hydroxy - propofol-1-oh - d - glucuronide , isobutyrylglycine , broad protein signals , and probably 2-oxoisovaleric acid , indoxyl sulfate , as well as unidentified metabolites . buckets with large negative regression weights correspond to a superposition of carnitine and 4-hydroxy - propofol-4-oh - d - glucuronide , n - acetyl - l - glutamine , and probably 4-hydroxyphenylacetic acid , as well as unidentified metabolites .positive / negative regression coefficients indicate an up-/down - regulation of the corresponding metabolite in the aki in comparison to the non - aki group .consequently , we can report an up - regulation of exogenous compounds such as glucuronide conjugates of propofol , an anesthetic agent which had been administered during the cardiac surgery and is mainly metabolized by glucuronidation .elevated urinary levels of propofol conjugates in the aki group point to a delayed excretion of exogenous compounds caused by reduced glomerular filtration or prolonged administration .higher absolute concentrations of propofol - glucuronide ( -value ) have already been reported for the 24 h urine nmr fingerprints in .an up - regulation of carnitine in the non - aki group is indicated by its large negative regression weight .this observation has already been reported in , where an up - regulation of carnitine , whose main function is the transport of long - chain fatty acids into the mitochondria for subsequent beta - oxidation , in the non - aki group has been discussed as a successful protective response against ischemic injury . also , logistic zero - sum regression returned the same predictions for all patients regardless of the applied normalization strategy , summarized in figure [ classification_summary_aki ] , as well as in supplementary figure s3 . furthermore , the zero - sum constraint did not compromise predictive performance for the urinary and plasma aki data set , respectively ( table [ roc_table ] , figures [ roc_zerosum]a and [ roc_zerosum]b ) .auc values ranged among the highest for the urinary aki data set , and yielded the largest value among all three classification methods in combination with three different normalization strategies for the plasma aki data set .most importantly , zero - sum regression reliably identified patients that developed severe aki ( stage 2 and 3 , highlighted by yellow dashed area ). only lasso on pqn data gave competitive results .with respect to auc , zero - sum was still superior .moreover , a sample - specific rescaling has no effect on the corresponding zero - sum prediction probabilities , as illustrated in figure [ prob_vs_gamma ] by the black dotted lines .we first pointed out some intrinsic problems in metabolomics biomarker detection .the identification of biomarkers strongly depends on the method selected for scaling the reference profiles . 
for stand - alone biomarkers , accordance across scales can be as low as 0% .the same is true for multivariate biomarker signatures when filtering approaches like filtering by the _t_-statistics are used , while wrapping approaches like the lasso yield somewhat more consistent yet still not scale - independent signatures .more importantly , the prediction of a patient s outcome can change depending on the scaling method employed .in fact , for some cases we could attribute failing prediction to inappropriate scaling of the raw data . to overcome this problemwe suggest logistic zero - sum regression , which provides a completely scale - independent analysis .it always selects the same biomarkers , assigns the same weights to them , and predicts the same outcome no matter whether the raw data was scaled relative to total spectral area , creatinine , tsp , any other reference point , by pqn , or not at all .furthermore , zero - sum regression was among the best aki predictors both for urinary and plasma fingerprints . also , the number of chosen features did not change significantly compared to other signatures , allowing for transfer of zero - sum signatures into clinical practice .the interplay of statistical analysis and normalization protocols has been described by several authors . proposedstrategies to overcome these issues included the selection of the most `` robust '' normalization or the parallel application of several normalization strategies and a subsequent meta analysis of the individual results .clearly , these strategies deal with the problem , but unlike zero - sum regression do not solve it .the zero - sum signatures did not identify completely new biomarkers .in fact , most of the underlying metabolites of the signatures were described before .however , we believe that zero - sum signatures make better use of these biomarkers by combining and weighing them differently .predictions are no longer confined to a specific normalization protocol but hold in general .biological interpretation becomes easier as there are fewer user - dependent choices to make .it is a set of metabolites that hold predictive information and not their abundance relative to an arbitrarily chosen reference point .moreover , signatures can be validated across studies and tested in different labs even if scaling protocols do not match or if no scaling is applied at all .this fact has the potential to greatly enhance the reproducibility of clinical trials .the urinary aki data set is a good example for data where a proper normalization can not replace independence of normalization .we believe that this data set just can not be normalized properly : varying urine density precludes tsp normalization , the strong confounding of spectral areas by d - mannitol signals and by proteinuria eliminates both normalization to the total spectral area and pqn , while creatinine excretion is entangled with kidney function and varies strongly across patients , disqualifying it from use as a common reference .moreover , correcting for artifacts like d - mannitol requires that the existence of an artifact is known for a specific patient .interestingly , the lasso - tsp signature worked surprisingly well in spite of strong variability in urinary density across patients .coincidently , however , this signature had weights adding up to almost ( yet not exactly ) zero .lasso is an algorithm used in artificial intelligence .here it detected the intrinsic normalization problems with the tsp - normalization and `` intelligently 
'' decided for a scale - independent zero - sum type signature . our _ zerosum _ package can be used on a desktop computer for small data sets and reduced precision .large clinical studies with several thousand patients require the use of an hpc infrastructure .we here offer code that can be used for such studies . in conclusion , we provide a high - performance classification framework independent of prior data normalization , which reduces the number of user - adjustable parameters and should be ideally suited for the transfer of metabolic signatures across labs .furthermore , we expect that the results presented here also hold for metabolomics data generated by other methods such as mass spectrometry .the authors thank drs . gunnar schley , carsten willam , and kai - uwe eckardt for providing the urine and plasma specimens of the aki data sets .financial support : e : med initiative of the german ministry for education and research ( grant 031a428a ) and the german research foundation ( sfb / trr-55 ) .none .anderson , p. e. , mahle , d. a. , doom , t. e. , reo , n. v. , delraso , n. j. , raymer , m. l. , _ dynamic adaptive binning : an improved quantification technique for nmr spectroscopic data _ , metabolomics , 2011 , 7(2):179 - 190 .altenbuchinger , m. , rehberg , t. , zacharias , h. u. , oefner , p. j. , dettmer - wilde , k. , holler , e. , weber , d. , gessner , a. , hiergeist , a. , spang , r. , _ reference point insensitive molecular data analysis _ , bioinformatics , 2016 , doi:10.1093/bioinformatics / btw598 .dawiskiba , t. , deja , s. , mulak , a. , zbek , a. , jawie , e. , paweka , d. , banasik , m. , mastalerz - migas , a. , balcerzak , w. , kaliszewski , k. , skra , j. , bar , p. , korta , k. , pormaczuk , k. , szyber , p. , litarski , a. , mynarz , p. , _ serum and urine metabolomic fingerprinting in diagnostics of inflammatory bowel diseases _ , world journal of gastroenterology , 2014 , 20(1):163 - 174 .de meyer , t. , sinnaeve , d. , van gasse , b. , tsiporkova , e. , rietzschel , e. r. , de buyzere , m. l. , gillebert , t. c. , bekaert , s. , martins , j. c. , van criekinge , w. , _ nmr - based characterization of metabolic alterations in hypertension using an adaptive , intelligent binning algorithm _ , 2008 , analytical chemistry , 80(10):3783 - 3790 .dieterle , f. , ross , a. , schlotterbeck , g. , senn , h. , _ probabilistic quotient normalization as robust method to account for dilution of complex biological mixtures .application in 1h nmr metabonomics _, analytical chemistry , 2006 , 78(13):4281 - 90 .elliott , p. , posma , j. m. , chan , q. , garcia - perez , i. , wijeyesekera , a. , bictash , m. , ebbels , t. m. d. , ueshima , h. , zhao , l. , van horn , l. , daviglus , m. , stamler , j. , holmes , e. , nicholson , j. k. , _ urinary metabolic signatures of human adiposity _ , science translational medicine , 2015 , 7(285):285ra62 .gronwald , w. , klein , m. s. , kaspar , h. , fagerer , s. r. , nrnberger , n. , dettmer , k. , bertsch , t. , oefner , p. j. , _ urinary metabolite quantification employing 2d nmr spectroscopy _ , analytical chemistry , 2008 , 80(23):9288 - 9297 .gronwald , w. , klein , m. s. , zeltner , r. , schulze , b .-reinhold , s. w. , deutschmann , m. , immervoll , a .-k . , bger , c. a. , banas , b. , eckardt , k .- u ., oefner , p. j. , _ detection of autosomal dominant polycystic kidney disease by nmr spectroscopic fingerprinting of urine _ , kidney international , 2011 , 79:1244 - 1253 .hochrein , j. , klein , m. s. , zacharias , h. u. 
, li , j. , wijffels , g. , schirra , h. j. , spang , r. , oefner , p. j. , gronwald , w. , _ performance evaluation of algorithms for the classification of metabolic ^1^h nmr fingerprints _ , journal of proteome research , 2012 , 11:6242 - 6251 .hochrein , j. , zacharias , h. u. , taruttis , f. , samol , c. , engelmann , j. , spang , r. , oefner , p. j. , gronwald , w. , _ data normalization of - nmr metabolite fingerprinting datasets in the presence of unbalanced metabolite regulation _ , journal of proteome research , 2015 , 14(8):3217 - 3228 .keun , h. c. , ebbels , t. m. , antti , h. , bollard , m. e. , beckonert , o. , schlotterbeck , g. , senn , h. , niederhauser , u. , holmes , e. , lindon , j. c. , nicholson , j. k. , _ analytical reproducibility in 1h nmr - based metabonomic urinalysis _, chemical research in toxicology , 2002 , 15(11 ) , 1380 - 1386 .ross , a. , schlotterbeck , g. , dieterle , f. , senn , h. , _ nmr spectroscopy techniques for application to metabonomics _, in lindon , j. c. , nicholson , j. k. , holmes , e. , ( eds . ) , _ nmr spectroscopy techniques for application to metabonomics _ ,elsevier bv : amsterdam , the netherlands , 2007 , pp .55 - 112 .ritchie , m.e . ,phipson , b. , wu , d. , hu , y. , law , c.w . , shi , w. , smyth , g.k . ,_ limma powers differential expression analyses for rna - sequencing and microarray studies _ , 2015 , nucleic acids research 43(7 ) , e47 .saccenti , e. , _ correlation patterns in experimental data are affected by normalization procedures : consequences for data analysis and network inference _ , journal of proteome research , 2016 , doi 10.1021/acs.jproteome.6b00704 .viant , m. r. , lyeth , b. g. , miller , m. g. , berman , r. f. , _ an nmr metabolomic investigation of early metabolic disturbances following traumatic brain injury in a mammalian model _ ,nmr in biomedicine , 2005 , 18(8 ) , 507 - 516 .zacharias , h. u. , schley , g. , hochrein , j. , klein , m. s. , kberle , c. , eckardt , k .- u ., willam , c. , oefner , p. j. , gronwald , w. , _ analysis of human urine reveals metabolic changes related to the development of acute kidney injury following cardiac surgery _ , metabolomics , 2013 , 9(3):697 - 707 .zacharias , h. u. , hochrein , j. , klein , m. s. , samol , c. , oefner , p. j. , gronwald , w. , _ current experimental , bioinformatic and statistical methods used in nmr based metabolomics _, current metabolomics , 2013 , 1(3):253 - 268(16 ) .zacharias , h. u. , hochrein , j. , vogl , f. c. , schley , g. , mayer , f. , jeleazcov , c. , eckardt , k .- u ., willam , c. , oefner , p. j. , gronwald , w. , _ identification of plasma metabolites prognostic of acute kidney injury after cardiac surgery with cardiopulmonary bypass _ , journal of proteome research , 2015 , 14(7):2897 - 2905 .
* motivation : * metabolomics data is typically scaled to a common reference like a constant volume of body fluid , a constant creatinine level , or a constant area under the spectrum . such normalization of the data , however , may affect the selection of biomarkers and the biological interpretation of results in unforeseen ways . * results : * first , we study how the outcome of hypothesis tests for differential metabolite concentration is affected by the choice of scale . furthermore , we observe this interdependence also for different classification approaches . second , to overcome this problem and establish a scale - invariant biomarker discovery algorithm , we extend linear zero - sum regression to the logistic regression framework and show in two applications to nmr - based metabolomics data how this approach overcomes the scaling problem . * availability : * logistic zero - sum regression is available as an _ r _ package as well as a high - performance computing implementation that can be downloaded at https://github.com/rehbergt/zerosum .
difficult combinatorial landscapes are found in many important problems in physics , computing , and in common everyday life activities such as resource allocation and scheduling .for example , spin - glass systems give rise to such energy landscapes which are characterized by many local minima and high energy barriers between them .these landscapes generally show frustration , i.e. frozen disorder where the system is unable to relax into a state in which all constraints are satisfied . in completely different fields , such as combinatorial optimization ,similar hard problems also arise , for example the well - known _ traveling salesman problem _ and many others . in order to understand the reasons that make these problems difficult to optimize ,a number of model landscapes have been proposed .one of the simplest yet representative example is kauffman s family of landsdcapes .the family of landscapes is a problem - independent model for constructing multimodal landscapes that can gradually be tuned from smooth to rugged , where the term `` rugged '' is intuitively related to the degree of variability in the objective function value in neighboring positions in configuration space .the more rugged the landscape , the higher the number of local optima , and the landscape becomes correspondingly more difficult to search for the global optimum .the idea of an landscape is to have spins " or `` genes '' , each with two possible values , or .the model is a real stochastic function defined on binary strings of length , .the value of determines how many other spin values in the string influence a given spin .the value of is the average of the contributions of all the spins : by increasing the value of from 0 to , landscapes can be tuned from smooth to rugged . for contributions can be optimized independently which makes a simple additive function with a single maximum . at the other extreme when the landscape becomes completely random ,the probability of any given configuration of being the optimum is , and the expected number of local optima is .intermediate values of interpolate between these two cases and have a variable degree of `` epistasis '' , i.e. of spin ( or gene ) interaction .the variables that form the context of the fitness contribution of gene can be chosen according to different models .the two most widely studied models are the _ random neighborhood _ model , where the variables are chosen randomly according to a uniform distribution among the variables other than , and the _ adjacent neighborhood _ model , in which the variables are those closest to in a total ordering ( using periodic boundaries ) .no significant differences between the two models were found in terms of global properties of the respective families of landscapes , such as mean number of local optima or autocorrelation length .similarly , our preliminary studies on the characteristics of the landscape optima networks did not show noticeable differences between the two neighborhood models .therefore , we conducted our full study on the more general random model .the model is related to spin glasses , and more precisely to -spin models , where plays a role similar to . in spin glassesthe function analogous to is the energy and the stable states are the minima of the energy hypersurface . 
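A minimal sketch of the random-neighbourhood model described above: each gene receives k randomly chosen epistatic partners and a lookup table of i.i.d. uniform(0,1) contributions, and the fitness of a string is the average of the n per-gene contributions. The parameter values and the exhaustive search at the end are illustrative only.

```python
import itertools
import random

def make_nk_landscape(n, k, seed=0):
    """Random-neighbourhood NK model: gene i interacts with k random other genes.

    Each gene has a table of 2**(k+1) i.i.d. uniform(0,1) contributions, one per
    joint configuration of the gene and its k partners; the fitness of a string
    is the average of the n contributions.
    """
    rng = random.Random(seed)
    neighbours = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    tables = [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]

    def fitness(s):                       # s is a tuple of n bits
        total = 0.0
        for i in range(n):
            idx = s[i]
            for j in neighbours[i]:
                idx = (idx << 1) | s[j]   # index into gene i's contribution table
            total += tables[i][idx]
        return total / n

    return fitness

f = make_nk_landscape(n=10, k=2)
best = max(itertools.product((0, 1), repeat=10), key=f)   # exhaustive global optimum
print(best, round(f(best), 4))
```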
in this study we seek to provide fundamental new insights into the structural organization of the local optima in combinatorial landscapes , particularly into the connectivity of their basins of attraction .combinatorial landscapes can be seen as a graph whose vertices are the possible configurations .if two configurations can be transformed into each other by a suitable operator move , then we can trace an edge between them . the resulting graph , withan indication of the fitness at each vertex , is a representation of the given problem fitness landscape .a useful a simplification of the graphs for the energy landscapes of atomic clusters was introduced in .the idea consists in taking as vertices of the graph not all the possible configurations , but only those that correspond to energy minima . for atomic clusters these are well - known , at least for relatively small assemblages .two minima are considered connected , and thus an edge is traced between them , if the energy barrier separating them is sufficiently low . in this casethere is a transition state , meaning that the system can jump from one minimum to the other by thermal fluctuations going through a saddle point in the energy hyper - surface .the values of these activation energies are mostly known experimentally or can be determined by simulation . in this way, a network can be built which is called the `` inherent structure '' or `` inherent network '' in .we propose a network characterization of combinatorial fitness landscapes by adapting the notion of _ inherent networks _ described above .we use the family of landscapes as an example . in our casethe inherent network is the graph where the vertices are all the local maxima and the edges account for transition probabilities between maxima .we exhaustively extract such networks on representative small landscape instances , and perform a statistical characterisation of their properties .our analysis is inspired , in particular , by the work of on energy landscapes , and in general , by the field of complex networks .a related work can be found in , where the case of lattice polymer chains is studied . 
however , the notion of an edge there is very different , being related to moves that bring a given conformation into an allowed neighboring one .similar ideas have been put forward in physical chemistry to understand the thermodynamics and kinetics of complex biomolecules through the network study of their free - energy landscapes .it should also be noted that our approach is different from the barrier - tree representations of landscapes proposed by stadler et al .( see , for example , ) .the next section describes how combinatorial landscapes are mapped onto networks , and includes the relevant definitions and algorithms used in our study .the empirical analysis of our selected landscape instances is presented in the following two sections ; one devoted to the study of basins , and the other to the network statistical features .finally , we present our conclusions and ideas for future work .many natural and artificial systems can be modeled as networks .typical examples are communication systems ( the internet , the web , telephonic networks ) , transportation lines ( railway , airline routes ) , biological systems ( gene and protein interactions ) , and social interactions ( scientific co - authorship , friendships ) .it has been shown in recent years that many of these networks exhibit was has been called a _ small - world _topology , in which nodes are highly clustered yet the path length between them is small. additionally , in several of these networks the distribution of the number of neighbours ( the degree distribution ) is typically right - skewed with a heavy tail " , meaning that most of the nodes have less - than - average degree whilst a small fractions of hubs have a large number of connections .these topological features are very relevant since they impact strongly on networks properties such as their robustness and searchability . to model a physical energy landscape as a network , doye needed to decide first on a definition both of a state of the system and how two states were connected. the states and their connections will then provide the nodes and edges of the network . for systems with continuous degrees of freedom , this was achieved through the ` inherent structure ' mapping . in this mappingeach point in configuration space is associated with the minimum ( or ` inherent structure ' ) reached by following a steepest - descent path from that point .this mapping divides configurations into basins of attraction surrounding each minimum on the energy landscape .our goal is to adapt this idea to the context of combinatorial optimization . in our case, the vertexes of the graph can be straightforwardly defined as the local maxima of the landscape .these maxima are obtained exhaustively by running a best - improvement local search algorithm ( see fig . [ algohc ] ) from every configuration of the search space . the definition of the edges , however , is a much more delicate matter . 
in our initial attempt we considered that two maxima and were connected ( with an undirected and unweighed edge ) , if there exists at least one pair of direct neighbors solutions and , one in each basin of attraction ( and ) .we found empirically on small instances of landscapes , that such definition produced densely connected graphs , with very low ( ) average path length between nodes for all .therefore , apart from the already known increase in the number of optima with increasing , no other network property accounted for the increase in search difficulty .furthermore , a single pair of neighbours between adjacent basins , may not realistically account for actual basin transitions occurring when using common heuristic search algorithms .these considerations , motivated us to search for alternative definitions of the edges connecting local optima .in particular , we decided to associate weights to the edges that account for the transition probabilities between the nodes ( local optima ) .more details on the relevant algorithms and formal definitions are given below .* definition : * fitness landscape . +a landscape is a triplet where is a set of potential solutions i.e. a search space , , a neighborhood structure , is a function that assigns to every a set of neighbors , and is a fitness function that can be pictured as the _ height _ of the corresponding solutions . in our study , the search space is composed by binary strings of length , therefore its size is .the neighborhood is defined by the minimum possible move on a binary search space , that is , the 1-move or bit - flip operation . in consequence , for any given string of length , the neighborhood size is . the algorithm to determine the local optima andtherefore define the basins of attraction , is given below : choose initial solution choose such that * definition : * local optimum . + a local optimum is a solution such that , .the algorithm defines a mapping from the search space to the set of locally optimal solutions .+ * definition : * basin of attraction .+ the basin of attraction of a local optimum is the set .the size of the basin of attraction of a local optimum is the cardinality of .notice that for non - neutral fitness landscapes , as are landscapes , the basins of attraction as defined above , produce a partition of the configuration space .therefore , and , + * definition : * edge weight .+ for each pair of solutions and , let us define as the probability to pass from to with the given neighborhood structure . in the case of binary strings of size , and the neighborhood defined by the single bit - flip operation , there are neighbors for each solution , therefore : if , and + if , .we can now define the probability to pass from a solution to a solution belonging to the basin , as : notice that .thus , the total probability of going from basin to basin is the average over all of the transition probabilities to solutions : is the size of the basin .we are now prepared to define our ` inherent ' network or network of local optima . + * definition : * local optima network .+ the local optima network is the graph where the nodes are the local optima , and there is an edge with weight between two nodes and if .notice that since each maximum has its associated basin , also describes the interconnection of basins . 
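Following these definitions, the basins and the weighted local optima network can be extracted exhaustively for small instances, as sketched below: best-improvement hill climbing from every configuration yields the mapping to local optima, and the edge weights are the basin-to-basin transition probabilities averaged over the 1-bit-flip neighbourhood. A random fitness table stands in for an NK instance here; any fitness function on bit strings could be substituted.

```python
import itertools
import random
from collections import defaultdict

N = 10
rng = random.Random(3)
configs = list(itertools.product((0, 1), repeat=N))
fitness = {s: rng.random() for s in configs}      # stand-in for an NK fitness function

def neighbours(s):
    """All strings at Hamming distance 1 from s."""
    return [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(N)]

def hill_climb(s):
    """Best-improvement local search; returns the local optimum reached from s."""
    while True:
        best = max(neighbours(s), key=fitness.get)
        if fitness[best] <= fitness[s]:
            return s
        s = best

# basins: map every configuration to its local optimum
basin_of = {s: hill_climb(s) for s in configs}
basin_size = defaultdict(int)
for s, opt in basin_of.items():
    basin_size[opt] += 1

# weighted optima network: w[i][j] = average probability that a single bit flip
# applied to a configuration of basin i lands in basin j (self-loops included)
w = defaultdict(lambda: defaultdict(float))
for s, opt_i in basin_of.items():
    for t in neighbours(s):
        w[opt_i][basin_of[t]] += 1.0 / (N * basin_size[opt_i])

print("local optima :", len(basin_size))
print("largest basin:", max(basin_size.values()))
```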
according to our definition of edge weights, may be different than .thus , two weights are needed in general , and we have an oriented transition graph .the following two definitions are relevant to the discussion of the boundary of basins . *definition : * boundary of a basin of attraction .+ the _ boundary _ of a basin of attraction can be defined as the set of configurations within a basin that have at least one neighbor s solution in another basin . *definition : * interior of a basin of attraction .+ conversely , the _ interior _ of a basin is composed by the configurations that have all their neighbors in the same basin . formally , order to avoid sampling problems that could bias the results , we used the largest values of that can still be analyzed exhaustively with reasonable computational resources .we thus extracted the local optima networks of landscape instances with , and . for each pair of and values ,30 randomly generated instances were explored .therefore , the networks statistics reported below represent the average behaviour of 30 independent instances . besides the maxima network, it is useful to describe the associated basins of attraction as these play a key role in search algorithms .furthermore , some characteristics of the basins can be related to the optima network features .the notion of the basin of attraction of a local maximum has been presented before .we have exhaustively computed the size and number of all the basins of attraction for and and for all even values plus . in this section ,we analyze the basins of attraction from several points of view as described below . in fig .[ nbasins ] we plot the average size of the basin corresponding to the global maximum for and , and all values of studied .the trend is clear : the basin shrinks very quickly with increasing .this confirms that the higher the value , the more difficult for an stochastic search algorithm to locate the basin of attraction of the global optimum [ !ht ] , title="fig:",width=302 ] + [ !ht ] + [ tab : cumulsize ] & & & + 2 & & & + 4 & & & + 6 & & & + 8 & & & + 10 & & & + 12 & & & + 14 & & & + 15 & & & + + 2 & & & + 4 & & & + 6 & & & + 8 & & & + 10 & & & + 12 & & & + 14 & & & + 16 & & & + 17 & & & + fig .[ bas-18-size ] shows the cumulative distribution of the number of basins of a given size ( with regression line ) for a representative instances with , .table [ tab : cumulsize ] shows the average ( of 30 independent landscapes ) correlation coefficients and linear regression coefficients ( intercept ( and slope ( ) ) between the number of nodes and the basin sizes for instances with and all for all the studied values of .notice that distribution decays exponentially or faster for the lower and it is closer to exponential for the higher .this observation is relevant to theoretical studies that estimate the size of attraction basins ( see for example ) .these studies often assume that the basin sizes are uniformly distributed , which is not the case for the landscapes studied here . from the slopes of the regression lines ( table [ tab : cumulsize ] ) one can see that high values of give rise to steeper distributions ( higher values ) .this indicates that there are less basins of large size for large values of . in consequence ,basins are broader for low values of , which is consistent with the fact that those landscapes are smoother . 
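The log-linear fits reported in table [tab:cumulsize] can be reproduced from any collection of basin sizes, such as the one produced by the exhaustive extraction sketched earlier; the sketch below uses synthetic sizes in place of real instance data.

```python
import numpy as np

def cumulative_size_fit(basin_sizes):
    """Fit log(number of basins of size >= s) as a linear function of s, as in
    the cumulative distribution plots; returns intercept, slope and correlation."""
    sizes = np.asarray(basin_sizes, dtype=float)
    uniq = np.unique(sizes)
    cum = np.array([(sizes >= s).sum() for s in uniq])
    x, y = uniq, np.log(cum)
    slope, intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return intercept, slope, r

# synthetic basin sizes standing in for the output of an exhaustive extraction
rng = np.random.default_rng(4)
sizes = rng.geometric(p=0.02, size=300)
intercept, slope, r = cumulative_size_fit(sizes)
print(f"intercept={intercept:.2f}  slope={slope:.4f}  r={r:.3f}")
```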
[ !ht ] + + the scatter - plots in fig .[ fig : cor_fit - size ] illustrate the correlation between the basin sizes of local maxima ( in logarithmic scale ) and their fitness values .two representative instances for = 18 and = 4 , 8 are shown .notice that there is a clear positive correlation between the fitness values of maxima and their basins sizes .in other words , the higher the peak the wider tend to be its basin of attraction .therefore , on average , with a stochastic local search algorithm , the global optimum would be easier to find than any other local optimum .this may seem surprising .but , we have to keep in mind that as the number of local optima increases ( with increasing ) , the global optimum basin is more difficult to reach by a stochastic local search algorithm ( see fig .[ nbasins ] ) .this observation offers a mental picture of landscapes : we can consider the landscape as composed of a large number of mountains ( each corresponding to a basin of attraction ) , and those mountains are wider the taller the hilltops .moreover , the size of a mountain basin grows exponentially with its hight .we now briefly describe the statistical measures used for our analysis of maxima networks .the standard clustering coefficient does not consider weighted edges .we thus use the _ weighted clustering _ measure proposed by , which combines the topological information with the weight distribution of the network : where , if , if and . for each triple formed in the neighborhood of the vertex , counts the weight of the two participating edges of the vertex . is defined as the weighted clustering coefficient averaged over all vertices of the network .the standard topological characterization of networks is obtained by the analysis of the probability distribution that a randomly chosen vertex has degree . for our weighted networks , a characterization of weightsis obtained by the _ connectivity and weight distributions _ and that any given edge has incoming or outgoing weight . in our study, for each node , the sum of outgoing edge weights is equal to as they represent transition probabilities .so , an important measure is the weight of self - connecting edges ( remaining in the same node ) .we have the relation : . the vertex _ strength _ , ,is defined as , where the sum is over the set of neighbors of .the strength of a node is a generalization of the node s connectivity giving information about the number and importance of the edges .another network measure we report here is _ disparity _ , which measures how heterogeneous are the contributions of the edges of node to the total weight ( strength ) : the disparity could be averaged over the node with the same degree .if all weights are nearby of , the disparity for nodes of degree is nearby . finally , in order to compute the average shortest path between two nodes on the optima network of a given landscape , we considered the expected number of bit - flip mutations to pass from one basin to the other .this expected number can be computed by considering the inverse of the transition probabilities between basins . in other words , if we attach to the edges the inverse of the transition probabilities , this value would represent the average number of random mutations to pass from one basin to the other . 
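Before the distance measure is formalized below, the following sketch illustrates how the weighted measures just introduced can be computed from a basin transition matrix. It treats the graph as undirected for the clustering term and drops self-loops, which is a simplification of the directed, self-looped transition graph used in the text; the small matrix at the end is purely illustrative.

```python
import numpy as np

def weighted_network_measures(W):
    """Out-strength, disparity of the outgoing weights, and a Barrat-style
    weighted clustering coefficient for a basin transition matrix W, where
    W[i][j] is the probability of moving from basin i to basin j.  Self-loops
    are dropped and the weights are symmetrized for the clustering term."""
    W = np.array(W, dtype=float)
    np.fill_diagonal(W, 0.0)
    strength = W.sum(axis=1)                           # outgoing strength
    disparity = ((W / strength[:, None]) ** 2).sum(axis=1)

    S = (W + W.T) / 2.0                                # symmetrized weights
    A = (S > 0).astype(float)
    deg = A.sum(axis=1)
    s_sym = S.sum(axis=1)
    n = len(W)
    cw = np.zeros(n)
    for i in range(n):
        if deg[i] < 2:
            continue
        acc = 0.0
        for j in range(n):
            for h in range(n):
                if j != h and A[i, j] and A[i, h] and A[j, h]:
                    acc += (S[i, j] + S[i, h]) / 2.0   # weight of the triangle legs at i
        cw[i] = acc / (s_sym[i] * (deg[i] - 1))
    return strength, disparity, cw

# a small, purely illustrative transition matrix between four basins
W = [[0.50, 0.30, 0.15, 0.05],
     [0.20, 0.60, 0.10, 0.10],
     [0.25, 0.10, 0.55, 0.10],
     [0.10, 0.20, 0.10, 0.60]]
s, y2, cw = weighted_network_measures(W)
print("strength  :", np.round(s, 3))
print("disparity :", np.round(y2, 3))
print("clustering:", np.round(cw, 3))
```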
more formally , the distance ( expected number of bit - flip mutations ) between two nodes is defined by where .now , we can define the length of a path between two nodes as being the sum of these distances along the edges that connect the respective basins . & & & + + 2 & & & & & + 4 & & & & & + 6 & & & & & + 8 & & & & & + 10 & & & & & + 12 & & & & & + 13 & & & & & + + 2 & & & & & + 4 & & & & & + 6 & & & & & + 8 & & & & & + 10 & & & & & + 12 & & & & & + 14 & & & & & + 15 & & & & & + + 2 & & & & & + 4 & & & & & + 6 & & & & & + 8 & & & & & + 10 & & & & & + 12 & & & & & + 14 & & & & & + 16 & & & & & + 17 & & & & & + in this section we study in more depth some network features which can be related to stochastic local search difficulty on the underlying fitness landscapes .table [ tab : statistics ] reports the average ( over 30 independent instances for each and ) of the network properties described . and are , respectively , the mean number of vertices and the mean number of edges of the graph for a given rounded to the next integer . is the mean weighted clustering coefficient . is the mean disparity , and is the mean path length .the fourth column of table [ tab : statistics ] lists the average values of the weighted clustering coefficients for all and .it is apparent that the clustering coefficients decrease regularly with increasing for all . for the standard unweighed clustering, this would mean that the larger is , the less likely that two maxima which are connected to a third one are themselves connected .taking weights , i.e. transition probabilities into account this means that either there are less transitions between neighboring basins for high , and/or the transitions are less likely to occur .this confirms from a network point of view the common knowledge that search difficulty increases with . [ !ht ] + +the average shortest path lengths are listed in the sixth column of table [ tab : statistics ] . fig .[ fig : distances ] ( top ) is a graphical illustration of the average shortest path length between optima for all the studied landscapes .notice that the shortest path increases with , this is to be expected since the number of optima increases exponentially with .more interestingly , for a given the shortest path increases with , up to , and then it stagnates and even decreases slightly for the . this is consistent with the well known fact that the search difficulty in landscapes increases with .however , some paths are more relevant from the point of view of a stochastic local search algorithm following a trajectory over the maxima network . in order to better illustrate the relationship of this network property with the search difficulty by heuristic local search algorithms , fig .[ fig : distances ] ( bottom ) shows the shortest path length to the global optimum from all the other optima in the landscape .the trend is clear , the path lengths to the optimum increase steadily with increasing . [cols="^ " , ]we have proposed a new characterization of combinatorial fitness landscapes using the well - known family of landscapes as an example .we have used an extension of the concept of inherent networks proposed for energy surfaces in order to abstract and simplify the landscape description . 
in our casethe inherent network is the graph where the nodes are all the local maxima and the edges accounts for transition probabilities ( using the 1-flip operator ) between the local maxima basins of attraction .this mapping leads to oriented weighted graphs , instead of the more commonly used unordered and unweighed ones .we believe that the present representation is closer to the view offered by a monte carlo search heuristic such as simulated annealing which produces a trajectory on the configuration space based on transition probabilities according to a boltzmann equilibrium distribution . we have exhaustively obtained these graphs for , and for all even values of , plus , and conducted a network analysis on them . the network representation of the fitness landscapes has proved useful in characterizing the topological features of the landscapes andgives important information on the structure of their basins of attraction .in fact , our guiding motivation has been to relate the statistical properties of these networks , to the search difficulty of the underlying combinatorial landscapes when using stochastic local search algorithms ( based on the bit - flip operator ) to optimize them .we have found clear indications of such relationships , in particular : the clustering coefficients : : : suggest that , for high values of , the transition between a given pair of neighboring basins is less likely to occur .the shortest paths to the global optimum : : : become longer with increasing , and for a given , they clearly increase with higher . the outgoing weight distribution: : : indicate that , on average , the transition probabilities from a given node to neighbor nodes are higher for low . the incoming weight distribution: : : indicate that , on average , the transition probabilities from the neighborhood of a node become lower with increasing .the disparity coefficients : : : reflect that for high the transitions to other basins tend to become equally likely , which is an indication of the randomness of the landscape .the previous results clearly confirm and justify from a novel network point of view the empirically known fact that landscapes become harder to search as they become more and more random with increasing .the construction of the maxima networks requires the determination of the basins of attraction of the corresponding landscapes .we have thus also described the nature of the basins , and found that the size of the basin corresponding to the global maximum becomes smaller with increasing .the distribution of the basin sizes is approximately exponential for all and , but the basin sizes are larger for low , another indirect indication of the increasing randomness and difficulty of the landscapes when becomes large .furthermore , there is a strong positive correlation between the basin size and the degree of the corresponding maximum , which confirms that the synthetic view provided by the maxima graph is a useful .finally , we found that the size of the basins boundaries is roughly the same as the size of basins themselves .therefore , nearly all the configurations in a given basin have a neighbor solution in another basin .this observation suggests a different landscape picture than the smooth standard representation of 2d landscapes where the basins of attraction are visualized as real mountains .some of these results on basins in landscapes were previously unknown .this study represents our first attempt towards a topological and statistical characterization of easy and 
hard combinatorial landscapes , from the point of view of complex networks analysis .much remains to be done .first of all , the results found should be confirmed for larger instances of landscapes .this will require good sampling techniques , or theoretical studies since exhaustive sampling becomes quickly impractical .other landscape types should also be examined , such as those containing neutrality , which are very common in real - world applications , and especially the landscapes generated by important hard combinatorial problems such as the traveling salesman problem and other resource allocation problems .work is in progress for neutral versions of landscapes and for knapsack problems .finally , the landscape statistical characterization is only a step towards implementing good methods for searching it .we thus hope that our results will help in designing or estimating efficient search techniques and operators .p. f. stadler .fitness landscapes . in m.lssig and valleriani , editors , _ biological evolution and statistical physics _ , volume 585 of _ lecture notes physics _ , pages 187207 , heidelberg , 2002 .springer - verlag .
we propose a network characterization of combinatorial fitness landscapes by adapting the notion of inherent networks proposed for energy surfaces . we use the well - known family of NK landscapes as an example . in our case the inherent network is the graph whose vertices represent the local maxima in the landscape , and the edges account for the transition probabilities between their corresponding basins of attraction . we exhaustively extracted such networks on representative NK landscape instances , and performed a statistical characterization of their properties . we found that most of these network properties are related to the search difficulty on the underlying landscapes with varying values of K .
= knuth and yao showed that the expected number of independent bernoulli random bits needed to generate an integer - valued random variable whose distribution is given by , where , is at least equal to the binary entropy of : they also exhibited an algorithm dubbed the ddg tree algorithm for which the expected number of random bernoulli bits is not more than . by grouping, one can thus develop algorithms for generating batches of independent copies of such that the expected number of bernoulli random bits per random variable does not exceed .while these results settle the discrete random variate case quite satisfactorily , the generation of continuous or mixed random variables has not been treated satisfactorily in the literature .the objective of this note is to study the number of bernoulli random bits to generate a random variable with a given precision , provided that we can define `` precision '' in a satisfactory manner .note that any algorithm that takes as input the accuracy parameter , returns a random variable where ,, , are independent identically distributed ( or i.i.d . )bernoulli bits , is the number of bits needed , and , , are given sequences of functions . for a vector ,let denote the -norm of for : .for , the -norm is . with ,all -norms are the same for ] and its binary expansion be to complete the proof , it remains to prove that if is the first non - zero coefficient of the binary expansion of , then there are two cases : either ( 1 ) or ( 2 ) .the inequalities are strict for case 2 since .for the first case , , and line ( [ crosseteton2 ] ) is obviously true . for the second case , then for the upper bound , for the lower bound , we now recall the han - hoshi algorithm published in that implements the inversion method . given with countably finite or infinite , the algorithm partitions the interval ] , then the unique such that is distributed according to . for a binary random source of unbiased i.i.d .bits , their algorithm is as follow : return .the following two figures [ exec_fig_hanhoshi1 ] and [ exec_fig_hanhoshi2 ] are examples that show the underlying ddg tree during the execution of the han and hoshi algorithm . . the cumulative values are , , , , , , and . ] [ exec_fig_hanhoshi1 ] such that , , and . ]let be the number of random coins needed and also the number of iteration of the `` repeat '' loop .for , . to every node ( internal or external ) corresponds an interval .the root corresponding to the interval . for each internal node corresponds an interval that is not contained in one of the interval , and , if the source produces , then the left child corresponds to the interval and , if , then the right child corresponds to .each leaf ( external node ) corresponds to an interval entirely contained in upon which the integer is returned with probability .the following theorem was proved by han and hoshi : [ hhthm ] for the han - hoshi algorithm , our ( new ) proof partitions the leaves for symbol in the ddg tree arbitrarily into two sets , and , such that and each possess at least one leaf per level .let , , where is the probability attached to leaf , i.e. .we have . 
by nesting and elementary calculations , let be the -th bit in the binary expansion of , and let be the -th bit for .then as in the proof of theorem [ kythm ] , we have so that , using ( [ adenozine ] ) , this section , we give a lower bound for the complexity of sampling any continuous distribution to an arbitrary precision .let be a countable partition of , and let be a fixed precision parameter .consider the infinite graph with as vertices the sets , and as edges all pairs such that therefore , if is not an edge of , then for all , .let be the maximal degree of any vertex of .we now state a lemma that we shall use in conjunction with the knuth - yao result , mentioned and reproved in the previous section , in order to prove our main theorem mentioned in the introduction .[ lowerbndlem ] let be a target random vector of .let be an output with the property that , with probability one , .let be the number of bits used to generate by an algorithm .then where and are as above .we can maximize the bound from lemma [ lowerbndlem ] , of course , by selecting the most advantageous partition and combination .the bound from lemma [ lowerbndlem ] coincides with the bound in shannon when the distribution is discrete with a finite number of atoms since , in that case , by choosing sufficiently small .let and be two ( dependent ) random variables of , and denote by . note that if is not an edge of .thus is the random number of bits needed to generate a discrete random variable that outputs a vertex of with probability , then and therefore it is interesting to recall a general result from csiszr about the hypercube partition entropy of an absolutely continuous random vector that will become useful later .of particular interest to us is the cubic partition partitioned by .the cells of this partition are of the form we recall that if has finite entropy a condition we refer to as rnyi s condition then if has a density , is well - defined , i.e. , it is either finite or .we have [ absctsrenyithm ] under rnyi s condition , for general partition , and random variable with density , where denotes the lebesgue measure .in particular , fix .if is uniform on and , then thus and , by jensen s inequality and the concavity of , the inequality follows by summing over .[ csiszar_lemma ] let have density , and let rnyi s condition be satisfied .if , then as , if , then as , the fifth theorem of csiszr stipulates that if and is not absolutely continuous , then , as , for more information about the asymptotic theory for the entropy of partitioned distributions as the partitions become finer , one can consult rnyi , csiszr , csiszr , and linder and zeger .[ lowerbndthm ] let have density .let be an output with the property that with probability one , . then , under rnyi s condition , where is the volume of the unit ball in , and is the number of random bits needed to generate to .let be a cubic partition .then where is the maximal degree in the graph on defined by connecting with if .we set and use observe that if denotes the -ball of radius centered at , then by elementary geometric considerations , so that as , also , so that a random variable .we call a partition a -partition if for every set , there exists ( called the center ) such that then any algorithm that selects with probability can be used to generate a random variable that approximate to within . after generating , set then , necessarily , there is a coupling with . 
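The partition construction just described can be realized with the interval algorithm of Han and Hoshi recalled earlier: unbiased bits shrink a dyadic subinterval of [0,1] until it lies inside one cell of the cumulative distribution, the index of that cell is returned, and the centre of the corresponding partition cell serves as the approximation. The sketch below does this for a distribution on [0,1) partitioned into intervals of equal length; the example cdf, the use of Python's built-in random bits, and the rational rounding of the cell probabilities are illustrative assumptions rather than part of the paper's construction.

```python
import random
from fractions import Fraction

def han_hoshi_interval(probs, next_bit):
    """Interval algorithm of Han and Hoshi: returns index k with probability
    probs[k], consuming unbiased bits from next_bit().  Exact rational
    arithmetic keeps the cell boundaries free of rounding error."""
    cum = [Fraction(0)]
    for p in probs:
        cum.append(cum[-1] + p)
    lo, hi = Fraction(0), Fraction(1)
    while True:
        for k in range(len(probs)):
            if cum[k] <= lo and hi <= cum[k + 1]:    # dyadic interval inside a cell
                return k
        mid = (lo + hi) / 2                          # otherwise halve it with one bit
        if next_bit():
            lo = mid
        else:
            hi = mid

def sample_to_precision(cdf, delta, next_bit=lambda: random.getrandbits(1)):
    """Partition method on [0, 1): pick a cell of width delta via the interval
    algorithm and return its centre, so the output is within delta/2 of X."""
    m = int(1 / delta)
    probs = [Fraction(cdf((k + 1) * delta) - cdf(k * delta)).limit_denominator(10**9)
             for k in range(m)]
    probs[-1] += 1 - sum(probs)                      # make the weights sum to one exactly
    k = han_hoshi_interval(probs, next_bit)
    return (k + 0.5) * delta

# example: a law on [0,1) with cdf(x) = x**2, approximated on a grid of width 1/64
xs = [sample_to_precision(lambda x: x * x, 1 / 64) for _ in range(5)]
print([round(x, 4) for x in xs])
```

By the Han-Hoshi theorem recalled above, the expected number of bits consumed by the cell selection stays within a small additive constant of the entropy of the cell probabilities.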
if the selection of is done with the help of the method of knuth and yao , then , if still denotes the number of random bits required , for , we can take , the cubic partition with sides . for ,a simple partition into intervals of length can be used for all values of .if has a density and or , then the procedure suggested above has , as , where in the last step we assume rnyi s condition and .compare with the lower bound and note that the difference is . for later reference ,we recall these values of for the main distributions : \hspace{-1mm}:\phantom{12 } & \mathcal{e}(f)=0,\\ \text{exponential}(1)\hspace{-1mm}:\phantom{12 } & \mathcal{e}(f)=\log_{2}(e),\\ \text{normal}(0,1)\hspace{-1mm}:\phantom{12 } & \mathcal{e}(f)=\log_{2}\sqrt{2\pi{}e}.\end{aligned}\ ] ] recall that for , , a scale factor shows up as in the upper and lower bounds because for general , we can take the cubic partition with sides . under rnyi s condition and , we have the difference with the lower bound is using , , we obtain which unfortunately increases linearly with . to avoid this growing differential which we did not have for seems necessary to consider partitions that better approximate -balls .the inversion method for generating a random variable with distribution function uses the property that has distribution function , where denotes the inverse , and is uniform ] density , we have , and so the bound of theorem [ thm_inv_mono_bornee ] is however the `` o(1 ) '' can be omitted in this case as the following simple calculation shows : theorem [ thm_inv_mono_bornee ] improves over the bound for the partition method ( for ) by . under other regularity conditions, one can hope to obtain similar bounds that beat the partition bound . for the exponential density , the inversion method yields where flajolet and saheb proposed a method for the exponential law that has where as . for the normal law ,karney proposes a method that addresses the variable approximation issue but does not offer explicit bounds .inversion would yield but the drawback is that this requires the presence of ( an oracle for ) , the inverse gaussian distribution function .even the partition method requires a nontrivial oracle , namely . to sidestep this , one can use a slightly more expensive method based on the box - mller , which states that the pair of random variables with exponential and uniform on the unit circle , provides a standard gaussian in of zero mean and unit covariance matrix .the random variable is maxwell , i.e. , it has density , , and its differential entropy is where is the euler - mascheroni constant .we sketch the procedure , which also serves as an example for more complicated random variate generation problems .assume that the two normals are required with -accuracy each ( this corresponds to the choice of and ) .then we first generate a maxwell random variable by inversion , noting that the maxwell random variable is needed with -accuracy .the maxwell law is unimodal with mode at .its left piece has probability .so we first pick a piece randomly using on average no more than two bits .the we apply inversion on the appropriate piece . by theorem [ thm_inv_mono_bornee ] , we use random bits where the generated approximation is called . next we generate a uniform random variable with accuracy the generated value has . 
since has differential entropy , we see that the expected number of bits , , needed is bounded by then we return and claim that jointly , and similarly for the cosine .next , putting everything together , we see that the total expected number of bits is not more than the lower bound for generating two independent gaussians is less , i.e. , a sequence of i.i.d .random variables , , into a sequence of i.i.d .bernoulli bits has been the subject of many papers .the setting of interest to us is the following .let , , be a ( possibly infinite ) number of cumulative distributions supported on the positive integers .let be a fixed probability vector .let , , , be i.i.d .random integers drawn from . given for ,draw independently from the distributions , , , . as a special case, we have the classical setting when and then , , , are i.i.d . according to .let , , have binary entropies given by , , , all assumed to be finite .in other words the entropy of is denoted by .assume also that [ quibivilortrinuloxifene ] there exits an algorithm ( described below ) that , upon input , , , outputs a sequence of i.i.d .bernoulli bits where furthermore , these bits are independent of .theorem [ quibivilortrinuloxifene ] describes how many perfect random bits we can extract from , , , , i.e. , should be near the information content , the entropy of . not surprisingly then, the way to achieve this can be inspired by the optimal or near - optimal methods of compression , and , in particular , arithmetic coding . note that one can assume for where are i.i.d .uniform random variables on ] random variable ( with binary expansion ) with the ( infinite ) data sequence .the bits , , are i.i.d .bernoulli and independent of , , , to be more precise , consider this algorithm . a sequence of pairs , , with and as described previously for . to verify the correctness of algorithm [ algo_extractx ] , the intervals ] and let be independent bernoulli random variables such that . if , then first of all , as shown in , if is an exponential mean random variable , then is distributed as a geometric random variable with parameter , and , the fractional part of , is distributed as a truncated exponential random variable on the interval , and and are independent .we concentrate on the fractional part therefore .the following theorem tells us that the fractional part is the convolution of independent bernoulli random variables .[ gravelconvolbernoexpo ] let be a sequence of independent bernoulli distributed random variables with \text { , and } \\\mathbf{p}\{x_j=0\}&=1-p_j\text { for all .}\end{aligned}\ ] ] let . if then is a truncated exponential random variable , i.e. , the p.d.f. of is $.}\ ] ] since the fourier transform of is since is the sum of the independent s , we have which is the fourier transform of . we can thus generate with precision if we set , and let since a raw bernoulli random variable requires bits on average , this simple method , which has accuracy guarantee , uses an average not more than bits . on the other hand ,the lower bound for is where the factor `` 2 '' in can be avoided when batch generation is used .it can also be eliminated at a tremendous storage cost if the vector is generated using the algorithm of knuth - yao since we know the individual probabilities , i.e. 
, for all . one can show that thus , for the knuth - yao method for this vector , again using knuth - yao , a geometric random variable can be generated exactly using not more than random bits . an exponential random variable generated by this method has an overall expected complexity as . both authors thank tamás linder for his help . claude gravel wants to thank gilles brassard from université de montréal for his financial support . d. e. knuth and a. c .- c . yao , `` the complexity of nonuniform random number generation , '' in _ algorithms and complexity : new directions and recent results _ , j. f. traub , ed . new york : carnegie - mellon university , computer science department , academic press , 1976 , pp . 357 - 428 , reprinted in knuth 's _ selected papers on analysis of algorithms _ ( csli , 2000 ) .
we study the problem of the generation of a continuous random variable when a source of independent fair coins is available . we first motivate the choice of a natural criterion for measuring accuracy , the wasserstein metric , and then show a universal lower bound for the expected number of required fair coins as a function of the accuracy . in the case of an absolutely continuous random variable with finite differential entropy , several algorithms are presented that match the lower bound up to a constant , which can be eliminated by generating random variables in batches . + * keywords : * random number generation , random bit model , differential entropy , partition entropy , inversion , probability integral transform , tree - based algorithms , random sampling
rapid growth in the use of wireless services coupled with inefficient utilization of scarce spectrum resources has led to much interest in the analysis and development of cognitive radio systems .hence , performance analysis of cognitive radio systems is conducted in numerous studies to gain more insights into their potential uses . in most of the previous work ,transmission rate is considered as the main performance metric .for instance , the secondary user mean capacity was studied in by imposing a constraint on the signal - to - interference - noise ratio ( sinr ) of the primary receiver and considering different channel side information ( csi ) levels .the authors in determined the optimal power allocation strategies that achieve the ergodic capacity and the outage capacity of the cognitive radio channel under various power and interference constraints . in ,the authors studied the optimal sensing time and power allocation strategy to maximize the average throughput in a multiband cognitive radio network .recently , the work in proposed generic expressions for the optimal power allocation scheme and the ergodic capacity of a spectrum sharing cognitive radio under different levels of knowledge on the channel between the secondary transmitter and the secondary receiver and the channel between the secondary transmitter and the primary receiver subject to average / peak transmit power constraints and the interference outage constraint .although transmission rate is a common performance metric considered for secondary users , error rate is another key performance measure to quantify the reliability of cognitive radio transmissions . in this regard ,several recent studies incorporate error rates in cognitive radio analysis .for instance , the authors in characterized the optimal constellation size of -qam and the optimal power allocation scheme that maximize the channel capacity of secondary users for a given target bit error rate ( ber ) , interference and peak power constraints .the work in mainly focused on the power allocation scheme minimizing the upper bound on the symbol error probability of phase shift keying ( psk ) in multiple antenna transmissions of secondary users .the authors in proposed a channel switching algorithm for secondary users by exploiting the multichannel diversity to maximize the received snr at the secondary receiver and evaluated the transmission performance in terms of average symbol error probability .the optimal antenna selection that minimizes the symbol error probability in underlay cognitive radio systems was investigated in .moreover , the recent work in analyzed the minimum ber of a cognitive transmission subject to both average transmit power and interference power constraints . in their model , the secondary transmitter is equipped with multiple antennas among which only one antenna that maximizes the weighted difference between the channel gains of transmission link from the secondary transmitter to the secondary receiver and interference link from the secondary transmitter to the primary receiver is selected for transmission .the authors in obtained a closed - form ber expression under the assumption that the interference limit of the primary receiver is very high . 
also , the work in focused on the optimal power allocation that minimizes the average ber subject to peak / average transmit power and peak / average interference power constraints while the interference on the secondary users caused by primary users is omitted .moreover , in , the opportunistic scheduling in multiuser underlay cognitive radio systems was studied in terms of link reliability . in the error rate analysis of the above - mentioned studies , channel sensing errorsare not taken into consideration .practical cognitive radio systems , which employ spectrum sensing mechanisms to learn the channel occupancy by primary users , generally operate under sensing uncertainty arising due to false alarms and miss - detections . for instance , different spectrum sensing methods for gaussian , and non - gaussian environments , and dynamic spectrum access strategies have extensively been studied recently in the literature , and as common to all schemes , channel sensing is generally performed with errors and such errors can lead to degradation in the performance . with this motivation , we in this paper study the symbol error rate performance of cognitive radio transmissions in the presence of imperfect channel sensing decisions .we assume that secondary users first sense the channel in order to detect the primary user activity before initiating their own transmissions . following channel sensing , secondary users employ two different transmission schemes depending on how they access the licensed channel : sensing - based spectrum sharing ( sss ) and opportunistic spectrum access ( osa ) . in the sss scheme , cognitive users are allowed to coexist with primary users in the channel as long as they control the interference by adapting the transmission power according to the channel sensing results .more specifically , secondary users transmit at two different power levels depending on whether the channel is detected as busy or idle . in the osa scheme ,cognitive users are allowed to transmit data only when the channel is detected as idle , and hence secondary users exploit only the silent periods in the transmissions of primary users , called as spectrum opportunities . due to the assumption of imperfect channel sensing , two types of sensing errors , namely false alarms andmiss detections , are experienced .false alarms result in inefficient utilization of the idle channel while miss - detections lead to cognitive users transmission interfering with primary user s signal .such interference can be limited by imposing interference power constraints . in our error rate analysis, we initially formulate the optimal decision rule and error rates for an arbitrary digital modulation scheme .subsequently , motivated by the requirements to efficiently use the limited spectrum in cognitive radio settings , we concentrate on quadrature amplitude modulation ( qam ) as it is a bandwidth - efficient modulation format . 
more specifically , in our analysis , we assume that the cognitive users employ rectangular qam for data transmission , analysis of which , as another benefit , can easily be specialized to obtain results for square qam , pulse amplitude modulation ( pam ) , quadrature phase - shift keying ( qpsk ) , and binary phase - shift keying ( bpsk ) signaling .in addition to the consideration of sensing errors and relatively general modulation formats , another contribution of this work is the adoption of a gaussian mixture model for the primary user s received faded signals in the error - rate analysis .the closed - form error rate expressions in aforementioned works are obtained when primary user s faded signal at the secondary receiver is assumed to be gaussian distributed . however , in practice , cognitive radio transmissions can be impaired by different types of non - gaussian noise and interference , e.g. , man - made impulsive noise , narrowband interference caused by the primary user , primary user s modulated signal , and co - channel interference from other cognitive radios .therefore , it is of interest to investigate the error rate performance of cognitive radio transmissions in the presence of primary user s interference which is modeled to have a gaussian mixture probability density function ( pdf ) ( which includes pure gaussian distribution as a special case ) .main contributions of this paper can be summarized as follows . under the above - mentioned assumptions ,we first derive , for both sss and osa schemes , the optimal detector structure , and then we present a closed - form expression of the average symbol error probability under constraints on the transmit power and interference . through this analysis , we investigate the impact of imperfect channel sensing ( i.e. , the probabilities of detection and false alarm ) , interference from the primary user , and both transmit power and interference constraints on the error rate performance of cognitive transmissions . also , the performances of sss and osa transmission schemes are compared when primary user s faded signal is modeled to have either a gaussian mixture or a purely gaussian density .the remainder of this paper is organized as follows : section [ sec : system_model ] introduces the system model . in section [ sec : general_formulation ] , general formulations for the optimal detection rule and average symbol error probability in the presence of channel sensing errors are provided for sss and osa schemes . in section [ sec : error_rate_qam_pam ] , closed - form average symbol error probability expressions for specific modulation types , i.e. 
, arbitrary rectangular qam and pam are derived subject to both transmit power and interference constraints under the assumptions of gaussian - mixture - distributed primary user faded signal and imperfect channel sensing decisions .numerical results are provided and discussed in section [ sec : num_results ] .finally , section [ sec : conclusion ] concludes the paper .we consider a cognitive radio system consisting of a pair of secondary transmitter - receiver and a pair of primary transmitter - receiver .the secondary user initially performs channel sensing , which can be modeled as a hypothesis testing problem .assume that denotes the hypothesis that the primary users are inactive in the channel , and denotes the hypothesis that the primary users are active .various channel sensing methods , including energy detection , cyclostationary detection , and matched filtering , have been proposed and analyzed in the literature .regardless of which method is used , one common feature is that errors in the form of miss - detections and false - alarms occur in channel sensing .the ensuing analysis takes such errors into account and depends on the sensing scheme only through the detection and false - alarm probabilities .assume that and denote the sensing decisions that the primary users are inactive and active , respectively .then , the detection and false - alarm probabilities can be expressed respectively as the following conditional probabilities : following channel sensing , the secondary transmitter performs data transmission over a flat - fading channel . in the sss scheme ,the average transmission power is selected depending on the channel sensing decision .more specifically , the average transmission power is if primary user activity is detected in the channel ( denoted by the event ) whereas the average power is if no primary user transmissions are sensed ( denoted by the event ) .we assume that there is a peak constraint on these average power levels , i.e. , we have where denotes the peak transmit power limit .we further impose an average interference constraint in the following form : where is the detection probability and is the channel fading coefficient between the secondary transmitter and the primary receiver .note that with probability , primary user activity is correctly detected and primary receiver experiences average interference proportional to . on the other hand , with probability ,miss - detections occur , secondary user transmits with power , and primary receiver experiences average interference proportional to .therefore , can be regarded as a constraint on the average interference inflicted on the primary user primary receivers by replacing the constraint in ( [ eq : interference_power_constraint_sss ] ) with , where is the channel fading coefficient between the secondary transmitter and the primary receiver . in thissetting , effectively becomes a constraint on the worst - case average interference . ] . in the osa scheme ,no transmission is allowed when the channel is detected as busy and hence , we set = 0 . now with the peak power and average interference constraints , we have which can be combined to write above , we have introduced the average interference constraint . 
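to make the constraint structure concrete, the sketch below computes candidate power levels under a peak limit and an average interference budget. the paper's displayed constraints did not survive extraction, so the weighted form used here, with the detection probability multiplying the busy - sensed power and the miss detection probability multiplying the idle - sensed power, is a plausible reading of the description above rather than the authors' own formula; all numerical values are placeholders.

```python
# Hedged sketch of the transmit-power selection described above (placeholder
# parameters; the weighted average-interference form is an assumption).

def osa_power(P_pk, Q_avg, P_d, mean_g2):
    """Idle-sensed power for OSA under a peak limit and an average-interference
    budget, where interference arises only through miss-detections (prob. 1 - P_d)."""
    if P_d >= 1.0:                       # perfect detection: only the peak limit binds
        return P_pk
    return min(P_pk, Q_avg / ((1.0 - P_d) * mean_g2))

def sss_powers(P_pk, Q_avg, P_d, mean_g2, split=0.5):
    """SSS: pick (P0, P1) on the boundary of the assumed budget
    P_d*P1*E|g|^2 + (1 - P_d)*P0*E|g|^2 <= Q_avg, capped by the peak limit.
    `split` is only an illustrative knob dividing the budget between the two
    levels; the paper instead optimizes (P0, P1) to minimize the error rate."""
    budget = Q_avg / mean_g2
    P1 = min(P_pk, split * budget / max(P_d, 1e-12))
    P0 = min(P_pk, (budget - P_d * P1) / max(1.0 - P_d, 1e-12))
    return P0, P1

print(osa_power(P_pk=10.0, Q_avg=2.0, P_d=0.9, mean_g2=1.0))    # -> 10.0
print(sss_powers(P_pk=10.0, Q_avg=2.0, P_d=0.9, mean_g2=1.0))   # -> (10.0, 1.11...)
```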
however ,if the instantaneous value of the fading coefficient is known at the secondary transmitter , then a peak interference constraint in the form for can be imposed .note that transmission power in an idle - sensed channel is also required to satisfy the interference constraint due to sensing uncertainty ( i.e. , due to the consideration of miss - detection events ) .hence , a rather strict form of interference control is being addressed under these limitations .now , including the peak power constraint , we have for ( while keeping in the osa scheme ) . above, denotes the peak received power limit at the primary receiver . as a result of channel sensing decisions and the true nature of primary user activity, we have four possible cases which are described below together with corresponding input - output relationships : * _ case ( slowromancap1@ ) _ : a busy channel is sensed as busy , denoted by the joint event . * _ case ( slowromancap2@ ) _ : a busy channel is sensed as idle , denoted by the joint event . * _ case ( slowromancap3@ ) _ : an idle channel is sensed as busy , denoted by the joint event . * _ case ( slowromancap4@ ) _ : an idle channel is sensed as idle , denoted by the joint event . in the above expressions , is the transmitted signal , is the received signal , and denotes zero - mean , circularly - symmetric complex fading coefficient between the secondary transmitter and the secondary receiver with variance . is the circularly - symmetric complex gaussian noise with mean zero and variance , and hence has the pdf the active primary user s received faded signal at the secondary receiver is denoted by .notice that if the primary users are active and hence the hypothesis is true as in cases ( slowromancap1@ ) and ( slowromancap2@ ) , the secondary receiver experiences interference from the primary user s transmission in the form of .we assume that has a gaussian mixture distribution , i.e. , its pdf is a weighted sum of complex gaussian distributions with zero mean and variance for : where the weights satisfy with for all .primary user s received faded signal has a gaussian mixture distribution , if we , for instance , have where , which is the channel fading coefficient between the primary transmitter and the secondary receiver , is a circularly symmetric , complex , zero - mean , gaussian random variable , and is the primary user s modulated digital signal .note that is conditionally gaussian given .now , assuming that the modulated signal can take different values with prior probabilities given by for , has a gaussian mixture distribution as in ( [ eq : gaussian_mix ] ) . in the case of multiple primary transmitters for which we have the above argument can easily be extended if all channel fading coefficients are zero - mean gaussian distributed .gaussian mixture model is generally rich enough to accurately approximate a wide variety of density functions ( * ? ? ?* section 3.2 ) .this fact indicates that the applicability of our results can be extended to various other settings in which has a distribution included in this class of densities .additionally , in the special case of , the gaussian mixture distribution becomes the pure complex gaussian distribution .hence , the results obtained for the gaussian mixture distribution can readily be specialized to derive those for the gaussian distributed as well . 
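as a concrete illustration of the disturbance model, the sketch below draws the primary user's received faded signal from a zero mean complex gaussian mixture, adds circularly symmetric background noise, and checks numerically that the real and imaginary parts of the resulting disturbance are uncorrelated yet dependent, as claimed above. the mixture weights and variances are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_gaussian(var, size, rng):
    """Zero-mean circularly symmetric complex Gaussian with total variance `var`."""
    return rng.normal(0.0, np.sqrt(var / 2.0), size) + 1j * rng.normal(0.0, np.sqrt(var / 2.0), size)

def gaussian_mixture_interference(weights, variances, size, rng):
    """Pick component l with probability lambda_l, then draw CN(0, sigma_l^2)."""
    comp = rng.choice(len(weights), size=size, p=weights)
    return complex_gaussian(1.0, size, rng) * np.sqrt(np.asarray(variances)[comp])

# placeholder parameters (illustrative only)
lam      = [0.25, 0.25, 0.25, 0.25]
sig2     = [0.2, 0.4, 0.6, 0.8]
sigma_n2 = 0.1

s_p = gaussian_mixture_interference(lam, sig2, 200_000, rng)
n   = complex_gaussian(sigma_n2, 200_000, rng)
w   = s_p + n                     # total disturbance when the channel is truly busy

print(np.var(w), sum(l * s for l, s in zip(lam, sig2)) + sigma_n2)  # variances should match
print(np.corrcoef(w.real, w.imag)[0, 1])                   # ~0: components uncorrelated
print(np.corrcoef(np.abs(w.real), np.abs(w.imag))[0, 1])   # >0: but not independent
```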
as observed from the input - output relationships in ( [ eq : received_signal1])([eq : received_signal4 ] ) , when the true state of the primary users is idle , corresponding to the cases ( slowromancap3@ ) and ( slowromancap4@ ) , the additive disturbance is simply the background noise . on the other hand , in cases ( slowromancap1@ ) and ( slowromancap2@ )in which the channel is actually busy , the additive disturbance becomes whose distribution can be obtained through the convolution of density functions of the background gaussian noise and the primary user s received faded signal . using the result of gaussian convolution of gaussian mixture given by , the distribution of can be obtained as note that also has a gaussian mixture distribution .note further that the pdf of can be expressed in terms of its real and imaginary components as moreover , the marginal distributions of each component are given by it is easily seen that the pdf of in ( [ eq : gaussian_mix_conv ] ) can not be factorized into the product of the marginal pdf s of its real and imaginary parts , given in ( [ eq : marginal_inphase ] ) and ( [ eq : marginal_quad ] ) , respectively .therefore , the real and imaginary parts of the additive disturbance are dependent .when , i.e. , in the case of a pure gaussian distribution , the joint distribution can written as a product of its real and imaginary components since they are independent .in this section , we present the optimal decision rule and the average symbol error probability for the cognitive radio system in the presence of channel sensing errors .we provide general formulations applicable to any modulation type under sss and osa schemes .more specific analysis for arbitrary rectangular qam and pam is conducted in section [ sec : error_rate_qam_pam ] . in the cognitive radio setting considered in this paper , the optimal maximum _ a posteriori _ probability ( map ) decision rule given the sensing decision and the channel fading coefficient can be formulated for any arbitrary -ary digital modulation as follows : where is the prior probability of signal constellation point and .above , ( [ eq : map_decision_rule-2 ] ) follows from bayes rule and ( [ eq : cond - prob - yr ] ) is obtained by conditioning the density function on the hypotheses and .note that in ( [ eq : cond - prob - yr ] ) is the conditional distribution of the received real signal given the transmitted signal , channel fading coefficient , channel sensing decision , and true state of the channel , and can be expressed as for .note that the sensing decision affects the density function through the power of the transmitted signal .moreover , the conditional probabilities in ( [ eq : cond - prob - yr ] ) can be expressed as where and are the prior probabilities of the channel being idle and busy , respectively , and the conditional probabilities in the form depend on the channel sensing performance . as discussed in section [ subsec : sensing ] , is the detection probability and is the false alarm probability . 
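the weights that enter the decision rule follow from bayes' rule applied to the prior occupancy probabilities and the sensing performance. the closed forms below are the standard bayes computation consistent with the definitions above; the paper's own displayed expressions were lost in extraction, and the numerical values in the example are placeholders.

```python
def sensing_posteriors(prior_busy, p_d, p_f):
    """Return P(H1 | busy-sensed), P(H1 | idle-sensed) via Bayes' rule.
    prior_busy = P(H1); p_d = detection prob.; p_f = false-alarm prob."""
    prior_idle = 1.0 - prior_busy
    p_sense_busy = p_d * prior_busy + p_f * prior_idle              # P(sensed busy)
    p_sense_idle = (1 - p_d) * prior_busy + (1 - p_f) * prior_idle  # P(sensed idle)
    p_busy_given_busy = p_d * prior_busy / p_sense_busy             # P(H1 | sensed busy)
    p_busy_given_idle = (1 - p_d) * prior_busy / p_sense_idle       # P(H1 | sensed idle)
    return p_busy_given_busy, p_busy_given_idle

# example: channel busy 40% of the time, P_d = 0.9, P_f = 0.1 (placeholder values)
print(sensing_posteriors(0.4, 0.9, 0.1))   # approximately (0.857, 0.069)
```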
from these formulations , we see that the optimal decision rule in general depends on the sensing reliability .the average symbol error probability ( sep ) for the map decision rule in ( [ eq : map_decision_rule ] ) in the sss scheme can be computed as the above average symbol error probability can further be expressed as in ( [ eq : pe_intdecisionregions ] ) \end{split}\end{aligned}\ ] ] ' '' '' where and are the decision regions of each signal constellation point for when the channel is sensed to be idle and busy , respectively .if cognitive user transmission is not allowed in the case of the channel being sensed as occupied by the primary users , we have the osa scheme for which the average probability of error can be expressed as this section , we conduct a more detailed analysis by considering rectangular qam transmissions to demonstrate the key tradeoffs in a lucid setting . correspondingly , we determine the optimal decision regions by taking channel sensing errors into consideration and identify the error rates for sss and osa schemes .we derive closed - form minimum average symbol error probability expressions under the transmit power and interference constraints .note that the results for qam can readily be specialized for pam , qpsk , and bpsk transmissions .the signal constellation point in rectangular qam signaling can be expressed in terms of its real and imaginary parts , respectively , as where the amplitude level of each component is given by above , and are the modulation size on the in - phase and quadrature components , respectively , and denotes the minimum distance between the signal constellation points and is given by where is the transmission power under sensing decision .it is assumed that the fading realizations are perfectly known at the receiver . in this case ,phase rotations caused by the fading can be offset by multiplying the channel output with where is the phase of the fading coefficient .hence , the modified received signal can be written in terms of its real and imaginary parts as follows : where the subscripts and are used to denote the real and imaginary components of the signal , respectively . note that and have the same statistics as and , respectively , due to their property of being circularly symmetric .hence , given the transmitted signal constellation point , the distribution of the modified received signal is given by moreover , the real and imaginary parts of noise , i.e. , and are independent zero - mean gaussian random variables , and the real and imaginary parts of primary users faded signal , i.e. , and , are gaussian mixture distributed random variables . in the following ,we characterize the decision regions of the optimal detection rule for equiprobable qam signaling in the presence of sensing uncertainty . [prop : decisionrule ] for cognitive radio transmissions with _equiprobable _ rectangular -qam modulation ( with constellation points expressed as in ( [ eq : complexqampoint])([eq : imaginaryqampoint ] ) ) under channel sensing uncertainty in both sss and osa schemes , the optimal detection thresholds under any channel sensing decision are located midway between the received signal points .hence , the optimal detector structure does not depend on the sensing decision ._ proof _ : see appendix [ app : proof - prop ] . 
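a minimal sketch of the resulting detector: after derotating by the channel phase and rescaling by the fading magnitude, each component is quantized to the nearest constellation level, which is equivalent to comparing against thresholds placed midway between neighbouring points. this is illustrative code, not the authors' implementation.

```python
import numpy as np

def qam_levels(M, d_min):
    """Amplitude levels of an M-ary PAM component with spacing d_min."""
    return d_min * (np.arange(M) - (M - 1) / 2.0)

def detect_rect_qam(y, h, M_I, M_Q, d_min):
    """Midpoint-threshold (nearest-neighbour) detection of rectangular QAM.
    y: received complex samples; h: complex fading coefficient known at the receiver."""
    z = y * np.exp(-1j * np.angle(h)) / np.abs(h)     # derotate and rescale
    li, lq = qam_levels(M_I, d_min), qam_levels(M_Q, d_min)
    # nearest-level index == comparison against thresholds at the midpoints
    i_hat = np.clip(np.rint(z.real / d_min + (M_I - 1) / 2.0), 0, M_I - 1).astype(int)
    q_hat = np.clip(np.rint(z.imag / d_min + (M_Q - 1) / 2.0), 0, M_Q - 1).astype(int)
    return li[i_hat] + 1j * lq[q_hat]

# demo: one noisy 4x4 (16-QAM) observation
h = 0.8 * np.exp(1j * 0.7)
x = qam_levels(4, 2.0)[1] + 1j * qam_levels(4, 2.0)[3]
y = h * x + 0.05 * (np.random.randn() + 1j * np.random.randn())
print(detect_rect_qam(np.array([y]), h, 4, 4, 2.0), x)
```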
in this subsection , we present closed - form average symbol error probability expressions under both transmit power and interference constraints for sss and osa schemes .initially , we express the error probabilities for a given value of the fading coefficient .subsequently , we address averaging over fading and also incorporate power and interference constraints .we note that in the presence of peak interference constraints , the transmitted power level depends on the fading coefficient experienced in the channel between the secondary and primary users as seen in ( [ eq : instantaneous - power - g ] ) .therefore , we in this case consider an additional averaging of the error rates with respect to . under the optimal decision rule given in the previous subsection , the average symbol error probability of _ equiprobable _ signals for a given fading coefficient can be expressed as where denotes the conditional error probability given the transmitted signal , channel fading , sensing decision , and true state of the channel .we can group the error patterns of rectangular -qam modulation into three categories . specifically , the probability of error for the signal constellation points on the corners is equal due to the symmetry in signaling and detection .the same is also true for the points on the sides and the inner points .the symbol error probability for the four corner points is given by where and .the distributions of the gaussian noise and the primary user s interference signal plus noise are given in ( [ eq : gaussian_noise ] ) and ( [ eq : gaussian_mix_conv ] ) , respectively .after evaluating the integrals , the above expression becomes where is the gaussian -function . for the points on the sides , except the corner points ,the symbol error probability is where .after performing the integrations , we can express as finally , the symbol error probability for inner points is obtained from which can be evaluated to obtain overall , we can express in ( [ eq : pe_mpam_h ] ) by combining , and as follows after rearranging the terms , the final expression for the average symbol error probability is given by ( [ eq : pe_h_qam ] ) shown at the top of next page .\\ + \pr\{{\mathcal{h}}_1|{\hat{\mathcal{h}}}_k\}\bigg[2\bigg(2-\frac{1}{m_i}-\frac{1}{m_q}\bigg)\sum_{l=1}^p\lambda_l q\bigg(\sqrt{\frac{d_{min , k}^2|h|^2}{4(\sigma_{l}^2+\sigma_{n}^2 ) } } \bigg)-4\bigg(1-\frac{1}{m_i}\bigg)\bigg(1-\frac{1}{m_q}\bigg)\sum_{l=1}^p\lambda_l q^2\bigg(\sqrt{\frac{d_{min , k}^2|h|^2}{4(\sigma_{l}^2+\sigma_{n}^2 ) } } \bigg)\bigg]\bigg\}. \end{split}\ ] ] ' '' '' this expression can be specialized to square qam signaling by setting .we observe above that while the optimal decision rule does not depend on the sensing decisions , the error rates are functions of detection and false alarm probabilities .note also that the expressions above are for a given value of fading .the unconditional symbol error probability averaged over fading can be evaluated from in the special case of a rayleigh fading model for which the fading power has an exponential distribution with unit mean , i.e. 
, , the above integral can be evaluated by adopting the same approach as in and using the indefinite integral form of the gaussian -function and square of the gaussian -function , given , respectively , by the resulting unconditional average symbol error probability over rayleigh fading is given by ( [ eq : pe_sss_without_power_constraints ] ) at the top of next page \\ & + \pr\{{\mathcal{h}}_1|{\hat{\mathcal{h}}}_k\}\bigg[\bigg(2-\frac{1}{m_i}-\frac{1}{m_q}\bigg)\sum_{l=1}^p\lambda_l\bigg(1-\frac{1}{\beta_{1,k}}\bigg)-2\bigg(1-\frac{1}{m_i}\bigg)\bigg(1-\frac{1}{m_q}\bigg)\sum_{l=1}^p\lambda_l\bigg(\frac{2}{\pi}\frac{1}{\beta_{1,k}}\tan^{-1}\left(\frac{1}{\beta_{1,k}}\right)-\frac{1}{\beta_{1,k}}+\frac{1}{2}\bigg)\bigg]\bigg\}. \end{split}\ ] ] ' '' '' where and for .the average symbol error probability for rectangular qam signaling in the presence of gaussian - distributed can readily be obtained by letting in ( [ eq : pe_sss_without_power_constraints ] ) .although the sep expression in ( [ eq : pe_sss_without_power_constraints ] ) seems complicated , it is in fact very simple to evaluate .furthermore , this can be upper bounded as this upper bound follows by removing the negative terms that include on the right - hand side of ( [ eq : pe_h_qam ] ) and then integrating with respect to fading distribution .note also that the upper bound in ( [ eq : pe_sss_tight ] ) with is the exact symbol error probability for pam modulation .note further that the sep expression in ( [ eq : pe_sss_without_power_constraints ] ) is a function of the transmission powers and . the optimal choice of the power levels under peak power and average interference constraints given in ( [ eq : peak_power_constraint ] ) and ( [ eq : interference_power_constraint_sss ] ) and the resulting error rates can be determined by solving as discussed in section [ subsec : power - interference ] , if the fading coefficient between the secondary transmitter and the primary receiver is known and peak interference constraints are imposed , then the maximum transmission power is given by after inserting these and into the upper bound in ( [ eq : pe_sss_tight ] ) and evaluating the expectation over the fading coefficient , we obtain where and denotes the upper bound in ( [ eq : pe_sss_tight ] ) . if is exponentially distributed with unit mean , then by using the identity in ( * ? ?3.362.2 ) , we can evaluate the second integral on the right - hand side of ( [ eq : pe_sss_tight - avg ] ) and express the upper bound as in ( [ eq : pe_sss_min ] ) given on the next page \\&+\pr\{{\mathcal{h}}_1|{\hat{\mathcal{h}}}_k\}\sum_{l=1}^p\lambda_l\bigg[{{\mathrm{e}}}^{b_1}-2\sqrt{\gamma_{1}\pi}{{\mathrm{e}}}^{\gamma_{1}}q\big(\sqrt{2}(b_1+\gamma_{1})\big)\bigg]\bigg\}\\ \end{split}\ ] ] ' '' '' where , .it should be noted that we can easily obtain the _ exact _ symbol error probability expressions for pam modulation by replacing and in ( [ eq : pe_h_qam ] ) , ( [ eq : pe_sss_without_power_constraints ] ) , ( [ eq : pe_sss_min ] ) . in the osa scheme ,secondary users are not allowed to transmit when the primary user activity is sensed in the channel .therefore , we only consider error patterns under given in ( [ eq : sep1 ] ) , ( [ eq : sep2 ] ) , ( [ eq : sep3 ] ) . 
hence , following the same approach adopted in section [ subsubsec : error_rate_sss ] , the average symbol error probability under the osa scheme is obtained as in ( [ eq : pe_osa_without_power_constraints ] ) given on the next page .\\ & + \pr\{{\mathcal{h}}_1|{\hat{\mathcal{h}}}_0\}\bigg[\bigg(2-\frac{1}{m_i}-\frac{1}{m_q}\bigg)\sum_{l=1}^p\lambda_l\bigg(1-\frac{1}{\beta_{1,0}}\bigg)-2\bigg(1-\frac{1}{m_i}\bigg)\bigg(1-\frac{1}{m_q}\bigg)\sum_{l=1}^p\lambda_l\bigg(\frac{2}{\pi}\frac{1}{\beta_{1,0}}\tan^{-1}\left(\frac{1}{\beta_{1,0}}\right)-\frac{1}{\beta_{1,0}}+\frac{1}{2}\bigg)\bigg ] .\end{split}\ ] ] ' '' '' similarly , the sep upper bound becomes note that under average interference constraints , the maximum allowed transmission power in an idle - sensed channel is given by on the other hand , if the peak interference power constraint is imposed , the maximum allowed transmission power is after inserting this into ( [ eq : pe_osa_tight ] ) , assuming again that is exponentially distributed with unit mean , and evaluating the integration in a similar fashion as in section [ subsubsec : error_rate_sss ] , an upper bound on the average symbol error probability can be obtained as in ( [ eq : pe_osa_min ] ) on the next page where and .\\&+\pr\{{\mathcal{h}}_1|{\hat{\mathcal{h}}}_0\}\sum_{l=1}^p\lambda_l\bigg[{{\mathrm{e}}}^{\frac{q_{{\rm{pk}}}}{p_{{\rm{pk}}}}}-2\sqrt{\psi_{1}\pi}{{\mathrm{e}}}^{\psi_{1}}q\bigg(\sqrt{2}\bigg(\frac{q_{{\rm{pk}}}}{p_{{\rm{pk}}}}+\psi_{1}\bigg)\bigg)\bigg]\bigg\ } \end{split}\ ] ] ' '' ''in this section , we present numerical results to demonstrate the error performance of a cognitive radio system in the presence of channel sensing uncertainty for both sss and osa schemes .more specifically , we numerically investigate the impact of sensing performance ( e.g. , detection and false - alarm probabilities ) , different levels of peak transmission power and average and peak interference constraints on cognitive transmissions in terms of symbol error probability .theoretical results are validated through monte carlo simulations . unless mentioned explicitly ,the following parameters are employed in the numerical computations .it is assumed that the variance of the background noise is . when the primary user signal is assumed to be gaussian , its variance , is set to 0.5 . on the other hand ,in the case of primary user s received signal distributed according to the gaussian mixture model , we assume that , i.e. , there are four components in the mixture , for all , and the variance is still . also , the primary user is active over the channel with a probability of , hence and . finally , we consider a rayleigh fading channel between the secondary users with fading power pdf given by for , and also assume that the fading power in the channel between the secondary transmitter and primary receiver is exponentially distributed with . 0.45 0.45 0.45 we initially consider peak transmit and average interference constraints as given in ( [ eq : peak_power_constraint ] ) and ( [ eq : interference_power_constraint_sss ] ) , respectively . in the following numerical results , for the sss scheme , we plot the error probabilities and optimal transmission power levels obtained by solving ( [ eq : sss_expectation_pe ] ) . in the case of osa , we plot the average error probability expressed in ( [ eq : pe_osa_without_power_constraints ] ) with maximum allowed power given in ( [ eq : maxpower - averageinterference - osa ] ) . 
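as a companion to the analytical expressions, the sketch below shows one way such a monte carlo validation could be set up for 4 - qam under the osa scheme: sensing with errors, transmission only in idle - sensed slots at the power allowed by the combined peak and average interference constraints, rayleigh fading, gaussian mixture interference when the channel is truly busy, and midpoint threshold detection. all parameter values are placeholders, since the exact settings above were partly lost in extraction, and the combined power formula is the plausible reading used earlier.

```python
import numpy as np

rng = np.random.default_rng(1)

# ---- illustrative parameters (placeholders, not the paper's exact values) ----
n_sym      = 200_000
prior_busy = 0.4                 # P(H1)
p_d, p_f   = 0.9, 0.1            # detection / false-alarm probabilities
sigma_n2   = 0.1                 # background noise variance
lam        = np.array([0.25, 0.25, 0.25, 0.25])   # mixture weights
sig2       = np.array([0.2, 0.4, 0.6, 0.8])       # mixture component variances
P_pk, Q_avg, mean_g2 = 10.0, 2.0, 1.0

P0    = min(P_pk, Q_avg / ((1 - p_d) * mean_g2))  # OSA idle-sensed power (assumed form)
d_min = np.sqrt(2.0 * P0)                         # 4-QAM with average symbol energy P0

def cgauss(var, size):
    return rng.normal(0, np.sqrt(var / 2), size) + 1j * rng.normal(0, np.sqrt(var / 2), size)

# ---- true state, sensing, transmission (OSA: transmit only when sensed idle) ----
busy        = rng.random(n_sym) < prior_busy
sensed_busy = np.where(busy, rng.random(n_sym) < p_d, rng.random(n_sym) < p_f)
tx          = ~sensed_busy

bits = rng.integers(0, 2, (n_sym, 2))
x    = (d_min / 2) * ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1))   # 4-QAM symbols

h     = cgauss(1.0, n_sym)                         # Rayleigh fading, unit mean power
noise = cgauss(sigma_n2, n_sym)
comp  = rng.choice(len(lam), size=n_sym, p=lam)
s_p   = cgauss(1.0, n_sym) * np.sqrt(sig2[comp])   # primary interference (Gaussian mixture)
y     = h * x + noise + np.where(busy, s_p, 0)     # interference only if truly busy

# ---- midpoint-threshold detection on derotated, rescaled samples ----
z     = y * np.exp(-1j * np.angle(h)) / np.abs(h)
x_hat = (d_min / 2) * (np.sign(z.real) + 1j * np.sign(z.imag))

errors = (x_hat != x) & tx                         # count only transmitted (idle-sensed) slots
print("simulated OSA SEP for 4-QAM:", errors.sum() / tx.sum())
```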
in fig .[ fig : sep_p0_p1_qavg_sss ] , we display the average symbol error probability ( ) and optimal transmission powers and as a function of the average interference constraint , , in the sss scheme . and set to and , respectively .the peak transmission power is db . we assume that the secondary users employ 2-pam , 4-qam , 8-pam and -qam modulation schemes for data transmission .we have considered both gaussian and gaussian - mixture distributed .in addition to the analytical results obtained by using ( [ eq : pe_sss_without_power_constraints ] ) and solving ( [ eq : sss_expectation_pe ] ) , we performed monte carlo simulations to determine the sep .we notice in the figure that analytical and simulation results agree perfectly .additionally , it is seen that for all modulation types , error rate performance of secondary users improves as average interference constraint becomes looser ( i.e. , as increases ) , allowing transmission power levels and to become higher as illustrated in the lower subfigures .saturation seen in the plot of is due to the peak constraint .other observations are as follows . as the modulation size increases , increases as expected .it is also interesting to note that lower is attained in the presence of gaussian - mixture distributed when compared with the performance when has a pure gaussian density with the same variance . 0.45 0.45 in fig .[ fig : sep_p0_qavg_osa ] , average and transmission power are plotted as a function of the average interference constraint , , for the osa scheme .we again set db , and , and consider 2-pam , 4-qam , 8-pam and -qam schemes .it is observed from the figure that as increases , error probabilities initially decrease and then remain constant due to the fact that the secondary users can initially afford to transmit with higher transmission power as the interference constraint becomes less strict , but then get limited by the peak transmission power constraint and send data at the fixed power level of .again , we observe that lower error probabilities are attained when the primary user s received signal follows a gaussian mixture distribution . 0.45 0.45 in fig .[ fig : sep_pd_avg ] , the average of 4-qam ( in the upper subfigure ) and 8-pam signaling ( in the lower subfigure ) are plotted as a function of the detection probability . is set to .we consider both sss and osa schemes .it is assumed that db and db . we observe that for both modulation types in both sss and osa schemes decreases as increases .hence , performance improves with more reliable sensing . in this case, the primary reason is that more reliable detection enables the secondary users transmit with higher power in an idle - sensed channel .for instance , if , then the transmission power is only limited by in both sss and osa . in the figure, we also notice that lower is achieved in the osa scheme , when compared with the sss scheme , due to the fact that osa avoids transmission over a busy - channel in which interference from the primary user s transmission results in a more noisy channel and consequently higher error rates are experienced . at the same time, it is important to note that not transmitting in a busy - sensed channel as in osa potentially reduces data rates . 0.45 0.4 in fig .[ fig : sep_pf_avg ] , the average of 4-qam and 8-pam signaling are plotted as a function of the false alarm probability for both sss and osa . it is assumed that . 
it is further assumed that db and db , again corresponding to the case in which average interference power constraint is dominant compared to the peak transmit power constraint . in both schemes, increases with increasing false alarm probability .hence , degradation in sensing reliability leads to performance loss in terms of error probabilities . in osa ,the transmission power does not depend on and hence is fixed in the figure .the increase in the error rates can be attributed to the fact that secondary users more frequently experience interference from primary user s transmissions due to sensing uncertainty .for instance , in the extreme case in which , the probability terms in ( [ eq : pe_osa_without_power_constraints ] ) become and , indicating that although the channel is sensed as idle , it is actually busy with probability one and the additive disturbance in osa transmissions always includes . in the sss scheme ,higher leads to more frequent transmissions with power which is generally smaller than in order to limit the interference on the primary users .transmission with smaller power expectedly increases the error probabilities . on the other hand, we interestingly note that as approaches 1 , becomes higher than when ( [ eq : sss_expectation_pe ] ) is solved , resulting in a slight decrease in when exceeds around 0.9 .we now address the peak interference constraints by assuming that the transmission powers are limited as in ( [ eq : instantaneous - power - g ] ) . in this section ,analytical error probability curves are plotted using the upper bounds in ( [ eq : pe_sss_min ] ) in the case of sss and in ( [ eq : pe_osa_min ] ) in the case of osa since we only have closed - form expressions for the error probability upper bounds when we need to evaluate an additional expectation with respect to .note that these upper bounds provide exact error probabilities when pam is considered . additionally , the discrepancy in qam is generally small as demonstrated through comparisons with monte carlo simulations which provide the exact error rates in the figures . 0.45 0.45 in fig .[ fig : sep_ppk_sss ] , we plot the average as a function of the peak transmission power , , for the sss scheme in the presence of gaussian distributed and gaussian - mixture distributed primary user s received faded signal in the upper and lower subfigures , respectively .the secondary users again employ 2-pam , 4-qam , 8-pam and -qam schemes . the peak interference power constraint , is set to 4 db .it is seen that monte carlo simulations match with the analytical results for pam and are slightly lower than the analytical upper bounds for qam .as expected , the average initially decreases with increasing and a higher modulation size leads to higher error rates at the same transmission power level .we again observe that lower error rates are experienced when has a gaussian mixture distribution rather than a gaussian distribution with the same variance .it is also seen that as increases , the curves in all cases approach some error floor at which point interference constraints become the limiting factor . another interesting observation is the following . in fig .[ fig : sep_ppk_sss ] , are plotted for two different pairs of detection and false alarm probabilities . in the first scenario ,channel sensing is perfect , i.e. , and .in the second scenario , we have and . 
in both scenarios ,we observe the same error rate performance .this is because the same transmission power is used regardless of whether the channel is detected as idle or busy , i.e. , for both .the interference constraints are very strict as noted in section [ subsec : power - interference ] .hence , averaging over channel sensing decisions becomes averaging over the prior probabilities of channel occupancy , which does not depend on the probabilities of detection and false alarm . indeed ,spectrum sensing can be altogether omitted under these constraints . 0.45 0.45 in fig .[ fig : sep_ppk_osa ] , we plot the average as a function of for the osa scheme . as before , 2-pam , 4-qam , 8-pam and -qam are considered .imperfect sensing with and is considered in the upper subfigure whereas perfect sensing ( i.e. , and ) is assumed in the lower subfigure . in both subfigures, it is seen that increasing initially reduces which then hits an error floor as the interference constraints start to dominate .it is also observed that perfect channel sensing improves the error rate performance of cognitive users .note that if sensing is perfect , secondary users transmit only if the channel is actually idle and experience only the background noise . on the other hand , under imperfectsensing , secondary users transmit in miss - detection scenarios as well , in which they are affected by both the background noise and primary user interference , leading to higher error rates .cognitive radio transmission impaired by gaussian mixture distributed again results in lower compared to gaussian distributed .but , of course , this distinction disappears with perfect sensing in the lower subfigure since the secondary users experience only the gaussian background noise as noted above . finally , the gap between the analytical and simulation results for qam is due to the use of upper bounds in the analytical error curves as discussed before . 0.45 0.45 in fig .[ fig : sep_pd_qpk ] , we display the average of 4-qam and 8-pam signaling as a function of the detection probability . is set to . both sss and osa schemes are considered . here , we also assume that db , db . it is seen that error rate performances for sss scheme for both modulation types do not depend on detection probability because of the same reasoning explained in the discussion of fig .[ fig : sep_ppk_sss ] . on the other hand, the error rate performance for the osa scheme improves with increasing detection probability since the secondary user experiences less interference from the primary user activity .it is also seen that osa scheme outperforms sss scheme .0.45 0.45 in fig .[ fig : sep_pf_qpk ] , we analyze the average of 4-qam and 8-pam signaling as a function of the false alarm probability .detection probability is .similarly as before , db and db . again , error rate performance does not depend on in the sss scheme .it is observed that in osa scheme increases with increasing false alarm probability .hence , degradation in the sensing performance in terms of increased false alarm probabilities leads to degradation in the error performance . 
as discussed in section [ subsec : averageinterference ] regarding the error rates in fig .[ fig : sep_pf_avg ] , deterioration in the performance is due to more frequent exposure to interference from primary user s transmissions in the form of .one additional remark from the figure is that sss scheme gives better error rate performance compared to osa scheme at higher values of .we have studied the error rate performance of cognitive radio transmissions in both sss and osa schemes in the presence of transmit and interference power constraints , sensing uncertainty , and gaussian mixture distributed interference from primary user transmissions . in this setting , we have proved that the midpoints between the signals are optimal thresholds for the detection of equiprobable rectangular qam signals .we have first obtained exact expressions for given fading realizations and then derived closed - form average sep expressions for the rayleigh fading channel .we have further provided upper bounds on the error probabilities averaged over the fading between the secondary transmitter and primary receiver under the peak interference constraint .the analytical expressions have been validated through monte - carlo simulations . in the numerical results, we have had several interesting observations .we have seen that , when compared to sss , lower error rates are generally attained in the osa scheme .also , better error performance is achieved in the presence of gaussian - mixture distributed in comparison to that achieved when is gaussian distributed with the same variance .we have also addressed how the error rates and transmission powers vary as a function of the power and interference constraints .finally , we have demonstrated that symbol error probabilities are in general dependent on sensing performance through the detection and false alarm probabilities .for instance , we have observed that as the detection probability increases , the error rate performance under both schemes improves in interference - limited environments .similarly , sep is shown to decrease with decreasing false - alarm probability .hence , we conclude that sensing performance is tightly linked to error performance and improved sensing leads to lower error rates . since the signals are _equiprobable _ , the maximum likelihood ( ml ) decision rule is optimal in the sense that it minimizes the average probability of error .since cognitive transmission is allowed only when the channel is sensed as idle under osa scheme , it is enough to evaluate the ml decision rule under sensing decision , which can be expressed as the above decision rule can further be expressed as above maximization simply becomes the comparison of the likelihood functions of the received signals given the transmitted signals . without loss of generality , we consider the signal constellation point .then , the decision region for the in - phase component of this signal constellation point is given by ( [ eq : comparision_ml1 ] ) .the right - side boundary of the corresponding decision region , which can be found by equating the likelihood functions in ( [ eq : comparision_ml1])([eq : comparision_ml2 ] ) shown at the top of next page . 
gathering the common terms together, the expression in ( [ eq : comparision_ml1 ] ) can further be written as in ( [ eq : comparision_ml3 ] ) given on the next page .we note that all the terms on the left - hand side of ( [ eq : comparision_ml3 ] ) other than the terms inside the parentheses are nonnegative .let us now consider these difference terms .inside the first set of parentheses , we have which can easily be seen to be greater than zero if and is zero if .the same is also true for the term inside the second set of parentheses given by therefore , the inequality in ( [ eq : comparision_ml3 ] ) can be reduced to similarly , ( [ eq : comparision_ml2 ] ) simplifies to from these observations , we immediately conclude that the decision rule to detect involves comparing with thresholds located at midpoints between the received neighboring signals . following the same approach, we can determine the decision region for the quadrature component of the signal constellation point by comparing the likelihood functions in ( [ eq : comparision_ml1_im ] ) ( [ eq : comparision_ml2_im ] ) shown on the next page , under the sss scheme , the secondary users are allowed to transmit in busy - sensed channel ( i.e. , under sensing decision ) as well . since the simplified decision rules in ( [ eq : comparision_ml3-simplified ] ) and ( [ eq : comparision_ml2-simplified ] ) do not depend on the sensing decision , the same set of inequalities are obtained for the ml detection rule under sensing decision , leading to the same conclusion regarding the decision rule and thresholds . p. j. smith , p. a. dmochowski , h. a. suraweera , and m. shafi , the effects of limited channel knowledge on cognitive radio system capacity , " _ ieee trans .927 - 933 , feb . 2013 .x. kang , y .- c .liang , a. nallanathan , h. k. garg , and r. zhang , optimal power allocation for fading channels in cognitive radio networks : ergodic capacity and outage capacity , " _ ieee trans .wireless commun ._ , vol . 8 , no .940 - 950 , feb . 2009 .r. sarvendranath and n. b. mehta , antenna selection in interference - constrained underlay cognitive radios : sep - optimal rule and performance benchmarking , " _ ieee trans .496 - 506 , feb . 2013 .suraweera , j. p. smith , and m. shafi , capacity limits and performance analysis of cognitive radio with imperfect channel knowledge , " _ ieee trans .4 , pp . 1811 - 1822 , may 2010 .i. f. akyildiz , w .- y .lee , m. c. vuran , and s. mohanty , next generation / dynamic spectrum access / cognitive radio wireless networks : a survey , " _ comput .13 , pp . 2127 - 2159 , sep . 2006 .alouini and a. j. goldsmith , a unified approach for calculating the error rates of linearly modulated signals over generalized fading channels , " in _ ieee trans .1324 - 1334 , sep . 1999j. w. craig , a new , simple , and exact result for calculating the probability of error for two - dimensional signal constellations , " in _ proc .ieee military communications conf .( milcom91 ) _ , mclean , va , pp .571 - 575 , oct . 1991 .
this paper studies the symbol error rate performance of cognitive radio transmissions in the presence of imperfect sensing decisions . two different transmission schemes , namely sensing - based spectrum sharing ( sss ) and opportunistic spectrum access ( osa ) , are considered . in both schemes , secondary users first perform channel sensing , albeit with possible errors . in sss , depending on the sensing decisions , they adapt the transmission power level and coexist with primary users in the channel . on the other hand , in osa , secondary users are allowed to transmit only when the primary user activity is not detected . initially , for both transmission schemes , general formulations for the optimal decision rule and error probabilities are provided for arbitrary modulation schemes under the assumptions that the receiver is equipped with the sensing decision and perfect knowledge of the channel fading , and the primary user s received faded signals at the secondary receiver has a gaussian mixture distribution . subsequently , the general approach is specialized to rectangular quadrature amplitude modulation ( qam ) . more specifically , the optimal decision rule is characterized for rectangular qam , and closed - form expressions for the average symbol error probability attained with the optimal detector are derived under both transmit power and interference constraints . the effects of imperfect channel sensing decisions , interference from the primary user and its gaussian mixture model , and the transmit power and interference constraints on the error rate performance of cognitive transmissions are analyzed . cognitive radio , channel sensing , fading channel , gaussian mixture noise , interference power constraint , pam , probability of detection , probability of false alarm , qam , symbol error probability .
a theorem of shannon basic to all information theory describes the optimum compression of a discrete memoryless source , showing that the minimum achievable rate is the entropy of the source distribution .the situation is the following : let be a probability distribution on the finite set .we call an _ _ for the _ discrete memoryless source _ , if are stochastic maps , with a finite set , such that where denoting the minimal such that an code exists , by , shannon shows that for with the entropy of the distribution . motivated by the work , and by a construction in ( in footnote 4 ) , we study here the following modification of this problem : to each is associated a probability distribution on the finite set ( thus is a stochastic map , or channel , form to ) .an is now a pair of stochastic maps ( compare with eq .( [ eq : det : code ] ) ) , and instead of condition ( [ eq : det : condition ] ) we impose where is the on function on : .note that for two probability distributions and , equals their _ total variational distance _ of the two .we define to be the minimal of an .note that for , and the point mass in , the new notion of coincides with the previous one . notice further , that we allow probabilistic choices in the encoding and decoding . while it is easy to see that this freedom does not help in shannon s problem , it is crucial for the more general form , that we will study in this paper .the basic problem of course is to find the optimum rate of compression ( if the limit exists ; otherwise is to be considered ) , and especially the behaviour of this function at . for the case , i.e. perfect restitution of the distributions , these definitions in principle make sense , but we do nt expect a neat theory to emerge .instead we define the minimal entropy of the distribution on induced by the encoder ( with the idea that blocks of these we may data compress to this rate ) .obviously , so the limit exists , and is equal to the infimum of the sequence . to evaluate this quantity is another problem we would like to solve .the structure of this paper is as follows : first we find lower bounds ( section [ sec : lower ] ) , then discuss upper bounds , preferrably by constructing codes : in section [ sec : cr : trick ] we show how the lower bound is approached by using the additional resource of common randomness , in section [ sec : local : fid ] we prove achievability of it under a letterwise fidelity criterion as a consequence of this result , section [ sec : howards ] presents a constructions to upper bound and . 
in section [ sec : applications ] applications of the results and conjectures are presented : first , we make it plausible that the distillation procedure of is asymptotically reversible , second we show that shannon s coding theorem allows an `` inverse '' ( at least in situations where unlimited common randomness is around ) , third we give a simple proof that feedback does not increase the rate of a discrete memoryless channel , and fourth demonstrate , how shannon s rate distortion theorem follows as a corollary .the compression result ( with or without common randomness ) thus reveals a great unifying power in classical information theory .finally , in section [ sec : quantum ] we discuss extensions of our results to the case of a source of mixed quantum states : the present discussion fits into this models as probability distributions are just commuting mixed state density operators .let us mention here the previous work on the problem : the major initiating works are and .the latter introduced the distinction between blind and visible coding , and between the block and letterwise fidelity criterion .in contrast to the pure state case the four possible combinations of these conditions seem to lead to rather different answers . the case of blind coding with either the letter or blockwise fidelity criterion was solved recently by koashi and imoto . otherwise in this paper, we will only address the visible case .an attempt on the letterwise fidelity case with either blind or visible encoding was made in .however , an examination of the approach of this work shows that it does not fit into any of the the classes of fidelity criteria proposed by : for a code one could either apply the _ global _ criterion , which is essentially our eq .( [ eq : condition ] ) , that is definitely not what is considered in , there being employed rate distortion theory .or one could impose that the output is good on the average letterwise ( the _ local _ criterion of ) : \leq \lambda,\ ] ] where denotes the marginal distribution of on the factor in , and is any distance measure on probability distributions ( that we require only to be convex in the second variable ) . for is implied by eq .( [ eq : condition ] ) .this , too , is not met in , as there and are constructed as deterministic maps , while to satisfy eq .( [ eq : l : condition ] ) one needs at least a small amount of randomness . to achieve thisone could base the fidelity condition on looking at individual letter positions of _ source and output simultaneously _ : \! \leq \lambda.\ ] ] condition ( [ eq :l : condition ] ) being weaker than ( [ eq : condition ] ) , this one is still weaker .however , this , too , does not coincide with the criterion of : denoting by the joint distribution of and according to and , i.e. , one considers ( this is implied by eq . 
( 1 ) of for , which in turn is implied by eq .( [ eq : ll : condition ] ) for ) .it is not at all clear how to connect this with any of the above : eq .( [ eq : ks : condition ] ) is about the _ empirical joint distribution _ of letters in and ( assume for simplicity , as indeed the authors of do , that and are deterministic ) , that is about a distribution created by selecting a position randomly , while eqs .( [ eq : condition ] ) to ( [ eq : ll : condition ] ) are about distributions created either by the coding process alone or in conjunction with the source .our view is confirmed in an independent recent analysis of by soljanin , to the same effect .an interesting new twist was added when in ( and later in a more extended way in and the recent ) the use of unlimited common randomness between the sender and receiver was allowed in the visible coding model with blockwise fidelity criterion . as already mentioned , we reproduce this result here in detail , with special attention to the resource of common randomness : we present a protocol for which we prove that it has minimum common randomness consumption in the class of protocols which even simulate full passive feedback of the received signal to the sender .let the random variable be distributed according to . then we can define by by ( [ eq : condition ] ) we have the markov chain using data processing inequality as follows : with for . to be precise, one may choose ( for ) employing the following well known result with eq .( [ eq : condition ] ) .[ lemma : h : cont ] let and be probability distributions on a set with finite cardinality , such that . then see e.g. .thus we arrive at [ satz : lower ] for any and : where is the mutual information of the channel between the input distribution and the output distribution . by using slightly stronger estimates , we even get [ satz : strong : lower ] for every let be an optimal . from eq .( [ eq : condition ] ) we find ( by a markov inequality argument ) that denote the intersection of this set with the typical sequences ( see eq .( [ eq : typical ] ) below ) by , with . then and there exists an code for the channel with , see ( the case of a classical quantum channel was done in ) . 
by construction, this is a for the channel . we now want to view as belonging to the message encoder, and as belonging to the message decoder, the resulting code being one for the identical channel on . let us denote the concatenation of the map with the channel decoder by . on the other hand, we may replace by a deterministic map , because randomization at the encoder never decreases error probabilities: still is an . it is now obvious that for every , hence and we are done. it might be a bit daring to formulate conjectures at this point, so we content ourselves with posing the following questions: [ quest : main ] is it true that for all in fact, we would like to present a slightly stronger statement: _question [ quest : main]_ : for every , , , and large enough , does there exist a with and with the additional property that here is the set of _typical sequences_ : where counts the number of occurrences of in , and . observe that by chebyshev's inequality ; in fact, by employing the chernoff bound we even obtain . with these bounds it is easily seen that a positive answer to the latter question implies the same for the former. but also conversely, it is not difficult to show that a "yes" to question [ quest : main ] implies a "yes" to question [ quest : main]. the following construction is a generalization and refinement of the one by bennett et al. ( footnote 4 ), found independently by dür, vidal, and cirac. the idea there is to use common randomness between the sender and the receiver of the encoded messages. formally this means that and also depend on a common random variable , uniformly distributed and independent of all others. note that this has a nice expression when viewing and as map-valued random variables: here we allow dependence ( via ) between and , while in the initial definition, eq. ( [ eq : code ] ), and are independent ( as random variables ). it seems that the power of allowing the use of common randomness can be understood from this point of view: it is a "convexification" of the theory with deterministic or independent encoders and decoders. it is easy to see that the lower bound of theorem [ satz : lower ] still applies here. we only have to modify the derivation a little bit: with a slight variant of . we shall apply an explicit large deviation estimate for sampling probability distributions from ( extended to density operators in ), which we state separately without proof: [ lemma : large : deviation ] let be independent identically distributed ( i.i.d. ) random variables with values in the function algebra on the finite set , which are bounded between and , the constant function with value . assume that the average . then the probability that the sample average falls outside the interval $[(1-\eta)\sigma;(1+\eta)\sigma]$ is bounded as $$\Pr\left\{\frac{1}{m}\sum_{i=1}^{m} x_i \notin [(1-\eta)\sigma;(1+\eta)\sigma]\right\} \leq 2|\mathcal{K}|\exp\left(-m\frac{\eta^2 s}{2\ln 2}\right).$$ before we prove our main theorem, we need three lemmas on exact types and conditional types. the first is a simple yet crucial observation: [ lemma : types : blurb ] let be a channel from to , a p.d. on , the induced distribution on and the transpose channel from to . let , be exact types of , , respectively, that are marginals of a joint exact type of . consider the uniform distribution on on , which has the property , and the channel from to , where is the set of _conditional exact typical sequences_ of . then the induced distribution on is the uniform distribution, i.e.
and the transpose channel to is indeed , defined by with . straightforward. [ lemma : cardinalities ] there is an absolute constant such that for all distributions on , , channels and for , consider a joint on with marginals on and of . then, introducing the channel with : see . the third contains the central insight for our construction: [ lemma : covering ] with the hypotheses and notation of lemma [ lemma : types : blurb ] there exist families , , from such that for all , condition (i) holds, and condition (ii) holds for all and that satisfy . introduce i.i.d. random variables , distributed on according to ( i.e. uniformly ). then for all : hence lemma [ lemma : large : deviation ] applies and we find . by choosing and according to the lemma we enforce that the sum of these probabilities is less than , hence there are actual values of the such that all of ( i ) and ( ii ) are satisfied. with this we are ready to prove: [ satz : sim : feedback : channel ] there exists an with and common randomness consumption . in fact, not only is the condition ( [ eq : condition ] ) satisfied but even the stronger . suppose is seen at the source, and that its type is . for each joint of we assume that families as described in lemma [ lemma : covering ] are fixed throughout. then the protocol the sender follows is:
1. choose a joint type on with probability and send it. note that can be written , with the marginal on and a channel .
2. if is not typical or is not jointly typical, then terminate.
3. use the common randomness to choose uniformly.
4. choose according to and send it.
the receiver chooses , using the common randomness sample . let us first check that this procedure works correctly: for typical we can calculate the distribution of conditional on the event that their joint type is : this is then a distribution on , and we assume to be typical. with the "big-o" notation: signifies any function whose modulus is bounded by . here we have used the definition of the protocol, then lemma [ lemma : types : blurb ] ( for the definition of and the fact that does not depend on ), then lemma [ lemma : covering ]. so, the induced distribution is, up to a factor between and , equal to the correct output distribution. now averaging over the typical gives eq. ( [ eq : individual : condition ] ). what is the communication cost? sending is asymptotically for free, as the number of joint types is bounded by the polynomial . sending costs bits, with bounded according to lemma [ lemma : covering ]. that is, ; on the other hand , and we are done. [ rem : chernoff : better ] in the above statement of theorem [ satz : sim : feedback : channel ] we assumed to be a constant, absorbed into the " " in the code length estimate. using the chernoff estimate ( [ eq : typical : prob : exp ] ) on the probabilities of typical sets in the above proof in fact shows the existence of an satisfying ( [ eq : individual : condition ] ). in the line of , the interpretation of this result is that, investing common randomness at rate , one can simulate the noisy channel by a noiseless one of rate , when sending only words. considering the construction again, we observe that in fact it provides not only a simulation of the channel, but additionally of the _noiseless passive feedback_,
simply because the sender can read off from his random choices the obtained by the receiver, too. this observation is the key to showing that our above construction is optimal under the hypothesis that the channel _with noiseless passive feedback_ is simulated: in fact, since both sender and receiver can observe the very output sequence of the channel, which has entropy , they are able to generate common randomness at this rate. since communication was only at rate , the difference must be invested in prepared common randomness: otherwise we would get more of it out of the system than we could have possibly invested. formally this insight is captured by the following result: [ satz : feedback : channel : opt ] if the decoder of a with common randomness consumption ( with distribution ) depends deterministically on and ( which is precisely the condition that the encoder can recover the receiver's output ), then . for the first inequality, introduce the channels , and their induced distributions on and transpose channels with respect to , i.e. . then we can rewrite eq. ( [ eq : condition ] ) as . this inequality obviously remains valid if we restrict the sum to and replace and by their restrictions to : and , respectively. on the other hand, choosing , we have , which yields , hence there exists at least one such that . note that , as functions on , so, when we introduce the support of the left hand side, we arrive at , from which our claim follows by a standard trick: let , with . then , and using the fact that this implies . now only note that ( since is deterministic ) and by we are done. now for the second inequality: from the definition we get, by summing over , . because the are all deterministic, the distributions are all supported on sets of cardinality . hence the support of can be estimated . on the other hand, we deduce , which, by the same standard trick as before, yields our estimate: with , the set satisfies , but since for all we can conclude . collecting these results we can state [ cor : feedback : channel : opt ] for any simulation of the channel together with its noiseless passive feedback with error , at rate and common randomness consumption rate : . conversely, these rates are also achievable. a simulation of the channel must be within the error bound for _every_ input, hence eq. ( [ eq : condition ] ) will be satisfied for every distribution . the lower bounds now follow from theorem [ satz : feedback : channel : opt ] by choosing to maximize and , respectively. to achieve this, the encoder, on seeing , reports its type to the receiver ( asymptotically free ) and then they use the protocol of theorem [ satz : sim : feedback : channel ] for , the empirical distribution of . possibly they have to use the channel at rate to set up additional common randomness beyond the given .
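to make the rate bookkeeping in the corollary concrete, here is a small numerical illustration ( our own toy example, not from the paper ): for a fixed input distribution, the communication rate of such a simulation is governed by the mutual information of the channel, while the total randomness visible at the output is its output entropy; the gap between the two is what has to be covered by common randomness. a sketch assuming numpy is available:

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(P, W):
    """I(X;Y) in bits for input distribution P (length m) and a channel W
    given as an m x k matrix whose row x is W(.|x)."""
    PY = P @ W                                            # output distribution
    H_Y_given_X = sum(P[x] * entropy(W[x]) for x in range(len(P)))
    return entropy(PY) - H_Y_given_X

# binary symmetric channel with crossover probability 0.1, uniform input
P = np.array([0.5, 0.5])
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
I = mutual_information(P, W)   # ~0.531 bits: the communication rate
H_out = entropy(P @ W)         # 1.0 bit: entropy of the simulated output sequence, per letter
print(I, H_out, H_out - I)     # ~0.469 bits per letter left to be covered by common randomness
```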
at this point we would like to point out a remarkable parallel of methods and results to the work : our use of lemma [ lemma : covering ] is the classical case of the use of its quantum version from , and the main result of the cited paper is the quantum analog of the present theorem [ satz : sim : feedback : channel ]. the optimality result there has its classical case formulated in theorems [ satz : lower ] ( and [ satz : strong : lower ] ) and [ satz : feedback : channel : opt ], and even the construction of the following section has its counterpart there. the use of common randomness turned out to be remarkably powerful, and it is known on various occasions to make problems more tractable: a major example is the arbitrarily varying channel ( see for example the review ). while for discrete memoryless channels it does not lead to improved rates or error bounds, it does allow there for a "reverse" of shannon's coding theorem, in the sense of simulating efficiently a noisy channel by a noiseless one. this viewpoint seems to extend to quantum channels as well, assisted by entanglement rather than common randomness: see . we shall expand on the power of the "randomness assisted" viewpoint in section [ sec : applications ]. here we show that from the theorem of the previous section a solution to the compression problem under a slightly relaxed distance criterion follows: whereas previously we had to employ common randomness to achieve the lower bound, this will turn out to be unnecessary now. specifically, our condition will be eq. ( [ eq : l : condition ] ): [ satz : coding : local ] there exists an code with such that . choose an as in theorem [ satz : sim : feedback : channel ]. obviously this code meets the condition of the theorem, except for the use of common randomness. we will show that a uniformly random choice among a _small_ ( subexponential ) number of is sufficient for this to hold. then the protocol simply is:
1. the sender chooses uniformly at random ( among the chosen few ), and sends it to the receiver ( at asymptotic rate ).
2. she uses to encode, and the receiver uses to decode.
by construction this meets the requirements of the theorem. to prove our claim, note that from theorem [ satz : sim : feedback : channel ] we can infer . introduce i.i.d. random variables , distributed according to . with the notations and we have . denote the minimal nonzero entry of by , and choose so small that for all typical and all . by lemma [ lemma : large : deviation ] we obtain a bound of the form $$\Pr\left\{\,\cdots\ \text{on } \operatorname{supp} w_{x_k}\right\} \leq 2|\mathcal{Y}|\exp\left(-q\frac{\epsilon^2 u}{4\ln 2}\right).$$ hence the sum of these probabilities is upper bounded by , which is less than for . hence there exist actual values such that , which is what we wanted to prove: observe that grows only polynomially. as we remarked already in the introduction, proposed to prove this result ( and indeed more, being interested in the tradeoff between rate and error ), but eventually turned to the much softer condition ( [ eq : ks : condition ] ), which originates from the traditional model of rate distortion theory. nice though the idea of the previous section is, the lower bound results show that on this road we cannot hope to approach the conjectured bound , because without common randomness at hand we have to spend communication at the same rate to establish it ( compare , appendix, for this rather obvious-looking fact ).
in this section we want to study the _perfect_ restitution of the probability distributions ( i.e. ): recall that here we want to minimize , and this minimum we call . obviously , so the limit exists, and is equal to the infimum of the sequence. then we have [ satz : old : new ] for all . it is sufficient to prove the inequality for in place of : fix a with . then, for choose any code for , which is possible at rate . then with and is an for the mixed state source with limiting rate . it would be nice if we could also prove an inequality in the other direction, but it seems that a direct reduction like in the previous proof does not exist: for this we would need to take an and convert it to an , increasing the entropy only slightly. [ figure [ fig : network ] : note that we included a sink, edges leading to the sink obviously having probability . ] a nice picture to think about the problem of finding is the following, in the spirit of flow networks: from the source we go to one of the nodes , with probability . then, with a probability of we go to , and from there with a probability of to . then the condition is that . examples of this construction are discussed in ( where it was in fact invented ), and here we want to add some general remarks on optimizing it, as well as thoughts on a possible algorithm to do that. we begin with a general observation on the number of intermediate nodes: [ satz : cd ] an optimal zero-error code for requires at most intermediate nodes, with , . for a fixed set the problem is the following: under the constraints , minimize the entropy , where . observe that for each fixed set of the constraints define a convex admissible region for the , of which a _concave_ function is to be minimized. hence, the minimum will be achieved at an extreme point of the region, which we rewrite as follows: an extreme point must be extremal in every one of the summand convex bodies. on the other hand, an extreme point of must meet many of the inequalities ( ) with equality. since , there remain only at most nonzero for every . in particular, only at most many are accessed at all. in fact, to minimize , at most , otherwise would contain full information about . [ rem : cd : more ] the last argument can be improved: for we can even assume . the argument of the proof gives us the idea that maybe by an alternating minimization we can find the optimal code: indeed, conditions ( [ eq : cd : e ] ) and ( [ eq : cd : ed ] ) for fixed are linear in , and the target function is concave ( entropy of a linear function of ), so we can find its minimum at an extreme point of the admissible region. this part is solved by standard convex optimization methods. on the other hand, for fixed , eqs. ( [ eq : cd : d ] ) and ( [ eq : cd : ed ] ) are linear in . however, this variation does not change the target function. still we have freedom to choose , and this might be a good rule: let _maximize_ the conditional entropy . the rationale is that this entropy signifies the ignorance of the sender about the actual output. if it does not approach in the limit, this means that the protocol simulates partial feedback of the channel, which could be used to extract common randomness. this amount is a lower bound to what the protocol has to communicate in excess of . we have, however, no proof that this rule converges to an optimum. in this section we point out three important connections to other questions, some of which depend on positive answers to the questions [ quest : main ] and [ quest : main].
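( before turning to these applications, a minimal sketch of the flow-network picture from the preceding section; this is our own illustration with a made-up toy instance. it only verifies that a candidate encoder/decoder pair of stochastic matrices reproduces a target channel exactly and reports the entropy of the intermediate-node distribution, i.e. the quantity to be minimized; a real optimizer would, as argued above, search the extreme points of the admissible polytope. )

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def check_zero_error_code(P, E, D, W, tol=1e-12):
    """P: source distribution over x; E[x, k]: encoder (x -> intermediate node);
    D[k, y]: decoder; W[x, y]: target channel. Returns (is_exact, entropy of the
    distribution induced on the intermediate nodes)."""
    assert np.allclose(E.sum(axis=1), 1) and np.allclose(D.sum(axis=1), 1)
    reproduced = E @ D                  # composed channel x -> y
    q = P @ E                           # distribution on intermediate nodes
    return bool(np.max(np.abs(reproduced - W)) < tol), entropy(q)

# toy instance: two source letters and a symmetric target channel
P = np.array([0.5, 0.5])
W = np.array([[0.75, 0.25],
              [0.25, 0.75]])
# trivial code: one intermediate node per source letter (rate = H(P) here)
E = np.eye(2)
D = W.copy()
print(check_zero_error_code(P, E, D, W))   # (True, 1.0)
```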
it is known that if two parties ( say, alice and bob ) have access to many independent copies of the pair of random variables ( which are supposed to be correlated ), then they can, by public discussion ( which is overheard by an eavesdropper ), create common randomness at rate , almost independent of the eavesdropper's information. for details see , where this is proved, along with the optimality of the rate. one might turn the question around and ask how much common randomness is required to create the pair approximately. this question, in the vein of that of the previous subsection, is really about the reversibility of transformations between different appearances of correlation. note that this was confirmed in for the case of deterministic correlation between and , i.e. , which was there paralleled to entanglement concentration and dilution for pure states. an affirmative answer to question [ quest : main ] surprisingly implies that a rate of of common randomness is sufficient, _with no further public discussion_, to create pairs . this is done by first creating the distribution of on from the common randomness ( this alice and bob do each on their own! ): this may not be altogether obvious, as the common randomness is assumed in pure form ( i.e. a uniform distribution on alternatives ), while the distribution may have no regularity. to overcome this difficulty, fix an and let . now we partition the unit interval into the subintervals , and define , . notice that for the probabilities for 's belonging to the same set differ from each other only by a factor between and , and that , because of , by definition of . hence, defining uniform distributions on for , it is immediate that . now the distribution on the in this formula can be approximated to within by a distribution , which in turn can be obtained directly from a uniform distribution on alternatives. in this way we reduced everything to a number of uniform distributions, maybe on differently sized sets, all bounded by , and a helper uniform distribution on a set of size . however, it is well known that these can be obtained from a uniform distribution on items within arbitrarily small error.
given this distribution on , bob applies , whereas alice applies the transpose channel to . one readily checks that this produces the joint distribution of , up to arbitrarily small disturbance in the total variational norm. note that this result would imply a new proof of the optimality of the rate of common randomness distillation from : because we can simulate the latter pair of random variables with this rate of common randomness, we would obtain a net increase of common randomness after application of the distillation, which clearly cannot be. it was already pointed out that this study has the paper as one motivation, with its idea to prove the optimality of shannon's coding theorem by showing that every noisy channel can be simulated by a binary noiseless one operating at rate . shannon's theorem is understood as saying that the noisy channel can simulate a binary noiseless one of rate . both simulations are allowed to be performed with small error. note that an affirmation of question [ quest : main] implies that this can be done without the common randomness consumption required in section [ sec : cr : trick ]. as indicated, this provides a proof of the converse to shannon's coding theorem: the idea is that otherwise we could, given a rate of noiseless bits, simulate the channel, which in turn could be used to transmit at a rate . the combination of simulation and coding yields a coding method for transmitting bits over a channel providing noiseless bits, which is absurd ( in , this reasoning is called a "causality argument" ). theorem [ satz : sim : feedback : channel ] allows us to prove even more: [ satz : feedback : capacity ] for the channel with noiseless feedback ( i.e. after each symbol transmitted, the sender gets a copy of the symbol read by the receiver, and may react in her encoding ) the capacity is given by . in fact, for the maximum size of an code , . let an optimal code for the channel with noiseless feedback be given. we will construct an with shared randomness, as follows: choose a simulation of the channel on , sending bits and using shared randomness, and with error bounded by ( this is possible by the construction of theorem [ satz : sim : feedback : channel ], see remark [ rem : chernoff : better ] ). we shall use independent copies of the feedback code in parallel: in each round, input symbols are prepared, sent through the channel, yielding respective feedback symbols. obviously, each round can be simulated with an error in the output distribution bounded by , using our simulation of the channel w ( which, as we remarked earlier, simulates even the feedback ).
in each of the parallel executions, the feedback code thus accumulates an error of at most , increasing the error probability of the code to . hence on the block of all the feedback codes we can bound the error probability by . but this is subexponentially ( in ) close to , so a standard argument applies: first, by considering average error probability we can get rid of the shared randomness: there exists one value of the shared random variable for which the average error probability is bounded by . then we can argue that there is a subset of the constructed code's message set which has _maximal_ error probability bounded by , and what we have achieved so far is hence this: a code of messages with error probability and using noiseless bits. clearly, we may assume the encoder to be deterministic without losing in error probability. but then at most messages can be mapped to the same codeword without violating the error condition. collecting everything we conclude , implying the theorem. [ rem : feedback ] the _weak converse_ ( i.e. the statement that the rate for codes with error probability approaching is bounded by ) is much easier to obtain, by simply keeping track of the mutual information between the message and the channel output through the course of operating a feedback code, using some well-known information identities, and finally estimating the code rate employing fano's inequality. let be any _distortion measure_, i.e. a non-negative real function . this function is extended to words by letting . shannon's rate distortion theorem is about the following problem: construct an code ( which may be chosen to be deterministic ) such that for a given , i.e., the average distortion between source and output word is bounded by . a pair of non-negative real numbers is said to be _achievable_ if there exist codes with code rate tending to and distortion rate asymptotically bounded by . define the _rate distortion function_ as the minimum such that is achievable. [ satz : shannon : r : d ] the rate distortion function is given by the following formula: where is the expected ( single letter ) distortion when using the channel . the proof of " " here is a simple exercise using convexity of mutual information in the channel and standard entropy inequalities. we can give a simple proof of the " " part of this result, using theorem [ satz : feedback : channel : opt ]: choose some channel satisfying the distortion constraint . then mapping to obviously satisfies the distortion constraint on the code in the sense that the expected distortion between input and output, over source and channel, is bounded by . of course, sampling at the encoder and sending some will not meet the bound . however, we can apply theorem [ satz : feedback : channel : opt ] to approximately simulate the joint distribution of and by using some common randomness and a deterministic code sending bits. hence, invoking linearity of the definition of , there must be one such that , which ends our proof.
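for readers who want to see the rate distortion function as a number rather than a formula, the standard way to evaluate it is the blahut-arimoto iteration ( classical material, not a construction from this paper ). the sketch below, with an arbitrarily chosen lagrange parameter beta, traces out one point of the curve for a biased binary source under hamming distortion, where the exact answer is known to be r(d) = h(p) - h(d).

```python
import numpy as np

def blahut_arimoto_rd(P, d, beta, n_iter=500):
    """Rate-distortion point via Blahut-Arimoto: P is the source distribution
    (length m), d is the m x k distortion matrix, beta > 0 the Lagrange
    parameter trading rate against distortion. Returns (rate in bits, distortion)."""
    m, k = d.shape
    q = np.full(k, 1.0 / k)                    # output marginal, start uniform
    for _ in range(n_iter):
        W = q * np.exp(-beta * d)              # unnormalized test channel W(y|x)
        W /= W.sum(axis=1, keepdims=True)
        q = P @ W                              # update output marginal
    D = float(np.sum(P[:, None] * W * d))
    R = float(np.sum(P[:, None] * W * np.log2(W / q)))
    return R, D

# binary source with bias 0.3 and hamming distortion
P = np.array([0.7, 0.3])
d = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(blahut_arimoto_rd(P, d, beta=3.0))   # should lie on R(D) = h(0.3) - h(D)
```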
at this point we would like to advertise our point of view that theorem [ satz : coding : local ], and even more so theorem [ satz : sim : feedback : channel ], is what rate distortion is actually about: the former theorem shows how to simulate a given channel on all individual positions of a transmission, and this is what we need in rate distortion. in fact, rate distortion theory is unchanged when instead of the one convex condition ( "distortion bound" ) on the code we have several, effectively restricting the admissible approximate joint types of input and output to any prescribed convex set, in particular a single point. the strength of theorem [ satz : coding : local ] in comparison to such a development of rate distortion theory lies in the fact that with its help we satisfy the convex conditions in _every letter_, not just in the block average. and theorem [ satz : sim : feedback : channel ] gives the analogue of this even with the condition imposed on the whole block, yielding results that are not obtainable by simply applying rate distortion tools ( see e.g. ). the problem studied in this paper has a natural extension to quantum information theory: now the source emits ( generally mixed ) quantum states on the hilbert space ( ), with probabilities , and an is a pair of maps , where is the set of states on the code hilbert space and is completely positive, trace preserving, and linear. the condition to satisfy is , with the trace norm on density operators. define, like before, as the minimum of an . sometimes the stronger condition will be applied. notice that this contains our original problem as the special case of a _quasiclassical_ ensemble, when all the commute ( which means they can be interpreted as probability distributions on a set of common eigenstates ). this problem ( with a number of variations, which we explained in the introductory section [ sec : pdsources ] for the classical case ) is studied in . there ( and previously in ) it is shown that the lower bound theorem [ satz : lower ] holds in the quantum case, too, with understood as von neumann entropy: [ satz : mixed : lower ] for all , with a function for . let us improve this slightly by proving the strong version of this result: [ satz : mixed : strong : lower ] for all . by much the same method as the proof of theorem [ satz : strong : lower ]: the changes are that we need the more general code selection result of , thm. ii.4, instead of the classical theorem, which we state separately below: if is an optimal , define . obviously , so we can apply lemma [ lemma : max : code ] and find an code for such that . this is an for the channel , with , if we choose small enough. combining with the transmission encoder, and with the transmission decoder, we obtain an code for many messages over a noiseless system with hilbert space of dimension . to each message there belongs a decoding operator on the coding space, forming together a povm: .
now to decode correctly with probability , for each we must have ; on the other hand, by , we conclude , and we are done. [ lemma : max : code ] for there is a constant and such that for every discrete memoryless quantum channel and distributions on the following holds: if is such that , then there exists an code with the properties . see , thm. . progress on the problem of achievability of this bound is not known to us. it is remarkable that koashi and imoto could obtain the exact optimal bound in the case of _blind_ coding. it is indirectly defined via a canonical joint decomposition of the source states, but it can be derived from their result that generically the optimum rate is , which is achieved by simply schumacher encoding the ensemble. nevertheless, the results obtained in the classical case are very encouraging, so we state two conjectures: [ conj : qmain : cr ] for there exist with common randomness, asymptotically achieving transmission rate and common randomness consumption . if this turns out to be true, and also question [ quest : main ] has a positive answer, we might even hope that also [ quest : qmain ] for , is [ note that, as in the case of question [ quest : main ], codes achieving the optimal bound may also be constructed to satisfy eq. ( [ eq : qu : condition : strong ] ). ] answers "yes". the implications of these statements, if they are true, would be of great significance to quantum information theory: not only would we get a new proof of the capacity of a classical-quantum channel being bounded by the maximum of the holevo information, and of the optimality of common randomness extraction from a class of bipartite quantum sources, but also the achievability of in the quantum rate distortion problem with visible coding would follow, which until now has escaped all attempts. we demonstrated the current state of knowledge on the problem of visible compression of sources of probability distributions and its extension to mixed state sources in quantum information theory. apart from reviewing the currently known constructions, we contributed a better understanding of the resources involved: in particular the use of common randomness in some of them, and by providing strong converses. we also showed the numerous applications the result ( and sometimes the conjectures ) have throughout information theory, making the matter an eminent unifying building block within the theory. we would like to draw the attention of the reader once more to our questions [ quest : main ] and [ quest : qmain ], and especially the conjecture [ conj : qmain : cr ], offering them as a challenge to continue this work. research partially supported by sfb 343 "diskrete strukturen in der mathematik" of the deutsche forschungsgemeinschaft, by the fakultät für mathematik, universität bielefeld, by the university of bristol, and by the u.k. engineering and physical sciences research council. c. h. bennett, p. w. shor, j. a. smolin, and a. v. thapliyal, "entanglement-assisted classical capacity of noisy quantum channels", phys. rev. lett., vol. 83, pp. 3081–3084, 1999. by the same authors: "entanglement-assisted capacity of a quantum channel and the reverse shannon theorem", e-print quant-ph/0106052, 2001. g. kramer and s. a. savari, "quantum data compression of ensembles of mixed states with commuting density operators", e-print quant-ph/0101119, presented at the ams meeting ( hoboken, nj ), april 28–29, 2001.
we study the problem of efficient compression of a stochastic source of probability distributions . it can be viewed as a generalization of shannon s source coding problem . it has relation to the theory of common randomness , as well as to channel coding and rate distortion theory : in the first two subjects `` inverses '' to established coding theorems can be derived , yielding a new approach to proving converse theorems , in the third we find a new proof of shannon s rate distortion theorem . after reviewing the known lower bound for the optimal compression rate , we present a number of approaches to achieve it by code constructions . our main results are : a better understanding of the known lower bounds on the compression rate by means of a strong version of this statement , a review of a construction achieving the lower bound by using common randomness which we complement by showing the optimal use of the latter within a class of protocols . then we review another approach , not dependent on common randomness , to minimizing the compression rate , providing some insight into its combinatorial structure , and suggesting an algorithm to optimize it . the second part of the paper is concerned with the generalization of the problem to quantum information theory : the compression of mixed quantum states . here , after reviewing the known lower bound we contribute a strong version of it , and discuss the relation of the problem to other issues in quantum information theory .
"we live in a global world" has become a cliché. historically, the exchange of goods, money, and information was naturally limited to nearby locations, since globalization was effectively blocked by spatial, territorial, and cultural barriers. today, new technology is overcoming these barriers and exchange can take place in an increasingly international arena. nevertheless, geographical proximity still seems to be important for the trade of goods as well as for mobile phone communication and scientific collaboration. however, since the internet allows information to travel more easily and rapidly than goods, it remains unclear what the effective barriers of global information exchange are. as information exchange requires shared interests, we therefore need to better understand global connections in interest, and the factors that form these connections. although the globalization of information has been discussed extensively in the research literature, there is currently no method to quantitatively map bilateral information interests from large-scale data. without such a method, it becomes difficult to justify qualitative statements about, for example, the complex interplay between shared values and conflict on a global scale. we use data mining and statistical analysis to devise a measure of bilateral information interests, and use this measure to construct a world map of information interests. to study interests on a global scale, we use the free online encyclopedia wikipedia, which has evolved into one of the largest collaborative repositories of information in the history of mankind. the free online encyclopedia consists of almost 300 language editions, with english being the largest one. this multilingual encyclopedia captures a wide spectrum of information in millions of articles. these articles undergo a peer-reviewed editing process without a central editing authority. instead, articles are written, reviewed, and edited by the public. each article edit is recorded, along with a timestamp and, if the editor is unregistered, the computer's ip address. the ip address makes it possible to connect each edit to a specific location. therefore we can use wikipedia editors as sensors for mapping information interest to specific countries. in this paper, we use co-editing of the same wikipedia article as a proxy for shared information interests. to find global connections, we look at how often editors from different countries co-edit the same articles. to infer connections of shared interest between countries, we develop a statistical model and represent significant correlations between countries as links in a global network. structural analysis of the network suggests that interests are polarized by factors related to geographical proximity, language, religion, and historical background. we quantify the effects of these factors using regression analysis and find that information exchange is indeed constrained by the impact of social and economic factors connected to shared interests. as one of the largest and most linguistically diverse repositories of human knowledge, wikipedia has become the world's main platform for archiving factual information.
one important feature of wikipedia is that every edit made to an article is recorded .thanks to this detailed data , wikipedia provides a unique platform for studying different aspects of information processes , for example , semantic relatedness of topics , collaboration , social roles of editors , and the geographical locations of wikipedia editors . in this work , we used data from wikipedia dumps to select a random sample from the english wikipedia edition , which is the largest and most widespread language edition . in total , the english edition has around 10 million articles , including redirects and duplicates .since retrieving the editing histories of all articles is computationally demanding , we randomly sampled more than six million articles from this set . for each english article, we retrieved the complete editing history of the same article in all language editions that the english wikipedia page links to .finally we merged all language editions together to create a global editing history for each article . for each edit ,the editing history includes the text of the edit , its time - stamp , and , for unregistered editors , the ip address of the editor s computer . from the ip address associated with the edit, we retrieved the geolocation of the corresponding editor using an ip database . for the purpose of spatial analysis, we limited the analysis to edits from unregistered editors , because data on the location for most of the registered wikipedia editors are unavailable .the resulting dataset contains more than six million ( 6,285,753 ) wikipedia articles and about 140 million edits in total .we use these edits to create interest profiles for countries .we identify the interest profile of a country by aggregating the edits of all wikipedia editors whose ips are recorded in the country .if an article is co - edited by editors located in different countries , we say that the countries share a common interest in the information of the article .in other words , we connect countries if their editors co - edit the same articles . indirectly , we let individuals who edit wikipedia represent the population of their country . while wikipedia editors in a country certainly do not represent a statistically unbiased sample , there is a higher tendency that they edit contents that are related to the country in which they live .therefore , we approximate the interest profile of a country with collective editing behavior of editors in that country .inferring the location of all editors on the country level is non - trivial .although we have data on all edits , we do not know the location of registered editors because their ips are not recorded .one proposed approach to tackle this problem makes use of circadian rhythms of editing activity to infer the location of the editors .this method approximates the longitude of a location but provides little information about its latitude .therefore , we must limit the analysis to the activity of unregistered editors with recorded ip addresses .this will arguably affect the results .not only do registered editors contribute to 70% of all 140 million edits , they also have somewhat different behavior . for example , many of the most active registered users take on administrative functions , develop career paths , or specialize in covering selected topics . 
on the other hand , some unregistered editors are involved in vandalism , but often their activity nevertheless indicates their interest .while we can only speculate about how including registered editors would affect the results , unregistered editors can nevertheless provide useful information about shared interests between countries . from the co - editing data , we create a network that represents countries as nodes and shared interests as links .the naive approach is to use the raw counts of co - edits between countries as weighted links .the problem with this approach is that it is biased toward the number of editors in each country .some countries may be strongly connected , not because of evident shared interests but merely as a result of a large community of active wikipedia editors . to address this problem, we propose a statistical validation method that filters out connections that could exist only due to size effects or noise .the filtering method assumes a multinomial distribution and determines the expected number of co - occurring edits from the empirical data . in other words , we infer significant links in a bipartite system in which countries are in one set and articles are in the other set .there are other existing methods to evaluate the significant correlation between entities in bipartite systems .for example , proposed a systematic approach to one - mode projections of bipartite graphs for different motifs . in another work, used the hypergeometric distribution and measured the -value for each subset of the bipartite network .moreover , proposed a community detection method to classify topics to articles more efficiently , and used a disparity filtering method to infer significant weights in networks .finally , adopted a statistical approach to determine significant links between languages in various written documents .however , the model that we use has the advantages that it makes it easy to account for size variation inside an article and to compute the -scores for analyzing the country - based editor activity .in this section we outline the formalisation of the model .we link countries based on their co - occurring edits over all wikipedia articles . for a specific article , we calculate the link weight between all pairs of countries that edited the article , as follows : if editors in country have edited an article times , and editors in country have edited the same article times , then the countries empirical link weight , , is calculated as : since the total number of articles is over six million , most country pairs have co - edited at least one article . therefore , the aggregation of all articles results in numerous links between countries , and the countries with relatively large editing activities become highly central . accordingly , we can not know if the link exists by chance , or because countries actually tend to edit the same articles more frequently than expected .to determine which links are statistically significant , we compare the empirically observed link weights with the weight given by a null model . 
in the null model , we assume that each edit comes from a country randomly picked proportionally to its total number of edits .more specifically , the random assignments are performed by drawing the countries from a multinomial distribution .that is , for each edit , country is selected proportional to its cumulative editing activity , , where is the total number of edits for all articles .note that each edit is sampled independently from all other edits , and that the cumulative edit activity of a country in the null model on average will be the same as the observed one .this null model preserves the average level of activity of the countries , but randomizes the temporal order and the articles that countries edit .figure [ fig : model ] shows an example of this reshuffling scheme with four articles . from the null model, we can analytically compute the expected probability , , that two countries and edit the same article ( detailed derivation in the methods ) : where is the total number of edits in article . to compare the empirical and expected link values , we compute standardized values , so called -scores . for countries and and article , the -score is defined as where the standard deviation , is computed in the method section .the -scores are useful for comparisons of weights , since they account for the large variations that exist in the articles edit histories .we then sum over all articles to find the cumulative -score for countries and using the bonferroni correction , we consider a link to be significant if the probability of observing the total -score is less than , where is the number of countries . since the total -score is a sum over many independent variables , we can approximate the expected total -score distribution with a normal distribution .the normal distribution has average value and standard deviation , where is the number of wikipedia articles .thus , the threshold for the significant link weight is , where is derived from the condition that , where countries and is the standard gaussian distribution ( with zero average and unit variance ) .if the total -score is larger than the threshold , we create a link between countries and with weight according to in summary , the interest model maintains the average level of activity of the countries and randomizes the articles that they edit . by comparing results from the interests model and empirical values , we can identify significant links between countries .to investigate the effective barriers of global information exchange , we first identify large - scale structures among the thousands of links between countries . in this way, we can highlight the groups of countries that share interest in the same information . to reveal such groups among the pairwise connections , we use a network community detection method based on random walks as a proxy for interest flows . while the community - detection method we use is good at breaking chains of links, we may connect some countries primarily based on strong connections with common countries and not between themselves . nevertheless , simplifying and highlighting important structures provide a valuable map to investigate the large - scale structure of global information exchange . in our clustering approach ,we first build a network of countries connected with the significant links we find in our filtering . to identify groups of countries , we envision an editor game in which editors from different countries are active in sequence . 
in this relay race, a country passes information to another country in proportion to the weight of the link between the countries. accordingly, the sequence of countries forms a random walk, and certain sets of countries with strong internal connections will be visited for a relatively long time. this process is analogous to the community-detection method known as the map equation. here we use the map equation's associated search algorithm infomap to identify the groups of countries we are looking for and to reveal the large-scale structure. we will discuss the results at four levels of detail, from the big picture to the detailed dynamics, and highlight different potential mechanisms for barriers of information exchange. first, we will show a global map of countries with shared information interests, and continue with the interconnections between the clusters. then we will consider each cluster separately and examine the interconnections between countries within the clusters. finally, we will apply multiple regression analysis to examine explanatory variables for the highways and barriers of information exchange. the world map of information interests suggests that cultural and geopolitical features can explain the division of countries. between the 234 countries, we identified 2,847 significant links that together form a network of article co-edits. by clustering the network, we identified 18 clusters of strongly connected countries ( see supplementary table for a detailed list of countries in each cluster ). the resulting network is illustrated as a map in fig. [ fig : full_worldmap_m1 ], where countries of the same cluster share the same color. the map suggests that the division of countries can be explained by a combination of cultural and geopolitical features. for example, the united states and canada share a long geographical border and extensive mutual trade, and are clustered together despite the fact that other english-speaking countries are not. moreover, religion is a plausible driver for the formation of the cluster of countries in the middle east and north africa, as well as the cluster of russia and the orthodox eastern-european countries. another factor in the formation of shared information interests is language. for example, countries in central and south america are divided into two clusters, with portuguese and spanish as the common language in each cluster, respectively. colonial history can also shape similarity in interests, as in the cluster of portugal, angola, and brazil, as well as the cluster of former soviet union countries. overall, there is strong empirical evidence that geographical proximity, common religion, shared language, and colonial history can explain the division of countries. to examine the connections between clusters, we looked at the network structure at the cluster level. the network in fig. [ fig : worldfrance ] shows the connections between the clusters of countries illustrated in fig. [ fig : full_worldmap_m1 ] with the same color coding. connections tend to be stronger between clusters of geographically proximate countries also at this level. interestingly, the middle east cluster in turquoise has the largest link strengths to other clusters, forming a hub that connects east and west, north and south. interpreting the strong connections as potential highways for information exchange, the middle east is not only a melting pot of ideas, but also plays an important role in the spread of information.
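the clustering step described above can be reproduced along the following lines with the infomap package ( http://www.mapequation.org ). this is a sketch under the assumption that the package's python interface ( v1.x ) exposes add_link, run, and get_modules as used here; the link weights and country names are made up for illustration.

```python
import infomap  # python bindings of the map equation's search algorithm

def cluster_countries(links):
    """links: dict {(country_a, country_b): weight} of significant links.
    Returns {country: module_id} from a two-level map equation partition."""
    countries = sorted({c for pair in links for c in pair})
    index = {c: i for i, c in enumerate(countries)}
    im = infomap.Infomap("--two-level --silent")
    for (a, b), w in links.items():
        im.add_link(index[a], index[b], w)       # weighted, undirected by default
    im.run()
    modules = im.get_modules()                   # node id -> module id
    return {c: modules[index[c]] for c in countries}

toy_links = {("sweden", "norway"): 12.0, ("norway", "denmark"): 9.0,
             ("germany", "austria"): 15.0, ("austria", "switzerland"): 11.0}
print(cluster_countries(toy_links))              # two clusters expected
```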
to get better insights into how the clusters are shaped, we zoomed into the country networks within clusters. in the upper left corner of fig. [ fig : worldfrance ], we show the strongest connections within the central european cluster. it suggests that countries are linked based on the overlap of their official languages. for example, belgium has three official languages: dutch, french, and german. indeed, belgium is closely connected with the netherlands, france, and luxembourg. we observed the same pattern in other clusters, and the triad of switzerland, germany, and austria is another example of strongly linked countries with a shared language. just to illustrate what interests can form the bilateral connections, we looked at a number of concrete examples. first, we ranked the articles according to their significant -scores for each pair of countries. then we looked at the top-ranked articles; here we report the results for two european country pairs: germany–austria in the european cluster and sweden–norway in the scandinavian cluster. the articles with the most significant co-edits relate to local and regional interests, including sports, media, music, and places ( see supplementary table ). for example, the top-ranked articles in the germany–austria list include an austrian singer who is also popular in germany, and an austrian football player who is playing in the german league. the top-ranked articles in the sweden–norway list show a similar pattern of locally related topics, for example, a host of a popular tv show simultaneously aired in sweden and norway, a swedish football manager who has been successful in both sweden and norway, and a music genre that is nearly exclusive to the scandinavian countries. altogether, the top articles suggest that an important factor for co-editing is related interests, which in turn may be an effect of shared language, religion, or colonial history, as well as geographical proximity or a large volume of trade between countries. to quantify the impact of social and economic factors behind the shared interests, we performed a multiple regression quadratic assignment procedure ( mrqap ) analysis. this method is specifically suited to situations in which there are collinearity and autocorrelation in the data. we performed the mrqap using the `netlm` function in the `sna` r package. the dependent variables in the regression model were the significant -scores that we obtained from the data. our independent variables were geographical proximity, trade, colonial ties, shared language, and shared religion, as suggested by the analysis of the map of information interests ( see the supplementary information s2 for a more detailed description of the data ).

table [ table : regression ] :

                        model 1       model 2       model 3        model 4        model 5
  intercept             0.41          0.3           2.33           2.33           2.28
  shared language       0.91 (69)     0.82 (64)     0.77 (60)      0.75 (58)      0.74 (57)
  shared religion                     2.76 (46)     2.6 (44)       2.6 (43)       2.44 (40)
  log distance                                      -0.23 (-23)    -0.23 (-23)    -0.23 (-23)
  colonial tie                                                     4.5 (22)       4.35 (21)
  log trade                                                                       0.03 (10)
  adjusted r-squared    0.13          0.19          0.20           0.21           0.22
  f-statistic           7,774         3,590         2,610          2,110          1,716
  df                    30,874        30,873        30,872         30,871         30,870
in model 1 , we examined the influence of shared language , which explains 13% of the variation in our observations . in model 2 , we added shared religion , which increases the explanatory power of the model to 19% . in model 3 , we included geographical proximity . it slightly increases the r - squared and has a negative relation to the observed z - scores , since short distance corresponds to high proximity . in models 4 and 5 , respectively , we added colonial ties and trade . including all these explanatory variables in the regression model increases the explanatory power to 22% . the correlation of each variable with the observed z - scores can be inferred from the associated test statistic . shared language shows the strongest association , followed by shared religion , geographical proximity , colonial ties , and volume of trade ( see table [ table : regression ] ) . the influence of language on shared interests is not surprising . it is well known that interests are formed by cultural expression and public opinion , and language is an important platform for these expressions . the importance of the relation between language and interests has also been demonstrated by the surprisingly small overlap between languages in wikipedia and by the variation in the editing of controversial topics . moreover , the influence of religion is in line with huntington's thesis that the source of division between people in the post - cold war period is primarily rooted in cultural differences and religion . similar results were found in other studies that analyzed twitter and email communication worldwide . overall , the analysis reveals that information exchange is constrained by the impact of social and economic factors connected to shared interests . in other words , globalization of technology does not bring globalization of information and interests . language , religion , geographical proximity , historic background , and trade are potential driving factors that polarize information interests . these results coincide with earlier works that highlight the impact of colonization , immigration , economics , and politics on cultural similarities and diversities . by simplifying and highlighting the important structure in the myriad edits of wikipedia , we provide a world map of shared information interests . we find that , despite globalization , information interests are diverse , and that the highways and barriers of information exchange are formed by social and economic factors connected to shared interests . in descending order , we find that language , religion , geographical proximity , historic background , and trade explain the diversity of interests . while technological advances have in principle made it possible to communicate with anyone in the world , these social and economic factors limit us from doing so and information interests remain diverse . questions remain as to how different social and economic factors affect different regions , how they relate to conflicts on a global scale , and how the impact of these factors changes over time . it would therefore be interesting to extend the methodology to track changes over time . to find connections in interest , we measure the co - occurring edits of two countries in the same articles . we quantify the connection with an empirical weight that is computed as the product of the countries' edit activities in the article .
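As a concrete illustration of the weight and significance computation derived in closed form below, this sketch computes, for each article, the co-edit weight of a country pair as the product of the two countries' edit counts, and compares the weight summed over articles with a multinomial null model in which each edit is assigned to a country with probability proportional to its total edit activity. Monte Carlo sampling stands in for the closed-form mean and variance, the aggregation over articles is a simplification of the paper's per-article z-score sum, and the edit counts are invented.

```python
# Sketch of the co-editing z-score idea: empirical pair weight vs. a
# multinomial null model. Monte Carlo sampling stands in for the closed-form
# moments derived in the methods section; edit counts are placeholders.
import numpy as np

rng = np.random.default_rng(1)
countries = ["SE", "NO", "DE", "AT"]

# edits[a, i] = number of edits by country i on article a (placeholder data).
edits = np.array([
    [5, 4, 0, 0],
    [3, 6, 1, 0],
    [0, 0, 7, 5],
    [1, 0, 4, 6],
    [2, 1, 1, 1],
])

totals = edits.sum(axis=0)          # total edit activity per country
p = totals / totals.sum()           # null probability of a random edit
article_sizes = edits.sum(axis=1)   # total edits per article

def pair_weight(counts, i, j):
    """Empirical weight: product of the two countries' edit activities."""
    return counts[:, i] * counts[:, j]

def z_score(i, j, n_samples=20000):
    observed = pair_weight(edits, i, j).sum()
    # Sample the null: redistribute each article's edits multinomially.
    samples = np.zeros(n_samples)
    for a, n_a in enumerate(article_sizes):
        draws = rng.multinomial(n_a, p, size=n_samples)
        samples += draws[:, i] * draws[:, j]
    return (observed - samples.mean()) / samples.std()

for i in range(len(countries)):
    for j in range(i + 1, len(countries)):
        print(countries[i], countries[j], round(z_score(i, j), 2))
```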
for a wikipedia article ,if the total edit activity of country is denoted by , and for country is , then we calculate the empirical weight , , according to as the total edit activity of countries differs , the probability that countries appear together in a certain article varies .if the total number of edits for all articles is , then the expected proportion of edits for country is this is the probability of country making a random edit overall , and this is the null model we use to filter noisy connections in the interest model .assume that there is a total of countries , and a total of edits for article .let denote the number of edits from country .then the probability of any particular combination of edits for the various countries follows a multinomial distribution with the above distribution , we can compute the expected probability of the co - occurrence of two countries and in an article following the multinomial theorem , we can now calculate the mean , variance and covariance matrices for the occurrence of a country pair .the mean value of the co - occurrence of two countries becomes using the multinomial theorem multiple times , one can also compute the variance : thus , the standard deviation of the pair , , is the square root of equation .this equation enters into the definition of the -score in equation .the sum of the -scores is then approximated with a gaussian distribution .this approximation is well justified by the large number of articles . in practice ,already articles give a good approximation , as shown by the numerical simulations in the supplementary information .arazy , o. , ortega , f. , nov , o. , yeo , l. , and balila , a. ( 2015 ) . functional roles and career paths in wikipedia . in _ proceedings of the 18th acm conference on computersupported cooperative work & social computing _ , pages 10921105 .auer , s. and lehmann , j. ( 2007 ) . what have innsbruck and leipzig in common ? extracting semantics from wiki content . in _ the semantic web : research and applications _ , pages 503517 .springer .bleich , e. ( 2005 ) .the legacies of history ?colonization and immigrant integration in britain and france ., 34(2):171195 .butts , c. t. ( 2008 ) .social network analysis with sna ., 24(6):151 .cairncross , f. ( 2001 ) . .harvard business press .callahan , e. s. and herring , s. c. ( 2011 ) .cultural bias in wikipedia content on famous persons ., 62(10):18991915 .castells , m. ( 2011 ) . , volume 2 .john wiley & sons .dekker , d. , krackhardt , d. , and snijders , t. a. ( 2007 ) .sensitivity of mrqap tests to collinearity and autocorrelation conditions ., 72(4):563581 .edler , d. and rosvall , m. ( 2015 ) .the infomap software package .http://www.mapequation.org .fagiolo , g. , reyes , j. , and schiavo , s. ( 2010 ) .the evolution of the world trade web : a weighted - network analysis ., 20(4):479514 .feldman - bianco , b. ( 2001 ) .razilians in portugal , portuguese in brazil : constructions of sameness and difference 1 . , 8(4):607650 .friedman , t. l. ( 2000 ) . .macmillan .gelfand , m. j. , raver , j. l. , nishii , l. , leslie , l. m. , lun , j. , lim , b. c. , duan , l. , almaliach , a. , ang , s. , arnadottir , j. , et al .differences between tight and loose cultures : a 33-nation study . , 332(6033):11001104 .hale , s. a. ( 2014 ) .multilinguals and wikipedia editing . in _ proc . conf . on web science _ , pages 99108 .hecht , b. and gergle , d. 
( 2010a ) .the tower of babel meets web 2.0 : user - generated content and its applications in a multilingual context . in _ proc .sigchi conf . on human factors in computing systems _ ,pages 291300 .hecht , b. j. and gergle , d. ( 2010b ) . on the `` localness '' of user - generated content . in _ proceedings of the 2010 acm conference on computer supported cooperative work _, cscw 10 , pages 229232 , new york , ny , usa . acm .hennemann , s. , rybski , d. , and liefner , i. ( 2012 ) .the myth of global science collaboration collaboration patterns in epistemic communities ., 6(2):217225 .huntington , s. p. et al .the clash of civilizations ?kaluza , p. , klzsch , a. , gastner , m. t. , and blasius , b. ( 2010 ) .the complex network of global cargo ship movements ., 7(48):10931103 .keegan , b. , gergle , d. , and contractor , n. ( 2012 ) .do editors or articles drive collaboration ? : multilevel statistical network analysis of wikipedia coauthorship . in _ proc .acm conf . on computer supported cooperative work _ ,pages 427436 .kimmons , r. m. ( 2011 ) . understanding collaboration in wikipedia ., 16(12 ) .kose , m. a. and ozturk , e. o. ( 2014 ) .a world of change ., page 7 .krackhardt , d. ( 1988 ) .predicting with networks : nonparametric multiple regression analysis of dyadic data ., 10(4):359381 .lambiotte , r. , blondel , v. d. , de kerchove , c. , huens , e. , prieur , c. , smoreda , z. , and van dooren , p. ( 2008 ) .geographical dispersal of mobile communication networks ., 387(21):53175325 .lancichinetti , a. , sirer , m. i. , wang , j. x. , acuna , d. , krding , k. , and amaral , l. a. n. ( 2015 ) .high - reproducibility and high - accuracy method for automated topic classification . , 5:011007 .lieberman , m. d. and lin , j. ( 2009 ) .you are where you edit : locating wikipedia contributors through edit histories . in _icwsm_. mesgari , m. , okoli , c. , mehdi , m. , nielsen , f. . , and lanamki , a. ( 2014 ) .`` the sum of all human knowledge '' : a systematic review of scholarly research on the content of wikipedia .overman , h. g. , redding , s. , and venables , a. ( 2003 ) . .blackwell publishing .pan , r. k. , kaski , k. , and fortunato , s. ( 2012 ) .world citation and collaboration networks : uncovering the role of geography in science .radinsky , k. , agichtein , e. , gabrilovich , e. , and markovitch , s. ( 2011 ) . a word at a time : computing word relatedness using temporal semantic analysis . in _ proc .on world wide web _ , pages 337346 .risse , t. ( 2001 ) . a european identity ?europeanization and the evolution of nation - state identities . in cowles , m. g. , caporaso , j. a. , and risse - kappen , t. , editors ,_ transforming europe : europeanization and domestic change_. cornell university press ithaca , ny . ronen , s. , gonalves , b. , hu , k. z. , vespignani , a. , pinker , s. , and hidalgo , c. a. ( 2014 ) .links that speak : the global language network and its association with global fame . .rosvall , m. , axelsson , d. , and bergstrom , c. t. ( 2009 ) .the map equation ., 178(1):1323 .rosvall , m. and bergstrom , c. t. ( 2008 ) .maps of random walks on complex networks reveal community structure . , 105(4):11181123 .serrano , m. . ,bogu , m. , and vespignani , a. ( 2007 ) .patterns of dominant flows in the world trade web . , 2(2):111124 .serrano , m. . ,bogu , m. , and vespignani , a. ( 2009 ) .extracting the multiscale backbone of complex weighted networks ., 106(16):64836488 . state , b. , park , p. , weber , i. , and macy , m. 
( 2015 ) .the mesh of civilizations in the global network of digital communication ., 10(5):e0122543 .subramanian , a. and wei , s .- j . ( 2007 ) .the wto promotes trade , strongly but unevenly ., 72(1):151175 .tgil , s. ( 1995 ) . .siu press .trk , j. , iiguez , g. , yasseri , t. , san miguel , m. , kaski , k. , and kertsz , j. ( 2013 ) .opinions , conflicts , and consensus : modeling social dynamics in a collaborative environment ., 110(8):088701 .tumminello , m. , miccich , s. , lillo , f. , piilo , j. , and mantegna , r. n. ( 2011 ) . statistically validated networks in bipartite complex systems ., 6(3):e17994 .usunier , j .- c . and lee , j. ( 2005 ) . .pearson education .welser , h. t. , cosley , d. , kossinets , g. , lin , a. , dokshin , f. , gay , g. , and smith , m. ( 2011 ) .finding social roles in wikipedia . in _ proc .iconference _ , pages 122129 .yasseri , t. , spoerri , a. , graham , m. , and kertsz , j. ( 2014 ) .the most controversial topics in wikipedia : a multilingual and geographical analysis . in fichman , p. & hara , n. , editor , _ global wikipedia : international and cross - cultural issues in online collaboration ._ rowman & littlefield .yasseri , t. , sumi , r. , and kertsz , j. ( 2012 ) .circadian patterns of wikipedia editorial activity : a demographic analysis . , 7(1):e30091 .zweig , k. a. and kaufmann , m. ( 2011 ) . a systematic approach to the one - mode projection of bipartite graphs ., 1(3):187218 .we thank alcides v. esquivel , daniel edler , claudia wagner , markus strohmaier and micheal macy for valuable discussions .we also thank the wikimedia foundation for providing free access to the data ., a.l . and m.r .were supported by the swedish research council grant 2012 - 3729 .the authors declare that they have no competing financial interests .correspondence and requests for materials should be addressed to f.k . .the datasets generated during and/or analysed during the current study are available in the `` google sites '' repository , `` https://sites.google.com/site/mappingbilateralwiki/ '' .
* abstract * we live in a global village where electronic communication has eliminated the geographical barriers of information exchange . the road is now open to worldwide convergence of information interests , shared values , and understanding . nevertheless , interests still vary between countries around the world . this raises important questions about what today s world map of information interests actually looks like and what factors cause the barriers of information exchange between countries . to quantitatively construct a world map of information interests , we devise a scalable statistical model that identifies countries with similar information interests and measures the countries bilateral similarities . from the similarities we connect countries in a global network and find that countries can be mapped into 18 clusters with similar information interests . through regression we find that language and religion best explain the strength of the bilateral ties and formation of clusters . our findings provide a quantitative basis for further studies to better understand the complex interplay between shared interests and conflict on a global scale . the methodology can also be extended to track changes over time and capture important trends in global information exchange .
during the last decade , theoretical research on the co - evolution of species in eco - systems and the statistics of extinctions has been strongly influenced by the pioneering interdisciplinary works of per bak and his collaborators . in the same spirit , we address some fundamental questions of evolutionary ecology from the perspective of statistical physics . how did higher species emerge in eco - systems inhabited initially only by primitive forms of life , like bacteria and plankton ? the available record of the history of life , written on stone in the form of fossils , is incomplete and ambiguous . an alternative enterprise seeks to recreate evolution on a computer by simulating theoretical models . in this paper we propose a theoretical model that not only addresses the question raised above but also provides a versatile conceptual tool for studying evolutionary ecology . in particular , it describes both `` macro''-evolutionary processes ( e.g. , the origin , evolution and extinction of species ) and `` micro''-evolution ( e.g. , the age distribution in the population of a species , mortality rates , etc . ) . if watched over a short period of time , the dynamics of the eco - system appears to be dominated by the birth and death of individual organisms as well as by prey - predator interactions . however , over longer periods of time , one would see not only the extinction of some species but also the appearance of new ones . moreover , in many situations , macro - evolutionary changes occur at rates that are comparable to those of the ecological processes . the artificial separation of this process into `` ecological '' time scales and `` geological '' time scales has been made in many earlier theoretical works only for the convenience of modelling . the `` ecological '' models that describe population dynamics in detail using , for example , the lotka - volterra equations usually ignore the slow macro - evolutionary changes in the eco - system ; hardly any effects of these changes would be observable before the simulations run out of computer time . on the other hand , in order to simulate the billion - year - old history of life on earth with a computer , the elementary time steps in `` evolutionary '' models have to correspond to thousands of years , if not millions ; consequently , the finer details of the ecological processes over shorter periods of time cannot be accounted for by these models in any explicit manner .
limitations of these approaches are well known .moreover , most of the recent computer models of ageing focus attention on only one isolated species and , therefore , can not capture macro - evolutionary phenomena like , for example , extinctions which depend crucially on the prey - predator interactions .we wish to develop one single theoretical model which would be able to describe the entire dynamics of an eco - system since the first appearance of life in it up till now and in as much detail as possible .this dream has now come closer to reality , mainly because of the availability of fast computers .it has become feasible now to carry out computer simulations ( in - silico experiments ) of eco - system models where , each time step would correspond to typical times for `` micro''-evolution while each of the simulations is run long enough to capture `` macro''-evolution .the prey - predator relations in any eco - system are usually described graphically in terms of food webs .more precisely , a food web is a directed graph where each node is labelled by a species name and each directed link indicates the direction of flow of nutrient ( i.e. , _ from _ a prey _ to _ one of its predators ) .we incorporate in our model the hierarchical organization of the species at different trophic levels of the food web . in real eco - systems ,the food web is a slowly evolving dynamic network .for example , species are known to change their food habits .these changes in diets may be caused by scarcity of the normal food and abundance of alternative food resources .moreover , higher organisms appear through speciation in an eco - system that initially began with only simple forms of life .these not only occupy new trophic levels but also introduce new prey - predator interactions with the existing species .therefore , it is also desirable that these _ self - organizing _ features of natural eco - systems should be reproduced , at least qualitatively , by the theoretical models . the aim of this paper is to propose a model that would capture the desirable features of eco - systems outlined above .higgs , mckane and collaborators have developed a model , called the webworld model , which was aimed at linking the ecological modeling of food web architecture with the evolutionary modeling of speciation and extinction .the spirit of our model is very similar although the details of the mathematical formulation of the two models are quite different .we model the eco - sytem as a dynamic _ hierarchical _ network .the `` micro''-evolution , i.e. , the birth , growth ( ageing ) and natural death of the individual organisms , in our model is captured by the intra - node dynamics .the macro-evolution , e.g. 
, adaptive co - evolution of the species , is incorporated in the same model through a slower evolution of the network itself over longer time scales .moreover , as the model eco - system evolves with time , extinction of species is indicated by vanishing of the corresponding population ; thus , the number of species and the trophic levels in the model eco - system can fluctuate with time .furthermore , the natural process of speciation is implemented by allowing re - occupation of vacant nodes by mutated versions of non - extinct species .each node of the network represents a niche that can be occupied by at most one species at a time .the number of nodes in the trophic level is where is a positive integer .we assume only one single species at the highest level .the allowed range of is , where is a time - dependent number in our new model . in other words , in contrast to all cited earlier models , the numerical value of in our new model is not put in by hand , but is an _emergent property _ of the eco - system .the prey - predator interaction between two species that occupy the nodes and at two adjacent trophic levels is represented by ; the three possible values of are and .the sign of indicates the direction of trophic flow , i.e. _ from the lower to the higher _ level . is if eats and it is if eats . if there is no prey - predator relation between the two species and , we must have .although there is no direct interaction between species at the same trophic level in our model , they can compete , albeit indirectly , with each other for the same food resources available in the form of prey at the next lower trophic level .we now argue that the elements of the matrix account not only for the _ inter_-species interactions but also for the _ intra_-species interactions arising from the competition of individual organisms for the same food resources .let be the number of all prey individuals for species on the lower trophic level , and be times the number of all predator individuals on the higher trophic level .since we assume that a predator eats prey per time interval , gives the amount of total food available for species , and is the total contribution of species to the pool of food required for all the predators on the higher level .if the available food is less than the requirement , then some organisms of the species will die of _ starvation _ , even if none of them is killed by any predator . the _ intra_-species competition among the organisms of the same species for limited availability of resources , other than food , imposes an upper limit of the allowed population of each species ; is time - independent parameter in the model .thus , the total number of organisms at time is given by .if is larger than then food shortage will be the dominant cause of premature death of a fraction of the existing population of the species .on the other hand , if , then a fraction of the existing population will be wiped out primarily by the predators . in order to capture the _ starvation deaths and killing by the predators _ , in addition to the natural death due to ageing , a reduction of the population by is implemented at every time step , where is the population of the species that survives after the natural death . 
is a constant of proportionality .if this leads to , species becomes extinct .we assume that the simplest species occupying the lowest trophic level always get enough resources that neither natural death nor predators can affect their population .an arbitrary species is _ collectively _ characterized by :+ ( i ) the _ minimum reproduction age _ , + ( ii ) the _ birth rate _ , + ( iii ) the _ maximum possible age _ that depends only on the trophic level occupied by the species .+ an individual of the species can reproduce only after attaining the age . whenever an organism of this species gives birth to offsprings , of theseare born simultaneously .none of the individuals of this species can live longer than , even if an individual manages to escape its predators or starvation . with probability per unit time ,each of the species randomly increases or decreases , with equal probability , their and by unity .( is restricted to remain in the interval from to , and . ) moreover , with the same probability per unit time , they also re - adjust one of the links from prey and one of the links to predators .if the link to the species from a _ higher _ level species is non - zero , it is assigned a new value of .on the other hand , if the link to a species from a _ lower _ species is zero , the new values assigned are .these re - adjustments of the incoming and outgoing ( in the sense of nutrient flow ) interactions are intended to capture the facts that each species tries to minimize predators but look for new food resources .the niches ( nodes ) left empty because of extinction are re - filled by new species , with probability per unit time .all the simultaneously re - filled nodes in a trophic level of the network originate from _ one common ancestor _ which is picked up randomly from among the non - extinct species at the same trophic level .all the interactions of the new species are identical to those of their common ancestor .the characteristic parameters , of each of the new species differ randomly by from the corresponding parameters for their ancestor .however , occasionally , all the niches at a level may lie vacant . under such circumstances ,all these vacant nodes are to be filled by a mutant of the non - extinct species occupying the closest _ lower _ populated level .as stated above , the lowest level , that is populated by the simplest species , never goes extinct ; the possible ageing of the species at the lowest level is not relevant here .all the individual organisms of the new species are assumed to be newborn babies that begin ageing with time just like the other species . since spacedoes not enter explicitly in our model , it does not distinguish between sympatric and allopatric speciation . 
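To make the model's bookkeeping concrete, the following sketch sets up the hierarchical niche structure and the prey-predator interaction matrix described above: the level with index l carries m^l niches (a single niche at the top), each niche holds at most one species, the interaction entries take values in {-1, 0, +1}, and each species carries its minimum reproduction age, birth rate and level-dependent maximum age. All numerical values are placeholders, the sign convention of the interaction matrix is chosen arbitrarily for illustration, and the full dynamics of the paper is omitted.

```python
# Skeleton of the hierarchical food web: niches per level, species
# attributes, and the prey-predator interaction matrix J with entries
# in {-1, 0, +1}. All numerical values are illustrative placeholders.
import random
from dataclasses import dataclass, field

M = 2                               # branching: level l has M**l niches
MAX_AGE_PER_LEVEL = [100, 50, 30]   # placeholder maximum ages, top level first
X_MAX = 100                         # maximum population per species

@dataclass
class Species:
    level: int
    min_repro_age: int                          # minimum reproduction age
    birth_rate: int                             # offspring per birth event
    max_age: int                                # set by the trophic level
    ages: list = field(default_factory=list)    # one entry per organism

    @property
    def population(self):
        return len(self.ages)

def build_empty_food_web(n_levels):
    """niches[l][k] holds a Species or None; level 0 is the single top niche."""
    return [[None for _ in range(M ** l)] for l in range(n_levels)]

def random_interactions(niches):
    """J[(l, i, l + 1, j)] in {-1, 0, +1} for niches on adjacent levels;
    the sign convention (who eats whom) is chosen here only for illustration."""
    J = {}
    for l in range(len(niches) - 1):
        for i in range(len(niches[l])):
            for j in range(len(niches[l + 1])):
                J[(l, i, l + 1, j)] = random.choice((-1, 0, 1))
    return J

niches = build_empty_food_web(n_levels=3)
# Seed the lowest level with simple species and put one species at the top.
for k in range(len(niches[-1])):
    niches[-1][k] = Species(level=2, min_repro_age=1, birth_rate=2,
                            max_age=MAX_AGE_PER_LEVEL[-1], ages=[0] * 10)
niches[0][0] = Species(level=0, min_repro_age=8, birth_rate=1,
                       max_age=MAX_AGE_PER_LEVEL[0], ages=[0] * 3)
J = random_interactions(niches)

print("niches per level:", [len(level) for level in niches])
print("top-level species population:", niches[0][0].population)
print("non-zero prey-predator links:", sum(1 for v in J.values() if v != 0))
```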
in order to understand why the total number of trophic levels in food webs usually lie between and , we allowed adding a new trophic level to the food web , with a small probability per unit time , provided the total bio - mass distributed over all the levels ( including the new one ) does not exceed the total bio - mass available in the eco - system .this step is motivated by the fact that real ecosystems can exhibit growing bio - diversity over sufficiently long period of time .increase of the number trophic level means the diversification at the erstwhile topmost level as well as all the lower levels and the emergence of yet another dominating species that occupies the new highest level .the total number of levels , which determines the lengths of the food chains , depends on several factors , including the available bio - mass . at each time step , each individual organism of the species gives birth _ asexually _ to offsprings with a probability .we also assume the _ time - dependent _ probability is a product of two factors .one of these two factors decreases linearly with age , from unity , attainable at the minimum reproduction age , to zero at the maximum lifespan .the other factor is a standard verhulst factor which takes into account the fact that the eco - system can support only a maximum of individual organisms of each species .thus , is equal to the verhulst factor at .each individual organism , irrespective of its age , can meet its natural death .however , the probability of this natural death depends on the age of the individual . in order to mimic age - independent constant mortality rate in childhood, we assume the probability of `` natural '' death ( due to ageing ) to be a constant ] .note that , for a given and , the larger is the the higher is the for any age .therefore , in order maximize reproductive success , each species has a tendency to increase for giving birth to larger number of offsprings whereas the higher mortality for higher opposes this tendency .however , even with a constant we found qualitatively similar results .the state of the system is updated in discrete time steps where each step consists of a sequence of six stages : + _ i- birth _ _ ii- natural death _ _ iii- mutation _ _iv- starvation death and killing by prey __ v- speciation_ _ vi- emergence of new trophic level _ in all our simulations we began with random initial condition , except for for all species , mostly with only _ three _ levels in the food web , andlet the eco - system evolve for time steps before we started collecting ecological and evolutionary data from it ; these data were collected for the subsequent time steps where the longest runs were for .we have not observed any qualitative differences in the data for and , keeping all the other parameters same .most of our simulations were carried out with , as we did not observe qualitative differences between the data for and in test runs .the maximum lifespans in the levels were assumed to be starting from the highest level .several theories , based on extremely simple models , claim that the distribution of the lifetimes of the species should follow a power law with a slope of on the log - log plot .the distributions of the lifetimes of the species in our model are shown in fig.[fig-1 ] for a few different sets of parameter values .although our data do not rule out an approximate power law over limited regime of lifetimes , one single power law over the entire range of lifetimes seems impossible . 
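A minimal sketch of one time step of the six-stage update sequence listed above, with the birth probability written as the product of a factor that decreases linearly with age between the minimum reproduction age and the maximum lifespan and a Verhulst factor. The age dependence of the natural-death probability is replaced by a simple ramp because the paper's exact form is not reproduced here, the remaining stages are stubs, and all rates are placeholders.

```python
# One illustrative time step of the six-stage update (birth, natural death,
# mutation, starvation/predation, speciation, new trophic level).
# Only birth and natural death are fleshed out; all rates are placeholders.
import random

X_MAX = 100          # maximum population per species (Verhulst limit)

def birth_probability(age, min_repro_age, max_age, population):
    """Product of a linearly decreasing age factor and a Verhulst factor."""
    if age < min_repro_age or age >= max_age:
        return 0.0
    age_factor = 1.0 - (age - min_repro_age) / (max_age - min_repro_age)
    verhulst = 1.0 - population / X_MAX
    return max(age_factor * verhulst, 0.0)

def natural_death_probability(age, min_repro_age, max_age):
    """Constant 'childhood' mortality below the reproduction age, then a
    simple linear ramp standing in for the paper's age-dependent form."""
    q_child = 0.05
    if age < min_repro_age:
        return q_child
    return min(1.0, q_child + (age - min_repro_age) / (max_age - min_repro_age))

def time_step(species):
    # stage i: birth
    newborns = 0
    for age in species["ages"]:
        p = birth_probability(age, species["min_repro_age"],
                              species["max_age"], len(species["ages"]))
        if random.random() < p:
            newborns += species["birth_rate"]
    # stage ii: natural death (and ageing by one step)
    survivors = [a + 1 for a in species["ages"]
                 if random.random() >= natural_death_probability(
                     a, species["min_repro_age"], species["max_age"])
                 and a + 1 <= species["max_age"]]
    species["ages"] = survivors + [0] * newborns
    # stages iii-vi (mutation, starvation/predation, speciation, new level): stubs
    return species

sp = {"ages": [0, 3, 9, 12, 20], "min_repro_age": 8,
      "birth_rate": 2, "max_age": 50}
for t in range(5):
    sp = time_step(sp)
    print(f"t={t + 1}: population={len(sp['ages'])}")
```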
in fig.[fig-2 ] we plot the distributions of the minimum reproductive age of the species for several different sets of values of the model parameters .although over relatively short time scales of observation this distribution appears quite broad it narrows down with evolution and the non - zero values of this distribution correspond to reasonable values of age . due to the randomness in the evolutionary process , occassionally, all of the niches in a level ( except the lowest one ) may lie vacant .we have monitored and also , the number of those levels at time in which at least one niche is occupied by a non - extinct species . in fig.[fig-3 ] we plot as a function of time for one single run .this clearly shows how , over geological time scales , reaches . in this run , the sixth level ( the highest one )emerges after time steps .it also demonstrates that at all stages of evolution , the number keeps fluctuating . during the very late stages, keeps fluctuating between and , although is more often than for all times beyond .the ratio of occurrences of _ six _ levels and _ five _ levels in the eco - system stabilized only after time steps .we have also computed the distributions ( histograms ) of by averaging the data over large number of runs . as shown in fig.[fig-4 ] , the distribution becomes narrower for longer runs and the trend indicates than in the extreme long time limit would be sharply peaked around one single value of , as indicated by the fig.[fig-3 ] .in this paper we have introduced a theoretical model of eco - systems with a generic hierarchical trophic level structure . because of data collection at suffciently short intervals ,we have been able to monitor the ecological phenomena like , for example , birth , ageing and death of individual organisms and , hence , the population dynamics of the species .we have also been able to run our simulations upto sufficiently long times ( , with stationarity achieved at around ) so that the model also accounted for macro - evolutionary phenomena like extinctions of species as well as speciation that leads not only to emergence of species at the existing levels of complexity but also to higher species that occupy an altogether new trophic level in the food web . from the infinite _ possible _ life forms , we start with one or a few , and then let our ecosystem grow in diversity and complexity until the limitations of biomass restrict it to hundreds of individuals in dozens of species , organized into about five trophic levels . although our model food web is hierarchical , it is not a tree - like structure .the hierarchical architecture helps us in capturing a well known fact that in the normal ecosystems the higher is the trophic level the fewer are the number of species .it is well known that the body size and abundance of a species are strongly correlated to their positions as well as to their interactions with other species in the food web . if we neglect parasites and herbivorous insects on trees , then , in general , predators are fewer in number and bigger in size as compared to their prey species .this is very naturally incorporated in the hierarchical food web structure of our model .let us assume that in the model the body size of individual organisms on each level is about times smaller than that on its predator level . 
on the other hand ,the maximum possible populations of organisms , including all the nodes , in a level is times that at the level .consequently , the maximum amount of biomass on each level is , approximately , the same .since each individual organism appears explicitly in our model , one could , at least in principle , assign a genome to each individual and describe darwinian selection which takes place at the level of organisms .unfortunately , additional ad - hoc assumptions would be required to relate the genome with the reproductive success . instead of introducing an ad - hoc mathematical formula to relate genotype with phenotype , we have worked directly with phenotype , particularly , quantities that decide the reproductive success of the organisms ; these quantities are , and . from the perspective of self - organization ,the new model surpasses all cited previous models as not only the characteristic collective properties of the species but even the nature of inter - species interactions as well as the total number of trophic levels in the food web are determined by self - organization of the eco - system .p. bak and k. sneppen , _ phys .lett . _ * 71 * , 4083 ( 1993 ) ; m. paczuski , s. maslov and p. bak , phys .e * 53 * , 414 ( 1996 ) .sol , s.c.manrubia , m.benton , and p.bak , _ nature _ * 388 * , 764 ( 1997 ) .+ p. bak and s. boettcher , physica d * 107 * , 143 ( 1997 ) .r. v. sol , s. c. manrubia , m. benton , s. kauffman and p. bak , _ trends in ecology and evolution _ , * 14 * , 156 ( 1999 ) .+ j. maynard - smith , _ the theory of evolution _ , ( cambridge university press , 1993 ) ; j. maynard - smith , _ shaping life : genes , embryos and evolution _ ( weidenfeld and nicolson , orion publishing group , london , 1998 ) .+ e. mayr , _ what evolution is _ , ( basic books , 2001 ) .+ s. b. carroll , _ nature _ * 409 * , 1102 ( 2001 ) .+ b. drossel , _ adv .phys . _ * 50 * , 209 ( 2001 ) .+ m. e. j. newman , and r. g. palmer , _ modeling extinction _ , ( oxford university press , 2002 ) .+ j. n. thompson , _ trends in ecol ._ * 13 * , 329 ( 1998 ) .+ g. f. fussmann , s. p. ellner , and n. g. hairston jr , _ proc .b _ * 270 * , 1015 ( 2003 ) .+ a. i. khibnik and a. s. kondrashov , _ proc .soc . _ * 264 * , 1049 ( 1997 ) .+ n. s. goel , s. c. maitra and e. w. montroll , _ rev ._ * 43 * , 231 ( 1971 ) .j. hofbauer , and k. sigmund , k. _ the theory of evolution and dynamical systems _ ( cambridge university press , 1988 ) .+ j. d. murray , _ mathematical biology _( springer , 1989 ) .+ s. kauffman , _ the origins of order : self - organization and selection in evolution _( oxford university press , 1993 ) .+ g. caldarelli , p.g .higgs and a.j .mckane , j. theor .* 193 * , 345 ( 1998 ) .+ b. drossel , p.g .higgs and a.j .mckane , j. theor .biol . * 208 * , 91 ( 2001 ) .+ c. quince , p.g .higgs and a. j. mckane , in : _ biological evolution and statistical physics _, eds . lssig .m. and valleriani , a. ( eds ) ( springer , 2002 ) , p.281 - 298 .+ d. stauffer , in : _ biological evolution and statistical physicslssig , m. and valleriani , a. ( springer , 2002 ) , p.255 - 267. + m. hall , k. christensen , s. a. di collobiano and h. j. jensen , _ phys .e _ * 66 * , 011904 ( 2002 ) .+ s. a. di collobiano , k. christensen and h. j. jensen , _ j. phys . a _ * 36 * , 883 ( 2003 ). + d. chowdhury , d. stauffer and a. kunwar , _ phys .. lett . _ * 90 * , 068101 ( 2003 ) .+ d. chowdhury and d. stauffer , phys .e * 68 * , 041901 ( 2003 ) .+ p. a. rikvold and r. k. p. 
zia , 2003 punctuated equilibria and 1/f noise in a biological coevolution model with individual - based dynamics , nlin.ao/0306023 .+ a. aszkiewicz , sz .szymczak and s. cebrat , _ int .c _ * 14 * , ( 2003 ) .+ s. l. pimm , _ food webs _ ( chapman and hall , london , 1982 ) .+ g. a. polis and k. o. winemiller , ( eds . ) _ food webs : integration of patterns and dynamics _ ( chapman and hall , new york , 1996 ) .+ b. drossel and a. j. mckane , 2003 modelling food webs , p. 218in : _ handbook of graphs and networks - from the genome to the internet _ , eds . s. bornholdt , and h. g. schuster , ( wiley - vch , weinheim , 2003 ) + j. e. cohen , t. luczak , c. m. newman and z. -m .zhou , _ proc .b _ * 240 * , 607 ( 1990 ) .+ j. n. thompson , _ science _ * 284 * , 2116 ( 1999 ) . + r. v. sol and s. c. manrubia , phys .e * 54 * , r42 ( 1996 ) ; _ phys .e _ * 55 * , 4500 ( 1997 ) ; see also r. v. sol , in : _ statistical mechanics of biocomplexity _ , lec .notes in phys .( springer , 1999 ) . + m. r. orr and t. b. smith , _ trends ecol .* 13 * , 502 ( 1998 ) .+ m. ackermann , s. c. stearns and u. jenal , _ science _ * 300 * , 1920 ( 2003 ) .+ u. dieckmann and m. doebeli , _ nature _ * 400 * , 354 ( 1999 ) ; m. doebeli and u. dieckmann , _ am .* 156 * , s77-s101 ( 2000 ) .see also the other articles on speciation in the same issue .+ d. m. post , 2002 _ trends ecol .* 17 * , 269 ( 2002 ) .+ c. k. ghalambor and t. e. martin , _ science _ * 292 * , 494 ( 2001 ) .+ j. e. cohen , s. l. pimm , p. yodzis and j. saldana , _ j. animal ecology _ , * 62 * , 67 ( 1993 ) .+ j. e. cohen , t. jonsson and s. r. carpenter , 2003 _ proc .usa _ * 100 * , 1781 ( 2003 ) .+ g. c. williams , _ natural selection : domains , levels and challenges _( oxford university press , 1992 ) .+ e. mayr , _ proc .usa _ * 94 * , 2091 ( 1997 ) .
we propose a generic model of eco - systems with a _ hierarchical _ food web structure . in our computer simulations we let the eco - system evolve continuously for so long that we can monitor extinctions as well as speciations over geological time scales . _ speciation _ leads not only to horizontal diversification of species at any given trophic level but also to vertical bio - diversity that accounts for the emergence of complex species from simpler forms of life . we find that five or six trophic levels appear as the eco - system evolves for a sufficiently long time , starting initially from just one single level . moreover , the time intervals between the successive collections of ecological data are so short that we could also study `` micro''-evolution of the eco - system , i.e. , the birth , ageing and death of individual organisms .
atomic - scale simulations have the capability to predict the properties of defect structures that are often inaccessible by experimental techniques. these predictions require accurate and efficient calculations of energies and forces on atoms in arrangements that sample a variety of atomic environments , and may represent even different binding configurations .accurate quantum mechanical methods are difficult to scale to large systems and long simulation times , while empirical interatomic potentials offer increased computational efficiency at a lower level of accuracy .maximizing the efficiency of computational material science studies requires the development of potentials that are transferrable , i.e. , capable of predicting properties outside their fitting range , and accurate for static and dynamic calculations. however , without direct transferable derivations of interatomic potentials from quantum mechanical methods , empirical interatomic potentials require high - dimensional non - linear fitting .many functional forms for empirical potentials have been proposed , including embedded - atom method ( eam) , modified embedded - atom method ( meam) and charged - optimized many - body potential ( comb). there have been multiple implementations of different potential functional forms for various materials. even for the same type of materials , such as cu and si , different empirical interatomic potential models are proposed for different applications with different transferabilities .there are advanced techniques to optimize the potential parameters based on a weighted least - squares regression to a fitting database of experimental or quantum mechanical calculation data, including the force - matching method for empirical interatomic potential parameter optimization . in force - matching ,a fitting database includes quantum mechanical force calculations for diverse atomic environments to obtain realistic empirical potential models . to study the transferability of the empirical potential model , frederiksen _ et al ._ applied bayesian statistics to empirical interatomic potential models : instead of using the best fit , an ensemble of neighboring parameter sets reveal the flexibility of the model. they showed that the standard deviation of the potential prediction of structure property function is a good estimate of the true error . however , even with these advances , the determination of empirical interatomic potentials relies on the selection and weighting of a fitting database without a clear , quantitative guide for the impact on predictions . to address the issue of fitting database selection , we present an automated , quantitative fitting - database optimization algorithm based on prediction errors for a testing set using bayesian statistics . 
we construct an objective function of the prediction errors in the testing set to optimize the relative weights of a fitting database .this includes the addition or removal of structures to a fitting database when weights change sign .we demonstrate the viability of the optimization algorithm with a simple interatomic potential model : lennard - jones potential fitting of ti crystal structures .we choose this example as a radial potential has difficulty describing the stability of different crystal structures of a transition metal .the new algorithm also helps to understand the transferability of the empirical potential model for the structures in the testing set .we start with a brief review of empirical potential models and parameter optimization using a fitting database .next , we discuss bayesian error estimation as it applies to our problem .then we define an objective function with a testing set , and use this quantitative measure to devise an algorithm to optimize a fitting database .lastly , we demonstrate this new approach on an example system with clear limitations : lennard - jones potential for titanium .the total energy and forces for the structure of interest are the most basic quantities to calculate since they determine the structural properties . in particular , we are interested in predictions that are derived from energies of atomic arrangements . in atomic - scale simulations ,a structure is a set of atomic positions with chemical identities : .the total energy of a structure is with forces .density - functional theory ( dft) calculations can provide accurate structural energies and forces , but their computational cost limits them to simulation system with at most a few thousand atoms . other structural propertiesare derived from energies and forces , and so without loss of generality , we develop our approach based on energies and forces .parameterized empirical interatomic potentials offer a computationally efficient alternative to dft .potentials provide approximate energies and forces for atomic configurations that are inaccessibly large for dft calculations .generally , an empirical interatomic potential functional can be written as where are parameters , and is an interatomic potential function between atoms of chemical identity .symmetries of the potential functional form , such as permutation symmetry , translational symmetry , rotational symmetry , etc ., can simplify the functional form . 
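Since an empirical potential is, operationally, a parameterized function that returns a total energy and forces for an atomic configuration, the sketch below implements a generic pair-potential evaluator with forces obtained by central finite differences. The Morse-like pair function, its parameters and the toy configuration are placeholders, not the forms or values used in the paper.

```python
# Generic parameterized empirical potential: total energy of a configuration
# and forces via central finite differences. The Morse-like pair function,
# its parameters, and the toy configuration are placeholders.
import numpy as np

def pair_energy(r, theta):
    """Example two-body term V2(r; theta); any parameterized form could be used."""
    d, a, r0 = theta
    return d * (np.exp(-2 * a * (r - r0)) - 2 * np.exp(-a * (r - r0)))

def total_energy(positions, theta):
    """E(X; theta) = sum over pairs of V2(|x_i - x_j|; theta)."""
    energy = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            energy += pair_energy(r, theta)
    return energy

def forces(positions, theta, h=1e-5):
    """F_i = -dE/dx_i, estimated by central differences."""
    f = np.zeros_like(positions)
    for i in range(len(positions)):
        for k in range(3):
            shifted = positions.copy()
            shifted[i, k] += h
            e_plus = total_energy(shifted, theta)
            shifted[i, k] -= 2 * h
            e_minus = total_energy(shifted, theta)
            f[i, k] = -(e_plus - e_minus) / (2 * h)
    return f

theta = (1.0, 1.5, 2.5)                      # placeholder parameters
positions = np.array([[0.0, 0.0, 0.0],
                      [2.4, 0.0, 0.0],
                      [0.0, 2.6, 0.0]])
print("E =", total_energy(positions, theta))
print("forces =\n", forces(positions, theta))
```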
a general empirical interatomic potential that reproduces all dft energy calculationsaccurately is computational intractable , since it would require a large number of many - body terms .rather , we are interested in simpler potentials that provide accurate results for a smaller range of atomic configurations including perfect crystals and defect structures under various thermodynamic conditions ; this includes potentials that may not be easily written in the form of eqn .( [ eqn : pot ] ) , such as eam and meam potentials .the optimal potential parameters derive from comparison to predictions of a database of dft calculations , and the performance of the potential is evaluated by a testing set of structure property predictions .a fitting database is a set of structure property functions with an associated structure and positive ( relative ) weight .while a single structure will often have multiple structure property functions with unique weights for fitting , we simplify our notation by indexing on the structure ; in what follows , sums over structures indicate sums over all members of the database .the structure property function may be a scalar such as the energy ( relative to a reference structure ) , vectors such as forces on the atom of the structures , stress tensors and more complicated structure property functions such as lattice constant , bulk modulus or vacancy formation energy or anything that can be defined from the energy . in the fitting database, we will compare the structure property functions evaluated using an empirical potential , with the values from dft , though other choices are possible , such as experimental data . in the weighted least - squares ( described later ), we impose the trivial constraint , as only relative values of are important .a testing set is a set of structures with structure property functions . in a testing set, we will compare structure property functions evaluated using an empirical potential with _ either _ values from dft , or using bayesian sampling of the empirical potential , following frederiksen _et al._, which we will discuss in section [ sec : bayes ] .there are no relative weights for structures in a testing set ; rather , these represent a set of predictions whose errors we will evaluate . in order to assess the prediction errors of the structure property functions , we define the prediction error function as where is the structure property function from the empirical atomic potential with parameters , is the structure property function from dft , and denotes the 2-norm of a -dimensional vector we will take the error evaluation of the energy differences between two structures and forces as examples . 
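The error quantities introduced here, and written out explicitly just below, can be coded compactly: an energy-difference error relative to a reference structure, a force error per structure, and a weighted sum of squared errors over the fitting database. The per-term normalization (root-mean-square over force components, weights normalized to one) is an assumed convention for illustration, and the reference values are placeholders.

```python
# Sketch of prediction-error functions and the weighted squared-error cost
# C(theta; w) over a fitting database. The per-term normalization is an
# assumed convention; the DFT reference values below are placeholders.
import numpy as np

def energy_error(e_model, e_model_ref, e_dft, e_dft_ref):
    """Error of the energy difference to a reference structure."""
    return (e_model - e_model_ref) - (e_dft - e_dft_ref)

def force_error(f_model, f_dft):
    """RMS difference of the 3N force components of one structure."""
    diff = np.asarray(f_model) - np.asarray(f_dft)
    return np.sqrt(np.mean(diff ** 2))

def weighted_cost(errors, weights):
    """C = sum_alpha w_alpha * epsilon_alpha^2, with weights normalized to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    eps = np.asarray(errors, dtype=float)
    return float(np.sum(w * eps ** 2))

# Placeholder database: two energy-difference errors and one force error.
errors = [
    energy_error(e_model=-3.95, e_model_ref=-4.10, e_dft=-3.90, e_dft_ref=-4.10),
    energy_error(e_model=-3.70, e_model_ref=-4.10, e_dft=-3.78, e_dft_ref=-4.10),
    force_error([[0.1, 0.0, 0.0], [-0.1, 0.0, 0.0]],
                [[0.12, 0.0, 0.0], [-0.12, 0.0, 0.0]]),
]
weights = [1.0, 1.0, 0.5]
print("C(theta; w) =", weighted_cost(errors, weights))
```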
for energy calculations , the structure property function is where is the energy of a reference structure , .the potential energy prediction error is the force predictions errors are then , the weighted summed squared error function for a fitting database is introduce bayesian sampling to estimate the errors of structure property function predictions and quantitatively analyse the relative weight values in the fitting database .given a fitting database , we calculate the prediction of structure property function and the error of the structure property function .we then derive the analytical expression of the gradient of the bayesian errors with respect to the weights , .these gradients provide quantitative information on how structure property functions in the fitting database influence the bayesian predictions of structure property functions in the testing set though weight change .bayesian statistics treats model parameters as random variables with a probability distribution given by a posterior distribution. according to the bayes theorem , the posterior distribution of the parameters is a product of the prior distribution and the likelihood function , where the prior distribution includes the information about the potential model before the we take the fitting data into account .here we use the maximally unbiased prior distribution of a uniform distribution over a measurable set of allowed parameters sets , ^{-1},\ ] ] though other choices are possible . assuming the errors are independent and identically normally distributed , the likelihood function is where log - likelihood function is proportional to the squared error function , . since the logarithm is a monotonically increasing function , minimizing is equivalent to maximizing the log - likelihood function .the maximum likelihood estimate ( mle ) of the parameters is a function of the fitting database , and .the bayesian prediction of a function is the mean all the averages are implicit functions of the relative weights in the fitting database .the bayesian error is the mean squared error of the bayesian prediction : where is the variance of the bayesian prediction .the covariance of two functions and represents the correlation between two functions : the derivative of a bayesian prediction with respect to weight is note that the derivative of with respect to weight is found using the chain rule , as is an extremum of . applying eqn .( [ eqn : deriv1])eqn .( [ eqn : deriv3 ] ) to the bayesian error yields \\ & = -\frac{1}{w}\left [ c^f_{\alpha\beta}-\frac{\epsilon_{\alpha}^2({\theta^\text{mle}})}{w}\sum_\gamma w_\gamma c^f_{\beta\gamma}\right ] , \end{split } \label{eqn : dedw}\ ] ] where we define the error of a structure property function in a testing set without dft calculations by approximating the unknown dft calculations of the structure property function with its bayesian prediction . based on eqn .( [ eqn : error ] ) , is approximated by , and so a testing set can include structures _ in the absense of _ dft calculations .we need to evaluate the integral in eqn .( [ eqn : mean ] ) to calculate the bayesian predictions and bayesian error estimation . 
for complicated high - dimensional, non - linear models such as empirical potentials , the integral can not be evaluated in closed form , and the high - dimensionality makes direct numerical quadrature converge slowly .we instead use markov chain monte carlo ( mcmc ) to numerically integrate .the chain of will contain a set of independent samples ( where is the autocorrelation length ) , and the numerical estimate of the mean is with a sampling error of .hence , once fitting is complete , the `` best '' set of parameters defines the empirical potential for predictions , while the ensemble of parameters allows the estimation of errors on those predictions .we define an optimal fitting database based on bayesian errors in the testing set .an empirical potential model should reproduce dft calculations for a set of atomic environments described by structures in a testing set , and so the bayesian errors of structure property functions in the testing set are the quantities of interest . because different types of structure property functions have different units , different error magnitudes , and different degrees of freedom , we need an unbiased choice of objective function to evaluate different fitting database performances based on the bayesian errors for the same testing set . here, we consider the difference of the logarithm of the bayesian errors for one structure property function for two different fitting databases , and if is small , then , and then the right side of the equation is a relative difference in errors .we propose the objective function of a fitting database with testing set , so that is approximately the sum of relative differences in error .then , the optimal fitting database minimizes the sum log bayesian errors for a testing set .the objective function is implicitly dependent on the relative weights in the fitting database through the bayesian error .the gradient of the objective function with respect to weight is analytically calculable ( c.f .( [ eqn : dedw ] ) ) .we obtain the optimal weights in the fitting database by minimizing the objective function .hence we will be able to compare potentials fitted with different fitting databases with respect to the same testing set. however , the minimum of the objective function can be trivial for pathological fitting databases and testing set combinations .a pathological fitting database and testing set combination is an _ underdetermined _ fitting database , where the mle predictions can match the true values of a dft calculation in both the fitting database and testing set .thus , if for any structure , then approaches logarithmically . in order to eliminate the trivial minimum of pathological databases ,we introduce a threshold function , that creates a finite minimum of at .we can choose different error tolerances for each testing set structure property function .the objective function is then and the derivative of the objective function is where the derivative calculations are from eqn .( [ eqn : dedw ] ) and eqn .( [ eqn : derv_thres ] ) .the derivative of the threshold function is finally , note that as our likelihood function is independent of , so the optimal weights are found by minimizing , and this includes the addition and removal of structures from the fitting database . 
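A toy Metropolis sampler over a single potential parameter illustrates how the Bayesian predictions, their variances and hence the testing-set errors entering the objective function can be estimated numerically. The quadratic stand-in for the cost function, the proposal width, the exp(-C) likelihood scaling and the error estimate reported at the end are illustrative assumptions; the closed-form weight gradient of eqn ( [ eqn : dedw ] ) is not reproduced.

```python
# Toy Metropolis sampler over a potential parameter theta, used to estimate
# Bayesian predictions <f>, their variances, and an illustrative testing-set
# error. The quadratic cost and exp(-C) likelihood scaling are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def cost(theta):
    """Stand-in for the weighted squared-error C(theta; w) of the fit."""
    return 4.0 * (theta - 1.3) ** 2

def log_likelihood(theta):
    return -cost(theta)

def metropolis(theta0, n_steps=20000, step=0.2):
    samples = []
    theta, logl = theta0, log_likelihood(theta0)
    for _ in range(n_steps):
        prop = theta + step * rng.normal()
        logl_prop = log_likelihood(prop)
        if np.log(rng.random()) < logl_prop - logl:
            theta, logl = prop, logl_prop
        samples.append(theta)
    return np.array(samples[n_steps // 2:])   # discard burn-in

# Structure property function of a testing-set structure (toy example).
def f_test(theta):
    return 2.0 * theta ** 2

samples = metropolis(theta0=0.0)
f_vals = f_test(samples)
f_mean = f_vals.mean()                 # Bayesian prediction <f>
f_var = f_vals.var()                   # variance of the prediction
theta_mle = samples[np.argmin(cost(samples))]   # best sampled parameters

# When no DFT value is available, the Bayesian prediction stands in for it;
# here we report, as an illustrative error estimate, the prediction variance
# plus the squared deviation of the MLE prediction from the Bayesian mean.
bayes_error_sq = (f_test(theta_mle) - f_mean) ** 2 + f_var
print("MLE theta ~", round(float(theta_mle), 3))
print("<f> =", round(float(f_mean), 3), " var =", round(float(f_var), 4))
print("estimated squared error:", round(float(bayes_error_sq), 4))
```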
according to the definition of the likelihood function , eqn .( [ eqn : logl ] ) , the fitting database could include any structures with dft calculations with a non - negative relative weight value .structures with positive weight values are structures to fit , and all the other structures that do not contribute to the fitting will have a weight of zero. the optimal weight value can be determined _ even for structures not presently in the fitting database_. a structure is added to the fitting database if its optimal weight value is positive , as inclusion of that structure decreases the relative error in the testing set .a structure is removed from the fitting database if its optimal weight value is zero or negative , since removing the structure decreases the relative error in the testing set . , and an initial fitting database , we find the best set of potential parameters , given by the maximum likelihood estimate .we then use markov - chain monte carlo ( mcmc ) to generate an ensemble of independent parameters ; with this ensemble , we estimate the prediction errors and compute the gradient of the objective function .if the gradients are nonzero , we determine optimal weights , as well as consider addition ( ) or removal ( ) of structures from the database , and reenter the loop .once the gradients are zero , we can determine if the testing set errors are acceptable for use ; if not , either the range of transferability is lower indicating a smaller testing set is needed or the potential function requires additional flexibility to increase the transferability , and the algorithm is reentered . ] fig .[ fig : flow_chart ] outlines the new interatomic potential development algorithm based on bayesian statistics .it starts with a conventional interatomic potential fitting procedure by selecting a potential functional form and dft structural energies and forces forming a dft data set .we build the fitting database with a set of structures from the dft data set and assign each structure with a relative weight .the testing set also contains a set of structures from the dft data set that test the ability of the potential to model atomic environment outside of the fitting data .we apply non - linear weighted least squares regression to obtain the mle of parameters of the empirical potential model , and use markov chain monte carlo ( mcmc) sampling of the posterior distribution to generate the ensemble of parameters around the mle .we calculate the mean - squared errors of the testing set structures using the parameter ensemble , and construct the objective function and its gradients .next , we apply a conjugate - gradient method to optimize the objective function and obtain the optimal weights of the fitting database ; we can also determine if structures should be added or removed from the fitting database. this step can take advantage of structural searching methods, for example , to identify candidate structures , though we do not do so here .we repeat the circuit with the modified relative weight set of the fitting database until the optimal weights converge .the testing set is the key component of this approach not only because the objective function consists of the mean squared errors of the testing set structures , but also the empirical potential predictions for structures in the testing set should have small errors whether that is known from comparison with dft calculations or estimated from bayesian sampling _ without _ dft . 
with the relative errors in the testing set minimized , any weight deviation from the optimal will result in an increase in relative errors . this means that while we could choose weights to reduce the error of one or several testing - set structure property function predictions , doing so would worsen the predictions of other structures , and the trade - off is not worthwhile . although we are able to optimize the fitting database of the empirical potential models , an optimal fitting database does not guarantee a reliable empirical potential model . the optimization algorithm provides the best possible empirical interatomic potential for a given fitting database and a given testing set , but it has no judgment on whether the optimal bayesian errors are acceptable ; they can , in fact , be quite large . this can occur if the empirical potential model does not contain the relevant physics to describe the atomic environments in the testing set , which produces reduced transferability . for predictive empirical potential methods , we must then decide whether to improve the potential model itself to increase transferability or to remove structures from the testing set and optimize for reduced transferability . we apply the database optimization algorithm to a simple empirical interatomic potential model , the lennard - jones potential . the lennard - jones potential is a two - parameter pair potential :
\begin{equation}
v_2(r ; r_0 , e_\text{b}) =
\left\{
\begin{array}{ll}
4 e_\text{b} \left[ \left( \frac{r_0}{r} \right)^{12} - \left( \frac{r_0}{r} \right)^{6} \right] & \colon r \le r_\text{cutoff} \\
0 & \colon r > r_\text{cutoff}
\end{array}
\right.
- v_2(r_\text{cutoff} ; r_0 , e_\text{b})
\label{eqn:lj}
\end{equation}
where $e_\text{b}$ is the binding energy of a dimer with a separation of $\sqrt[6]{2}\,r_0$ , and the subtraction of the value at $r_\text{cutoff}$ truncates and shifts the potential at the cutoff radius .
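A direct transcription of the truncated and shifted two-parameter Lennard-Jones pair potential into code, following the reconstructed form above; the parameter values and cutoff are placeholders and are not the values fitted to Ti in the paper.

```python
# Truncated and shifted two-parameter Lennard-Jones pair potential, following
# the reconstructed form above; r0, Eb, and the cutoff are placeholders.
import numpy as np

def lj_unshifted(r, r0, eb):
    return 4.0 * eb * ((r0 / r) ** 12 - (r0 / r) ** 6)

def v2(r, r0, eb, r_cutoff):
    """Pair energy: shifted inside the cutoff so it vanishes at r_cutoff,
    and identically zero beyond the cutoff."""
    r = np.asarray(r, dtype=float)
    shifted = lj_unshifted(r, r0, eb) - lj_unshifted(r_cutoff, r0, eb)
    return np.where(r <= r_cutoff, shifted, 0.0)

r0, eb, r_cutoff = 2.6, 0.2, 6.0      # placeholder values (not fitted to Ti)
r = np.linspace(2.2, 7.0, 9)
print(np.round(v2(r, r0, eb, r_cutoff), 4))
```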
weighted least squares fitting to a database of quantum mechanical calculations can determine the optimal parameters of empirical potential models . while algorithms exist to provide optimal potential parameters for a given fitting database of structures and their structure property functions , and to estimate prediction errors using bayesian sampling , defining an optimal _ fitting database _ based on potential predictions remains elusive . a testing set of structures and their structure property functions provides an empirical measure of potential transferability . here , we propose an objective function for fitting databases based on testing set errors . the objective function allows the optimization of the weights in a fitting database , the assessment of the inclusion or removal of structures in the fitting database , or the comparison of two different fitting databases . to showcase this technique , we consider an example lennard - jones potential for ti , where modeling multiple complicated crystal structures is difficult for a radial pair potential . the algorithm finds different optimal fitting databases , depending on the objective function of potential prediction error for a testing set .
it has been observed that streaming video now accounts for more than half of internet traffic. with the introduction of smartphones, mobile networks are witnessing exponential traffic growth every year. this leads to a scenario where internet and wireless networks are pushed to operate close to their performance limits, dictated by current architectural considerations. though much effort has been expended, and in turn significant progress has been made in recent years, to increase the capacity of mobile networks, there has been little progress on user satisfaction, which is strongly related to the quality of experience (qoe). this is the big challenge that operators face today, because they have to look at both the server side and the client side to make a link between the quality of service (qos) of the network and client satisfaction, which depends on the qoe. empirical studies have identified critical metrics that affect the qoe through user engagement: * starvation probability, denoting the probability that a streaming user sees frozen images. * average bit-rate, denoting the mean video quality over the entire session. * bit-rate stability, describing the jitter of video quality during the session. * start-up delay, denoting the waiting time between the instant the user requests the streaming service and the instant the media player starts to play. it has been pointed out that the buffering ratio is the most critical metric across genres; for example, a 1% increase in buffering reduces viewing time by 3 minutes for a 90-minute live video stream. it has also been shown that the total time spent rebuffering and the frequency of rebuffering events have a substantial impact on qoe. in this context, media servers and network operators face a crucial challenge: how to avoid the degradation of user-perceived media quality as measured by these metrics. however, the development of new models as functions of these metrics can help operators and content publishers to better invest their network and server resources toward optimizing the metrics that really matter for qoe. in this paper we focus on a setting in which a video is streamed over a wireless network, which is subject to constraints such as bandwidth limitation and rate fluctuations due to frequent changes of channel state and to mobility. indeed, time-varying network capacity is especially relevant in wireless networks, where such variations can be caused by fast fading and by slow fading due to shadowing, dynamic interference and changing loads. to address this issue, we focus on performance modelling and analysis of streaming video with long-range-dependent (lrd) traffic and variable service capacity. due to the inherent difficulty and complexity of modelling fractal-like lrd traffic, we assume that the arrival of packets at the player buffer is characterized by a markov modulated fluid model, which accurately approximates traffic exhibiting lrd behaviour and mimics the real behaviour of multimedia traffic with short-term and long-term correlations.
in comparison to related works ,our whole analysis is on transient regime .we construct sets of partial differential equations ( pdes ) to derive the starvation probability generating function using the external environment , which is described by the continuous time markov chain ( ctmc ) .this approach predicts the starvation probability as function of the file size as well as the prefetching threshold .moreover we provide relevant results to understand on how the starvation probabilities are impacted by the variation of traffic load and prefetching threshold .we do simulations to show the accuracy of our model using ns-3 . achieving this goal , we are able to identify through our model the dependencies between quality metrics . for example, start - up delay can reduce rebuffing ratio .similarly bitrate rate switching can reduce buffering . with the results developed in this work , we are able to answer the fundamental questions : how many frames should the media player prefetch to optimize the users quality of experience ? from what file size the adaptive coding is relevant to avoid the starvation ?how bit - rate switching impacts the qoe metrics ?knowing these answers enables the user to maximize his qoe realising the tradeoffs among different metrics incorporating user preferences on rebuffering ratio , start - up delay and quality .we further introduce an optimization problem which takes these key factors in order to achieve the optimal tradeoff between them .we adopt a more flexible method by defining an objective of qoe by associated a weight for each metric based on user preferences .qoe analysis over wireless networks has been studied for many years . in , authors study the qoe in a shared fast - fading channel using an analytical framework based on takacs ballot theorem .they use a gi / d/1 queue to model the system , so they assume that the arrival process is independent and identically distributed ( i.i.d ) . in ,the analysis of buffer starvation using m / m/1 queue is performed .they use a recursive method to compare the results with the ballot theorem method even if the recursive method did not offer explicit results .they assume an i.i.d arrival process that is a rough model of streaming services over the wireless networks .since the performance measures depend on the autocorrelation structure of the traffic , a consensus exists about the limitation of the poisson process to model the traffic behaviour . in , authors develop an analytical framework to investigate the impact of network dynamics on the user perceived video quality , they model the playback buffer by a g / g/1 queue and use the diffusion approximation method to compute the qoe .the qoe of streaming from the perspective of the network flow dynamics is studied in .the throughput of a tagged user is governed by the number of the other users in the network .this study shows that the network flow dynamics is the fundamental reason for playback starvation . +the rest of this paper is organized as follows : in section [ model ] , we describe the system model while section [ qsm ] presents the analysis of the queuing system model . section [ paq ] describes the performance analysis of the quality of experience and section [ 2mmpp ] presents explicit results for two states mmfm .section [ na ] shows numerical results and section [ clc ] concludes this paper .we consider a single user receiving a media file with finite size in streaming .generally , media files are divided into blocks of frames . 
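the markov-modulated arrival model adopted in the next section can be made concrete with a short sketch: the following python code generates a sample path of an arrival rate modulated by a two-state continuous-time markov chain. the two-state chain and all numerical values are illustrative assumptions.

import numpy as np

def ctmc_rate_path(T=60.0, lam=(100.0, 400.0), alpha=0.2, beta=0.5, seed=0):
    """sample path of a 2-state ctmc-modulated arrival rate over [0, T].

    state 0 has rate lam[0] and leaves at intensity alpha;
    state 1 has rate lam[1] and leaves at intensity beta.
    returns a list of (jump time, new state) pairs."""
    rng = np.random.default_rng(seed)
    t, state, path = 0.0, 0, [(0.0, 0)]
    while t < T:
        hold = rng.exponential(1.0 / (alpha if state == 0 else beta))
        t += hold
        state = 1 - state
        path.append((min(t, T), state))
    return path

if __name__ == "__main__":
    for t, s in ctmc_rate_path()[:6]:
        print(f"t = {t:6.2f}  state = {s}")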
when a user makes a request the server segments this media into frames and transfers them to the user through the network ( wired and wireless links ) .when frames traverse the internet , their arrivals are not deterministic due to the dynamics of the available bandwidth .one of the main characteristics of wireless traffic and internet traffic in general is the rate fluctuation caused by fast fading and slow fading due to shadowing , dynamic interference , and changing load .moreover , data packet arrivals in cellular networks are found to be correlated over both short and long - time scales .this is generally due to the arrival of packets bursts of comparable size , often leading to high instantaneous arrival rates .hence , video flows through the internet with fluctuating speed . in this paper, we assume that frames arrive to the play - out buffer with a rate that can take values from finite set .the rate of arrival frames is governed by a continuous - time markov chain ( ctmc ) with infinitesimal generator . where .the maximum buffer size is assumed to be large enough so that the whole file can be stored . at the user side ,incoming frames are stocked in a buffer and from there they are played with a rate ( e.g. , 25 frames per second ( fps ) ) in the tv and movie - making business .we quantify the user perceived media quality using two measures called start - up delay and starvation .there is ongoing research on mapping these two measures on standard human evaluated qoe measures .as explained earlier , the media player wants to avoid the starvation by prefetching packets .however , this action might incur a long waiting time . in what follows, we reveal the relationship between the start - up delay and the starvation behaviour , with the consideration of the file size .we consider a fluid model that has been proven to be a powerful modeling paradigm in many applications and relevant to capture the key characteristics that determine the performance of networks .let denote the effective input rate in state .hence the matrix of the effective rates is , which is a diagonal matrix .we denote by the length of playout buffer of playback at time t. let be the first time the buffer is empty before reaching the end of the file , i.e. , and be the start - up delay where is the prefetching threshold . in the next section we provide mathematical analysis to compute the distribution of the number of starvation and start - up delay for a general bursty arrival process .we compute the laplace transform of the probability of starvation given the continuous - time markov chain . we define be the probability of starvation in state before time , given the initial state and the initial queue length . for and .it is clear that the ctmc can not be in a state at time if . hence let be the steady state probability vector of the ctmc where is the probability to be in the state at the stationary regime .the expected input and output rates are and respectively .the buffer is stable if .conditioning on the first transition from the state at time we have taking the limit and after some algebraic simplification we obtain the following partial differential equation with the initial conditions where ] . taking the lst of equation ( [ pde ] ) and using the fact that for all , we find for a fixed value of , we take as a solution to equation ( [ ode ] ) . 
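the starvation probability defined above can also be estimated by direct monte-carlo simulation of the fluid queue, which is useful as a cross-check of the transform-based solution derived next. the sketch below does this for a two-state chain; the playback rate, arrival rates and transition intensities are illustrative values only.

import numpy as np

def starvation_prob(t_max, x0, rates, play_rate, alpha, beta, n_runs=20000, seed=1):
    """monte-carlo estimate of the probability that the fluid playout buffer
    empties (starves) before time t_max, for a two-state ctmc-modulated arrival rate.

    rates      : (r0, r1) arrival rates in the two ctmc states (frames/s)
    play_rate  : constant playback drain rate (frames/s)
    alpha,beta : transition intensities out of state 0 and state 1
    x0         : initial buffer content (frames)."""
    rng = np.random.default_rng(seed)
    starved = 0
    for _ in range(n_runs):
        t, x, s = 0.0, float(x0), 0
        while t < t_max:
            hold = rng.exponential(1.0 / (alpha if s == 0 else beta))
            dt = min(hold, t_max - t)
            drift = rates[s] - play_rate
            if drift < 0 and x + drift * dt <= 0.0:   # buffer empties during this sojourn
                starved += 1
                break
            x += drift * dt
            t += dt
            s = 1 - s
    return starved / n_runs

if __name__ == "__main__":
    # illustrative numbers: playback at 25 fps, arrival rates of 15 and 40 fps
    p = starvation_prob(t_max=120.0, x0=40, rates=(15.0, 40.0), play_rate=25.0,
                        alpha=0.1, beta=0.1)
    print("estimated starvation probability:", p)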
substituting this form in ([ode]) we obtain a relation in which the scalar exponent and the associated vector are still to be determined. a standard spectral result for markov-modulated fluid queues then gives the solution as a linear combination, where the coefficients are obtained by solving a linear system, the exponents are the roots with negative real parts of the associated characteristic equation, and the vectors are the corresponding eigenvectors.

we now consider the same system during the prefetching process, and we denote by the corresponding quantity the length of the playout buffer at time t. let the start-up delay be the first time that the playout buffer length reaches the prefetching threshold; it is the time that the system takes to accumulate the required content in the buffer. this distribution is difficult to obtain directly, so we resort to the following duality problem:

*duality problem:* _what is the starvation probability by time t if the queue is depleted at the playback rate and the duration of prefetched content equals the start-up threshold?_

this duality problem allows us to compute the prefetching delay as a probability of starvation. we define the probability of starvation before time t in a given state, conditioning on the initial state and on the initial prefetching content, i.e., the start-up threshold. conditioning on the first transition out of the initial state, taking the limit and after some algebraic simplification, we obtain a partial differential equation with the same initial conditions as in section [first_passage]. taking the laplace-stieltjes transform and conditioning again on the first transition out of the current state, after some algebraic simplification we obtain an ordinary differential equation together with its boundary condition. with a suitable substitution, eq. ([oded2]) becomes solvable in closed form: using eq. ([sol]) and the initial conditions, we obtain the solution in terms of the diagonal matrix containing all the eigenvalues and an invertible matrix of eigenvectors.

in the previous sections we derived explicit expressions for the laplace-stieltjes transforms of the probability of starvation and of the start-up delay. in this section, we present the numerical procedure used to recover the corresponding probability of starvation and start-up delay. the laplace-stieltjes transform of the first-passage-time distribution h_{ij}(x,t) is

\[ h^{*}_{ij}(x,\omega) \;=\; \int_{0}^{\infty} e^{-\omega t}\, dh_{ij}(x,t) \;=\; \int_{0}^{\infty} e^{-\omega t}\, \frac{\partial h_{ij}(x,t)}{\partial t}\, dt \,, \]

where \partial h_{ij}(x,t)/\partial t is the probability density function of the first passage time. given the laplace transform, the function value can be recovered from the bromwich contour integral, where the contour is a vertical line whose real abscissa lies to the right of all singularities of the transform, and the contour integral yields the function value for positive times. it is shown in the literature that, for real-valued functions, the inversion takes a fourier-integral form; according to the _bromwich inversion integral_, the function can therefore be calculated from the transform by performing a numerical integration (quadrature). we use a specific algorithm based on the bromwich inversion integral. it is based on a variant of the fourier-series method, the trapezoidal rule, which proves to be remarkably effective. if we use a fixed step size, the trapezoidal rule gives an infinite series whose terms simplify because the target function is real. replacing the generic transform by the laplace transform of the starvation-time distribution, we obtain the probability of starvation before time t. the infinite series can be computed by simple truncation because it converges, but a more efficient algorithm is obtained by applying a summation acceleration method. an acceleration technique that has proven to be effective in our context is euler summation, applied after transforming the infinite sum into a nearly alternating series in which successive summands alternate in sign. we convert the sum into a nearly alternating series by an appropriate regrouping of its terms. let the partial sum denote the approximation in which the infinite series is truncated after a finite number of terms,
where t is suppressed in the notation. we apply euler summation to a fixed number of additional terms after an initial run of terms, so that the euler sum approximation is obtained; euler summation can be very simply described as the weighted average of the last partial sums by a binomial probability distribution, and hence the estimate is the binomial average of those partial sums. the implementation of the algorithm takes into account the values of the truncation parameters; as in the literature, we use the standard choices. after simplification we get the euler approximation of the inverse transform, which proves to be a good approximation. eq. ([eulerfct]) looks complicated, but it consists only of additions, so its computational cost is low. to obtain the cdf, we simply replace the transform being inverted by the transform of the distribution function. the same formula holds for the start-up delay distribution after replacing the corresponding transform.

in this section we compute the qoe metrics based on the analysis derived in the previous sections. we consider a single user receiving a media file of a given size; if there is no starvation, the time necessary to play the whole video is the file size divided by the playback rate. hence, using the first-passage-time distribution, the probability that a starvation happens in a given state before reaching the end of the file, given the initial state, can be written explicitly. the probability of starvation before time t gives an idea of the severity of the starvations during the video session. the epoch of the first starvation is constrained by the prefetching process, and since there is a fixed total number of starvations, the successive starvation epochs must satisfy the corresponding ordering constraints. we next compute the remaining case, in which two consecutive starvations happen at two given time instants. we compute this probability using the first-passage-time density for a starvation occurring at a given time when the preceding playback period started after a prefetching process. in this method we use a trick that concerns the time scale: every time the player resumes after a prefetching process, we also reset the time scale. this means that if a starvation happens at a given time, the player starts playing again at that same (reset) time with the start-up threshold worth of content in the buffer. the probability of having a given number of starvations then follows. in the next section, we provide explicit expressions for the qoe metrics when the ctmc has two states.

in this section, we consider the special case in which the ctmc has two states (see fig. [2mpfig], the two-state mmpp source), with a given infinitesimal generator and rate matrix. our objective is to understand the interaction between the parameters of the arrival process and the probability of starvation. using the results of section [first_passage], irrespective of the stability condition of the queue, we obtain a characteristic polynomial of degree 2 whose constant term is \omega(\omega+\alpha+\beta); its two zeros are given by the quadratic formula. since the polynomial contains terms involving only the model parameters, we have to determine the signs of the two roots. the next propositions give the placement of these two zeros in the complex plane. let , * , so and then no starvation . * and , so and then and . * and , so and then and . * then no starvation . * , and . let , * , so and then no starvation . * and , so and then and . * and , so and then and . * then no starvation . * , and . let , * , no starvation . * , and .
* , no starvation because of the prefetching. the lst of the distribution is given in the next theorem. * when , , and * when , , and * when , , where . the proof of this theorem can be found in the appendix of . taking the appropriate limit gives the first-passage-time distribution for the on-off source.

we use the ns-3 simulator to compare the dynamics of the process with our model. the simulation topology consists of a server and a client, in order to reproduce the queue model: the server sends traffic to the client following the continuous-time markov chain, and the client holds a buffer where the traffic is stored. the parameters of the traffic depend on the ctmc parameters. we then analyze the behavior of the client buffer content, which emulates the player. we run 10 simulations and compute the 95% confidence interval for all observed metrics; for readability it is not shown in all the figures, because it is very narrow. we first show the accuracy of the method that we use to invert the laplace transform: in fig. [testfig], we plot the known inverse laplace transform of a test function together with the inverse obtained using formula ([eulerfct]) of section [invert]. fig. [offinf] shows the starvation probability for a two-state mmfm source in the case where the buffer content increases in one state and decreases in the other; this is done for two different pairs of state-transition rates.

[figure: the accuracy of the euler summation algorithm.]

fig. [nostarvx] illustrates the impact of the start-up threshold on the probability of no starvation, and fig. [onestarv] shows the probability of having exactly one starvation. these simulation results validate the correctness of our analysis; hence, in the following experiments, we only illustrate the analytical results.

fig. [startupdelay] gives the cdf of the start-up delay for different values of the start-up threshold. we can see that the start-up delay increases with the threshold. on the other hand, figure [nostarvx] illustrates the impact of the start-up threshold on the probability of no starvation: when the threshold is large enough (near 300 pkts in the figure), no starvation happens until the end of the video. since the curve grows sharply, it is clear that a slight increase in the threshold can greatly improve the starvation probability. in figure [nstarvx], we plot the probability of having no more than two starvations as a function of the start-up threshold. when the threshold is large enough, no starvation happens until the end of the video session. in contrast, figure [nstarvn] shows that starvation is certain when the file size approaches infinity. the curves of the probability of having one starvation or two starvations increase first, and then decrease to zero; this means that starvation can be avoided when the threshold is large enough. the two curves have a maximum value at a given start-up threshold or a given file size, so one can choose the threshold to have exactly one or two starvations. this is an important measure because it allows one to set the buffer thresholds so as to meet the desired requirements. indeed, a very small threshold does not help to reduce the starvation probability, and very large thresholds do not further reduce it. hence the analytical model aims to predict the player buffer behavior in video streaming sessions; the network parameters and the video size are the framework inputs that can be used to improve the qoe according to user preferences.
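for completeness, the laplace-transform inversion used throughout the numerical results can be sketched in a few lines. the code below follows the standard euler (fourier-series plus binomial averaging) algorithm; the parameter choices m = 11, n = 15, a = 18.4 are commonly used defaults from the numerical-inversion literature, and the test transform is only a sanity check.

from math import comb, exp, pi

def euler_inversion(F, t, m=11, n=15, a=18.4):
    """invert a laplace transform F(s) at t > 0 with the trapezoidal (fourier-series)
    approximation plus euler (binomial) averaging of the last m+1 partial sums."""
    def partial_sum(N):
        s = 0.5 * F(a / (2.0 * t)).real
        for k in range(1, N + 1):
            s += (-1) ** k * F((a + 2j * k * pi) / (2.0 * t)).real
        return s * exp(a / 2.0) / t
    return sum(comb(m, j) * 2.0 ** (-m) * partial_sum(n + j) for j in range(m + 1))

if __name__ == "__main__":
    # sanity check on a transform with a known inverse: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
    for t in (0.5, 1.0, 2.0):
        print(t, round(euler_inversion(lambda s: 1.0 / (s + 1.0), t), 8), round(exp(-t), 8))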
in this section, we introduce an optimization problem for the qoe that includes the different metrics and incorporates user preferences by associating a weight with each metric. we denote the cost of a user watching the media stream as a weighted combination of the number of starvations, the start-up delay, the loss in video quality and the fraction of the total session time spent at the low bit-rate; the weights reflect the user's preferences over the three metrics (starvation, start-up delay and video quality). based on the user preferences, we compare the qoe cost for two scenarios. in scenario 1, adaptive bitrate streaming is not used; in the second scenario, adaptive bitrate streaming is used in order to adjust the quality of the video stream according to the available bandwidth. we consider a network whose throughput varies between 200 kbps and 400 kbps. for adaptive bitrate streaming, we have two coding rates depending on the throughput, which leads to two different frame sizes (10 kbits and 20 kbits). we then compute the cost for the two weight settings. in figs. [progadap] and [progadap2], we compare the cost for the two scenarios. for short video durations, adaptive bitrate streaming is not beneficial, because there are few starvations anyway while the quality of the video is degraded. for long video durations, however, adaptive bitrate streaming becomes worthwhile, because the low coding rate decreases the number of starvations. in figs. [progadap] and [progadap2], we can also see that the value of the quality weight changes the user's preference for the quality of the video: for the two weight settings considered, adaptive bitrate streaming should be used when the size of the file is more than 400 and 600 frames, respectively; otherwise, adaptive bitrate streaming is not necessary.

in this paper, we have proposed a new analytical framework to compute the qoe of video streaming in a network modeled by the markov modulated fluid model. we obtained the probability of starvation and the start-up delay by solving partial differential equations through the laplace transform method. this allowed us to compute the number of starvations during the video session, which is an important metric of the user's quality of experience. in addition, we have presented simulation results using ns-3 to show the correctness of our model. we have also proposed a method to optimize the quality of experience given a trade-off between player starvation and the quality of the video. these results show that adaptive bitrate streaming can impact negatively on the quality of short video sessions.

1 y. xu, e. altman, r. el-azouzi, s. elayoubi and m. haddad, _qoe analysis of media streaming in wireless data networks_, networking, springer berlin heidelberg, 2012. xu, y. zhou and d.-m. chiu, _analytical qoe models for bit-rate switching in dynamic adaptive streaming systems_, ieee transactions on mobile computing, 2013. xu, s.e. elayoubi, e. altman, r. elazouzi and y. yu, _flow level qoe of media streaming in wireless networks_, eprint arxiv:1406.1255, 06/2014. a. balachandran, v. sekar, a. akella, s. seshan, i. stoica and h. zhang, _developing a predictive model of quality of experience for internet video_, sigcomm 2013, hong kong. xu, s.e. elayoubi, e. altman and r. elazouzi, _impact of flow-level dynamics on qoe of video streaming in wireless networks_, ieee infocom, 2013. a.
narayanan , and v.g .kulkarni , _ first passage times in fluid models with an application to two priority fluid systems _ , computer performance and dependability symposium , proceedings ieee international , 1996 .f. dobrian , a. awan , i. stoica , v. sekar , a. ganjam , d. joseph , j. zhan , and h. zhang , _ understanding the impact of video quality on user engagement _ , proc . of acm sigcomm 2011 , vol.41 , no.4 , pp .362 - 373 .j. abate , g.l .choudhury and w. whitt , _ computational probability _ , w. grassman ( ed . ) kluwer , boston 1999. y.d .xu , e. altman , r. el - azouzi , m. haddad , s. elayoubi and t. jimenez , _ analysis of buffer starvation with application to objective qoe optimization of streaming services _ , ieee infocom , 2012 . s. shah - heydari , t. le - ngoc , _ mmpp models for multimedia traffic _ , telecommunication systems 15 , march 1999 .a. balachandran , v. sekar , a. akella , s. seshan , i. stoica , and h. zhang . a quest for an internet video quality - of - experience metric . in 11th acm workshop on hot topics in networks ( hotnets - ix ) , 2012 .s. s. krishnan and r. k. sitaraman .video stream quality impacts viewer behavior : inferring causality using quasi - experimental designs . in acm imc , 2012 . h. hu , x. zhu , y. wang , r. pan , j. zhu , and f. bonomi , `` qoebased multi - stream scalable video adaptation over wireless networks with proxy , '' in icc , 2012 d. bethanabhotla , g. caire , and m. j. neely , `` joint transmission scheduling and congestion control for adaptive video streaming in smallcell networks , '' arxiv:1304.8083 [ cs.ni ] , 2013 .v. joseph , s. borst , and m. reiman , `` optimal rate allocation for adaptive wireless video streaming in networks with user dynamics , '' submitted 2013 .vinay joseph and gustavo de vecian , nova : qoe - driven optimization of dash - based video delivery in networks , ieee infocom , april 2014 .r. mok , e. chan , and r. chang , `` measuring the quality of experience of http video streaming , '' ifip / ieee international symposium on integrated network management , 2011 .luan , l.x .cai and x. shen , _ impact of network dynamics on user s video quality : analytical framework and qos provision _ , ieee transactions on multimedia , vol . 12 , january 2010 . _when , , and is the eigenvector correspondind to according to section [ first_passage ] , then when , , we use only in computing the distribution , since .thus the distribution becomes and because if we start without packets in the buffer in state 1 , we ll have starvation with probability 1 within the same state .so , that yields the result of the theorem .the same proof holds in the case , by interchanging and .+ when , , we use both and . thus we have using the initial condition and , we solve the following system and get the same proof yiels for .
we take an analytical approach to study the quality of user experience ( qoe ) for video streaming applications . our propose is to characterize buffer starvations for streaming video with long - range - dependent ( lrd ) input traffic . specifically we develop a new analytical framework to investigate quality of user experience ( qoe ) for streaming by considering a markov modulated fluid model ( mmfm ) that accurately approximates the long range dependence ( lrd ) nature of network traffic . we drive the close - form expressions for calculating the distribution of starvation as well as start - up delay using partial differential equations ( pdes ) and solve them using the laplace transform . we illustrate the results with the cases of the two - state markov modulated fluid model that is commonly used in multimedia applications . we compare our analytical model with simulation results using ns-3 under various operating parameters . we further adopt the model to analyze the effect of bitrate switching on the starvation probability and start - up delay . finally , we apply our analysis results to optimize the objective quality of experience ( qoe ) of media streaming realizing the tradeoff among different metrics incorporating user preferences on buffering ratio , startup delay and perceived quality .
inhomogeneities in the large-scale matter distribution can affect in many ways the light signals coming from very distant objects. these effects need to be understood well if we want to map the expansion history and determine the composition of the universe to high precision from cosmological observations. in particular, the evidence for dark energy in the current cosmological concordance model is heavily based on the analysis of the apparent magnitudes of distant type ia supernovae (sne). inhomogeneities can affect the observed sne magnitude-redshift relation, for example through gravitational lensing, in a way which essentially depends on the size and composition of the structures through which light passes on its way from source to observer. the fundamental quantity describing this statistical magnification is the lensing probability distribution function (pdf). it is not currently possible to extract the lensing pdf from the observational data, and we have to resort to theoretical models. two possible alternatives have been followed in the literature. a first approach (e.g. ref. ) relates a ``universal'' form of the lensing pdf to the variance of the convergence, which in turn is fixed by the amplitude of the power spectrum; moreover, the coefficients of the proposed pdf are trained on specific n-body simulations. a second approach (e.g. ref. ) is to build _ab initio_ a model for the inhomogeneous universe and directly compute the corresponding lensing pdf, usually through time-consuming ray-tracing techniques. the flexibility of this method is therefore penalized by the increased computational time. in ref. we introduced a stochastic approach to cumulative weak lensing (hereafter the sgl method) which combines flexibility in modeling with fast performance in obtaining the lensing pdf. the speed gain is actually a sine qua non for likelihood approaches, in which one needs to scan many thousands of different models (see ref. ). the sgl method is based on the weak lensing approximation and on generating stochastic configurations of inhomogeneities along the line of sight. the major improvements introduced here are the use of a realistic halo mass function to determine the halo mass spectrum and the incorporation of large-scale structures in the form of filaments. the improved modeling, together with the flexibility to include a wide array of systematic biases and selection effects, makes the sgl method a powerful and comprehensive tool to study the impact of lensing on observations. we show in particular that the sgl method, endowed with the new array of inhomogeneities, naturally and accurately reproduces the lensing pdf of the millennium simulation. we also study a simple selection-effect model and show that selection biases can reduce the variance of the observable pdf; such a reduction could at least partly cancel the opposite effect coming from large-scale inhomogeneities, masking their effect on the observable pdf. we also show how a jdem-like survey could constrain the lensing pdf relative to a given cosmological model. along with this paper, we release an updated version of the turbogl package, which is a simple and very fast mathematica implementation of the sgl method. this paper is organized as follows. in section [sup] we introduce the cosmological background and the generic layout of inhomogeneities, and review the basic formalism needed to compute the weak lensing convergence.
in section [ sec : inhoprop ]we derive the halo mass function and the halo density profiles and define the precise modeling of filaments .in section [ hmf ] we present the revised and extended sgl method .the exact discretization of the model parameters , which is a crucial step in the sgl model building , is explained in section [ mbin ] and in section [ confi ] we explain how the realistic structures where halos are confined in filaments are modelled in the sgl method .section [ results ] contains our numerical results including the comparison with the cosmology of the millennium simulation and , finally , in section [ conco ] we will give our conclusions .we consider homogeneous and isotropic friedmann - lematre - robertson - walker ( flrw ) background solutions to einstein s equations , whose metric can be written as : \ , , \ ] ] where and & \ ; k<0 \\ \end{array}\right.\ ; , \ ] ] where is the spatial curvature of any . in particular we will focus on models whose hubble expansion rate depends on redshift according to : where is given by : here could be taken to follow _e.g. _ the parameterization : for which : for a constant equation of state , the latter reduces to . , and are the present - day density parameters of dark energy , matter and radiation and represents the spatial - curvature contribution to the friedmann equation . introducing the hubble radius and the spatial - curvature radius , we have at any time the relation .we will also need the matter and dark energy density parameters at a given time or redshift : throughout this paper , the subscript will denote the present - day values of the quantities .for example the critical density today is where km s mpc , while is the critical density at any time . substituting in eq .( [ hhh ] ) and we obtain the equation we have to solve in order to find the time evolution of the scale factor , the only dynamical variable in an flrw model .we fix the radiation density parameter to .moreover , that is , dark and baryonic matter contribute together to the total matter density .the line - of - sight and transverse comoving distances are : from which we find the angular and luminosity distances together with the distance modulus : for scales smaller than the spatial - curvature radius , and we can use the euclidean geometry . the aim of this paper is to compute statistical weak lensing corrections to the measured light intensities , generated by the inhomogeneous matter distribution along the line of sight to a distant object .the basic quantity we need to model is the matter density contrast : where the lowercase indicates the local and inhomogeneous matter field while is the time dependent average mass density .the density contrast directly enters the expression for the weak lensing convergence : where is the co - moving position of the source and the integral is along an unperturbed light geodesic . the density is the constant matter density in a co - moving volume and we defined the auxiliary function which gives the optical weight of an inhomogeneity at the comoving radius .the convergence is related to the shift in the distance modulus by : where is the net magnification and the second - order contribution of the shear has been neglected .it is obvious that an accurate statistical modeling of the magnification pdf calls for a detailed description of the inhomogeneous mass distribution . 
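as a side note on the background quantities defined above (hubble rate, comoving distance, luminosity distance and distance modulus), a minimal python sketch is given below; it assumes a flat geometry to keep the example short, and the parameter values are illustrative rather than the ones used in the paper.

import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458          # speed of light in km/s
H0 = 70.0                    # hubble constant in km/s/mpc (illustrative)
OM, OR = 0.25, 4.2e-5        # matter and radiation density parameters (illustrative)
W0, WA = -1.0, 0.0           # cpl-like dark-energy parameters, w(z) = w0 + wa z/(1+z)

def E(z):
    """dimensionless hubble rate h(z)/h0 for a flat universe."""
    ode = 1.0 - OM - OR
    de = ode * (1.0 + z)**(3.0 * (1.0 + W0 + WA)) * np.exp(-3.0 * WA * z / (1.0 + z))
    return np.sqrt(OM * (1.0 + z)**3 + OR * (1.0 + z)**4 + de)

def comoving_distance(z):
    """line-of-sight comoving distance in mpc (flat geometry)."""
    return (C_KM_S / H0) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

def distance_modulus(z):
    d_l = (1.0 + z) * comoving_distance(z)   # luminosity distance in mpc
    return 5.0 * np.log10(d_l) + 25.0        # standard distance modulus

if __name__ == "__main__":
    for z in (0.1, 0.5, 1.0, 1.5):
        print(f"z = {z:3.1f}   d_c = {comoving_distance(z):8.1f} mpc   mu = {distance_modulus(z):5.2f}")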
in this paperwe will significantly improve the modeling of the inhomogeneities from our previous work , where only single - mass spherical overdensities were considered .first of all , we improve the modeling of these halos " by using a realistic halo mass function , which gives the fraction of the total mass in halos of mass at the redshift .the function is related to the ( comoving ) number density by : where we defined as the number density of halos in the mass range .the halo function is by definition normalized to unity the idea is of course that halos describe large virialized mass concentrations such as large galaxies , galaxy clusters and superclusters .not all matter is confined into virialized halos however . moreover , only very large mass concentrations play a significant role in our weak lensing analysis : for example the stellar mass in galaxies affects the lensing pdf only at very large magnifications where the pdf is close to zero .therefore the fraction of mass concentrated in large virialized halos can be defined by introducing a lower limit to the integral ( [ eq : fmnorm ] ) : only mass concentrations with are treated as halos . the remaining mass is divided into a family of large mass , but low density contrast objects with a fraction and a uniform component with a fraction , such that the low contrast objects are introduced to account for the filamentary structures observed in the large scale structures of the universe . in our analysis they will be modeled by elongated objects with random positions and orientations .the mass in these objects can consist of a smooth unvirialized ( dark ) matter field and/or of a fine `` dust '' of small virialized objects with . in weak lensing this distinction does not matter because small halos act effectively as a mean field , with a sizeable contribution only at very large magnifications ( see comment above about stellar mass ) . for later usewe define which gives the total mass fraction not in large virialized halos .if we only consider virialized masses larger than companion galaxies ( ) , then typical values for the concordance model are at and at , with a weak dependence on the particular used . although is sensitive on , the lensing pdf depends only weakly on the cut mass .moreover , halo functions obtained from n - body simulations are valid above a mass value imposed by the numerical resolution of the simulation itself and so the use of is also necessary in this case . to obtain an as accurate modeling as possible , we will use realistic mass functions and spatial density profiles for the halo distribution .there is less theoretical and observational input to constrain the mass distribution of the filamentary structures or their internal density profiles .we will therefore parametrize the filaments with reasonable assumptions for their lengths and widths and by employing cylindrical nonuniform density profiles .mpc thick slice generated by our stochastic model .the shaded disks represent clusters and the shaded cylinders the filamentary dark matter structures . 
only large clusters are displayed.,width=8 ] our modeling allows treating the two families of inhomogeneities independently .both can be given random spatial distributions , or alternatively all halos can be confined to have random positions in the interiors of randomly distributed cylinders .the latter configuration more closely resembles the observed large scale structures , and an illustration created by a numerical simulation using the turbogl package is shown in fig .[ slice ] . in section [ discu ]we will discuss the power spectrum and the large - scale correlations of our model universe .we can now formally rewrite eq .( [ contra1 ] ) for our model universe in which the density distribution is given by : where the index labels all the inhomogeneities , we used eq .( [ eq : sumrule ] ) and we defined , so that both virialized halos and the unvirialized objects are described by a generic reduced density profile which satisfies the normalization .the lensing convergence we are interested in can now be written as where the positive contribution due to the inhomogeneities ( halos and low contrast objects ) is the also positive contribution due to the uniformly distributed matter is and the negative empty beam convergence is : a light ray that misses all the inhomogeneities will experience a negative total convergence , which gives the maximum possible demagnification in a given model universe . in an exactly homogeneous flrw model the two contributions and and there is no net lensing . in an inhomogeneous universe ,on the other hand , a light ray encounters positive or negative density contrasts , and its intensity will be magnified or demagnified , respectively .the essence of the sgl method is finding a simple statistical expression for the probability distribution of the quantity in the inhomogeneous universe described above .for a discussion about the validity of the weak lensing approximation within our setup see ref . , where it was shown that the error introduced is % .we stress again that the lensing caused by stellar mass in galaxies is negligible in the weak lensing regime and so we can focus on just modeling the dark matter distribution in the universe .also , we will treat the inhomogeneities as perturbations over the background metric of eq .( [ metric ] ) . in particular we will assume that redshifts can be related to comoving distances through the latter metric .see ref . for a discussion of redshift effects . in the next sectionwe will give the accurate modeling of the inhomogeneities .we begin by introducing the halo mass function and the detailed halo profiles , and then move on to describe the precise modeling of the cylindrical filaments .we begin by explaining our dark matter halo modeling .the two main concepts here are the halo mass function giving the normalized distribution of the halos as a function of their mass and redshift , and the dark matter density profile within each individual halo .both quantities are essential for an accurate modeling of the weak lensing effects by inhomogeneities. we shall begin with the halo mass function introduced above in eq .( [ eq : halomassf ] ) .the halo mass function acquires an approximate universality when expressed with respect to the variance of the mass fluctuations on a comoving scale at a given time or redshift , . 
relating the comoving scale to the mass scale through the mean matter density, we can define the variance on a given mass scale accordingly. the variance can be computed from the power spectrum: here the window function is the fourier transform of the chosen (top-hat in this work) filter, and the dimensionless power spectrum extrapolated using linear theory to the redshift considered involves the spectral index and the amplitude of perturbations on the horizon scale today, which we fix through the cluster-abundance normalization; the corresponding normalization value is estimated from cluster abundance constraints. the linear growth function describes the growth speed of the linear perturbations in the universe. fit functions for it can be found, _e.g._, in ref. , but it can also be easily solved numerically from the equation

\[ \frac{d^{2}d(z)}{dz^{2}} + \left( \frac{1}{h(z)}\frac{dh(z)}{dz} - \frac{1}{1+z} \right) \frac{dd(z)}{dz} - \frac{3}{2}\,\frac{d(z)}{(1+z)^{2}}\,\omega_{m}(z) = 0 \,, \]

where we have neglected the radiation. as usual, we normalize the growth function to unity at the present time. finally, for the transfer function we use the fit provided by equations (28-31) of ref. , which accurately reproduces the baryon-induced suppression on intermediate scales but ignores the acoustic oscillations, which are not relevant for us here. with the variance given, we can now define our halo mass function. several different mass functions have been introduced in the literature, but here we will consider the mass function given in eq. (b3) of ref. , which is valid in the quoted range. because of the change of variable, it is related to our original definition of the mass function by a change-of-variable factor. the mass function of eq. ([jenk]) is defined relative to a spherical-overdensity (so) halo finder, and the overdensity used to identify a halo of a given mass at a given redshift is defined with respect to the mean matter density. the so finder therefore allows a direct relation between the halo mass and the halo radius. the subscript will denote the proper values of otherwise comoving quantities throughout this paper; for example, the comoving halo radius is related to the proper value by the scale factor. a direct relation between the mass and the radius of a cluster is necessary for our sgl method. this is why we prefer mass functions based on so finders over mass functions based on friends-of-friends halo finders; for the latter, an appropriate overdensity to be used in eq. ([mdel]) is not directly available. moreover, as shown in ref. , the so(180) halo finder gives a good degree of universality to the mass function. another mass function which manifests approximate universality with the so(180) halo finder is provided by sheth & tormen in ref. . with the halo mass function given, the only missing ingredient is the halo profile which, as said after eq. ([contra2]), we describe by means of the reduced halo profile. we stress that our halo of a given mass and redshift is an average representative of the total ensemble of halos, which in reality have some scatter in their density profiles. we will focus on the navarro-frenk-white (nfw) profile; here the scale radius is related to the halo radius through the concentration parameter, to be defined below.
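before moving on to the halo profiles, here is a short numerical sketch that integrates the growth-factor ode quoted above for a flat lambda-cdm background; the matter density value and the initial redshift are illustrative assumptions, and radiation is neglected as in the text.

import numpy as np
from scipy.integrate import solve_ivp

OM = 0.25                      # illustrative lambda-cdm matter density parameter
OL = 1.0 - OM

def E2(z):                     # (h(z)/h0)^2, radiation neglected
    return OM * (1.0 + z)**3 + OL

def omega_m(z):
    return OM * (1.0 + z)**3 / E2(z)

def dlnH_dz(z):                # d ln h / dz for lambda-cdm
    return 1.5 * OM * (1.0 + z)**2 / E2(z)

def growth_rhs(z, y):
    d, dp = y                  # y = (d, dd/dz)
    ddp = -(dlnH_dz(z) - 1.0 / (1.0 + z)) * dp + 1.5 * omega_m(z) * d / (1.0 + z)**2
    return [dp, ddp]

def growth_factor(z_eval, z_init=1000.0):
    """linear growth d(z), normalized to d(0) = 1, from the ode above."""
    y0 = [1.0 / (1.0 + z_init), -1.0 / (1.0 + z_init)**2]   # matter-era behaviour d ~ a
    sol = solve_ivp(growth_rhs, (z_init, 0.0), y0, rtol=1e-8, atol=1e-10, dense_output=True)
    d0 = sol.sol(0.0)[0]
    return {z: float(sol.sol(z)[0] / d0) for z in z_eval}

if __name__ == "__main__":
    print(growth_factor([0.0, 0.5, 1.0, 2.0]))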
by integrating equation ( [ eq : nfwprofile ] )one finds the total mass ] , where is the comoving halo radius .it should be noted that through eq .( [ mdel ] ) the radius depends both on the halo mass and the redshift .see fig .[ sketch ] for an illustrative sketch .it is efficient to discretize into bins of constant integrated surface density ( constant equal mass ) : where is the number of -bins used and the surface density is given by ( see fig .[ sketch ] ) : after the redshift and the halo mass is specified , the bin boundaries and the weighted centers of gravity can be computed using eq .( [ binbounds ] ) .after these are defined , the corresponding area functions , needed for the poisson parameters and the binned surface densities , can be computed .overall , our halo model is described by the redshift index and two internal indices for the discretized mass and impact parameter . in terms of the generic parameter can formally express this as : , where the label refers to the halo " family of the inhomogeneities ..,width=8 ] of eq .( [ ltt ] ) as shown in fig .[ cylinder2].,width=8 ] the position and the orientation of a generic object in the three dimensional space is described by three position and three orientation degrees of freedom . sinceour filaments objects are confined to within a given slice , they generically have only five degrees of freedom . special symmetries of a sphere reduced the number of relevant parameters for halos to a sigle one , the impact parameter . for filamentsthe situation is slightly more complicated because a cylinder is invariant only for rotations around one symmetry axis .this leaves us with four degrees of freedom .the relevant parameters , however , are reduced by the symmetries to only two : the angle of the filament main axes with respect to the geodesic and the impact parameter . for illustrationsee figs .[ cylinder2]-[cylinder1 ] .this can be understood by the fact that all the possible filament configurations with the same and have the same surface density and so can be resummed using the theorem given above . in particular , the projected surface density of our filaments does not depend on the coordinate along the projected main axes we move " the triangular section as shown in fig .[ cylinder2 ] . strictly speaking our filamentsdo not have a cylindrical symmetry , but a -dependent tilted " rotation symmetry , such that the projected surface densities follow equation ( [ gammat0 ] ) for all .this approximation should be good for long filaments with . 
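the equal-(projected-)mass discretization of the halo impact parameter described above can be sketched numerically as follows: the surface density is obtained by direct line-of-sight integration of the truncated nfw profile, and the bin edges are found by inverting the cumulative projected mass. the halo radius, concentration and density normalization used here are placeholders, not fitted values.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def nfw_rho(r, rs, rho_s):
    """untruncated nfw density profile."""
    x = r / rs
    return rho_s / (x * (1.0 + x)**2)

def nfw_sigma(b, R, c, rho_s=1.0):
    """projected surface density of an nfw halo truncated at radius R,
    with concentration c = R / r_s, at impact parameter b (same units as R)."""
    rs = R / c
    if b >= R:
        return 0.0
    lmax = np.sqrt(R**2 - b**2)
    return 2.0 * quad(lambda l: nfw_rho(np.hypot(b, l), rs, rho_s), 0.0, lmax)[0]

def equal_mass_bins(R, c, n_bins=8):
    """impact-parameter bin edges such that each annulus carries the same projected mass."""
    m_cum = lambda b: quad(lambda bp: 2.0 * np.pi * bp * nfw_sigma(bp, R, c), 0.0, b)[0]
    m_tot = m_cum(R)
    edges = [0.0]
    for j in range(1, n_bins):
        edges.append(brentq(lambda b: m_cum(b) - j * m_tot / n_bins, edges[-1], R))
    edges.append(R)
    return np.array(edges)

if __name__ == "__main__":
    R, c = 1.0, 6.0          # illustrative halo radius (arbitrary units) and concentration
    print(np.round(equal_mass_bins(R, c, n_bins=5), 3))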
]allowing to resum the -bins into a single effective bin of the total -dependent length of the projection : the angle is physically important because objects seen with a smaller have a smaller cross section and a higher surface density .accounting for this geometrical effect leads to a more strongly skewed lensing pdf .the expression for the surface density is given by for the uniform filament profile this gives simply , where .for the gaussian profile ( [ filagau ] ) one instead finds \ , .\label{gammat}\ ] ] the angle can be restricted by symmetry to the interval ] , and it will be discretized into bins using the equal mass criteria , as in the halo case defined in eq .( [ binbounds ] ) .that is , our filaments will be represented by a family of bars " of different surface densities and length .the surface area of these bars to be used in the poisson parameters is .overall , our filament model is described by the redshift index , and two internal indices for the discretized angle and impact parameter . in terms of the generic parameter can again formally express this as : , where the family label refers to the filament " family of the inhomogeneity .of course adding , _e.g. _ , a mass distribution for filaments and/or an -dependent density profile would increase the necessary number of indices in the discrete model .until now we have implicitly assumed that the halo and the filament families are independent . we will now show how a more realistic model with halos confined to the filaments can be set up in the sgl approach .the central observation is that the exact positions of halos and filaments within the co - moving distance bins are irrelevant in eq .( [ eq : fullkappa ] ) for the convergence .it then does not matter if we really confine the halos to filaments ; for the same effect it is sufficient to merely confine them into the equivalent volume occupied by the filaments in a given bin .we can actually do even better than imposing a simple volume confinement .indeed , it is natural to assume that the halo distribution follows the smooth density profile that defines the filament , which can be accounted for by weighting the volume elements by their reduced density profiles . because only the projected matter density is relevant for lensing ,this weighting is accounted for by the effective surface densities of the objects .we then introduce the effective co - moving thickness for the low density objects as : to refer to generic internal filamentary degrees of freedom . 
] where is the average mass density of the filament with volume .the total effective length of a configuration of filaments along the line of sight is then : it is easy to show that the statistical average of over the configuration space is just the expected distance covered by filaments : where we defined the comoving average filament volume fraction : the confining simulation can now be performed in two steps : in the first step a random configuration of filaments is generated and eq .( [ eq : master1 ] ) is used to compute the effective lengths at different co - moving slices .these lengths define then the poisson parameters for the halos through eq .( [ deltan ] ) : note that the halo densities are corrected to account for the reduced volume available due to confinement : where are the usual halo densities of eq .( [ nbins ] ) when the halos are uniformly distributed through all space .inserting eq .( [ eq : haloweights ] ) back to ( [ deltanhalos ] ) and using eqs .( [ eq : master1]-[eq : master2 ] ) we can rewrite eq .( [ deltanhalos ] ) simply as where are the usual configuration independent poisson parameters computed without confinement .note that the central quantities needed in the evaluation of eq .( [ deltanf ] ) are the filamentary weights and surface densities . withthese given , all the information about the specific low contrast objects used is condensed into the single stochastic variable : which has an expectation value of unity and a mode smaller than unity . the lensing effects due to large - scale structuresare thus tied to the probability distribution of : its added skewness is the effect coming from confining halos within filaments .this opens the possibility to investigate and compare different filamentary geometries by means of the -pdf .we will develop these thoughts further in a forthcoming paper .we can further generalize the picture given in this section by considering different levels of confinement for halos of different masses . indeed , because halos of different mass are treated independently in eq .( [ deltanf ] ) , one could have small halos populate also the voids and massive halos only the filamentary structure .let us finally point out that in practical computations the convergence pdf is obtained by creating a large number of independent halo configurations for each master " filament configuration .typically we use simulations with few hundred master configurations with a few hundred halo configurations each . 
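the two-step sampling just described can be illustrated with a schematic monte-carlo routine: an outer loop draws the stochastic confinement weight for a filament configuration, an inner loop draws poisson occupation numbers for the halo bins with their parameters rescaled by that weight, and the mean contribution is subtracted so that a beam missing all inhomogeneities is demagnified. this is only a caricature of the actual turbogl implementation; the bin parameters, the per-object convergence contributions and the lognormal weight distribution are toy assumptions.

import numpy as np

rng = np.random.default_rng(0)

def convergence_samples(lam, dkappa, n_master=200, n_halo=200, xi_sampler=None):
    """schematic two-step sgl-style sampling of the lensing convergence.

    lam[k]     : poisson parameter of inhomogeneity bin k for one line of sight
                 (computed elsewhere from densities, areas and redshift slices)
    dkappa[k]  : convergence contribution of a single object in bin k
    xi_sampler : callable returning the stochastic confinement weight xi (mean 1);
                 if None, halos are distributed uniformly (xi = 1)."""
    lam = np.asarray(lam, float)
    dkappa = np.asarray(dkappa, float)
    kappa_mean = np.sum(lam * dkappa)      # subtracting this plays the role of the
    samples = []                           # empty-beam plus uniform-matter terms
    for _ in range(n_master):              # master loop: filament configurations
        xi = 1.0 if xi_sampler is None else xi_sampler(rng)
        for _ in range(n_halo):            # inner loop: halo configurations
            counts = rng.poisson(xi * lam)
            samples.append(np.sum(counts * dkappa) - kappa_mean)
    return np.array(samples)

if __name__ == "__main__":
    # toy numbers: three halo bins with different rarity and per-object convergence
    lam = [0.05, 0.4, 3.0]
    dkappa = [0.08, 0.02, 0.004]
    # toy confinement weight: lognormal with unit mean and mode below one (skewed)
    xi = lambda r: r.lognormal(mean=-0.125, sigma=0.5)
    k = convergence_samples(lam, dkappa, xi_sampler=xi)
    print("mean =", round(k.mean(), 4), " std =", round(k.std(), 4),
          " demagnified fraction =", round(float(np.mean(k < 0)), 3))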
in this sectionwe will discuss the power spectrum of our model universe and the importance of large - scale correlations for the lensing pdf .we start by making a connection between the sgl method and the so - called `` halo model '' ( see , for example , and for a review ) , where inhomogeneities are approximated by a collection of different kinds of halos whose spatial distributions satisfies the linear power spectrum .the idea behind the halo model is that on small scales ( large wavenumbers ) the statistics of matter correlations are dominated by the internal halo density profiles , while on large scales the halos are assumed to cluster according to linear theory .the two components are then combined together .the sgl modeling of the inhomogeneities can be thought as a two - step halo model where we first create the random filamentary structures and then place the halos randomly within these structures .similarly to the halo model , one can then combine the linearly - evolved power spectrum with the nonlinear one coming from the filaments and the halos they contain . in this senseour filamentary structures extend the halo model by introducing correlations in the intermediate scales between the halo substructures and the cosmological scale controlled by the linear power spectrum . in the halo modelthe power spectrum can be computed analytically , but here the calculation has to be done numerically .we are currently extending the sgl method such that the simulation will produce also the power spectrum in addition to the lensing pdf .the power spectrum can indeed place useful constraints on the filament parameters which , as remarked in section [ mati ] , are not tightly constrained by observations .while the power spectrum is relevant for understanding the correlations at the largest observable scales , the lensing pdf depends mainly on the much stronger inhomogeneities at smaller scales .this can be understood from eq .( [ eq : kappa1 ] ) which shows the direct dependence of the lensing convergence on the density contrast .the small lensing variance from the large - scale correlations induced by the linear power spectrum was numerically computed , for example , in .moreover , weak lensing and the power spectrum probe somewhat different aspects of the inhomogeneities .the web - like structures of filaments and voids that affect weak lensing are mainly described by higher order correlation terms beyond the power spectrum , and so special care has to be put in designing the filamentary structure .this is indeed the goal of the sgl method : to give an accurate and flexible modeling of the universe as far as its lensing properties are concerned .our main goal in this section is to compare the sgl approach with the convergence pdf computed from large scale simulations .the idea is that by achieving a good agreement with numerical simulations we are proving that the sgl method does provide a good and accurate description of the weak lensing phenomena .as we shall see , this is indeed the case , as we can naturally reproduce the lensing pdf of the millenium simulation .let us stress that while a comparison to simulations is a good benchmark test for the sgl approach , the simulations and underlying model do not necessarily describe nature .indeed observations do not yet provide strong constraints to the lensing pdf , which leaves room for very different types of large scale structures , examples of which have been studied for example in ref . . 
with the accuracy of the method tested, one can reliably compute the effects of selection biases using sgl . while we lack the necessary expertise to quantitatively estimate the possible bias parameters , we will study a simple toy model for the survival probability function to show qualitatively how selection effets might bias the observable pdf .finally , we will also show how a jdem - like survey could constrain the lensing pdf relative to a given cosmological model .we shall begin , however , by the comparison with the simulations .we shall now confront the sgl method with the cosmology described by the millennium simulation ( ms ) .accordingly , we will fix the cosmological parameters to , , , , and .moreover , the mass function of eq .( [ jenk ] ) agrees with the ms results .these parameters completely fix the background cosmology and the halos .we are then left with the filament parameters to specify .first , we fix , which means that half of the unvirialized mass forms filaments , while the other half is uniformly distributed .this value determines the maximum possible demagnification in the lensing pdf , corresponding to light that misses all the inhomogeneities . for the background model described above thisimplies mag . for each sgl method evaluates the corresponding occupational number from the poisson statistics .the program mathematica using one core of a cpu at ghz takes a time s to produce an array of poisson numbers .other programs will likely have similar performances . determines the statistics with which the lensing pdf is generated . using this information wecan then estimate the performance of the sgl method by evaluating .the results are shown in table [ turbo ] where we fixed , and .confining halos within filaments typically makes calculations about ten times longer .p. valageas , astron .astrophys . * 356 * , 771 ( 2000 ) .d. munshi and b. jain , mon . not .soc . * 318 * , 109 ( 2000 ) .y. wang , d. e. holz and d. munshi , astrophys .j. * 572 * , l15 ( 2002 ) .s. das and j. p. ostriker , astrophys .j. * 645 * , 1 ( 2006 ) .l. amendola , k. kainulainen , v. marra and m. quartin , phys .* 105 * , 121302 ( 2010 ) .s. hilbert , s. d. m. white , j. hartlap and p. schneider , mon . not .* 382 * , 121 ( 2007 ) .s. hilbert , s. d. m. white , j. hartlap and p. schneider , mon . not .* 386 * , 1845 ( 2008 ) .v. springel _ et al ._ , nature * 435 * , 629 ( 2005 ) . m. chevallier and d. polarski , int . j. mod. phys .d * 10 * , 213 ( 2001 ) ; e. v. linder , phys .lett . * 90 * , 091301 ( 2003 ) . m. bartelmann and p. schneider , phys .rept . *340 * , 291 ( 2001 ) .a. jenkins _ et al ._ , mon . not .soc . * 321 * , 372 ( 2001 ) .r. kantowski , astrophys .j. * 155 * , 89 ( 1969 ) .v. marra , e. w. kolb , s. matarrese and a. riotto , phys .d * 76 * , 123004 ( 2007 ) ; v. marra , e. w. kolb and s. matarrese , phys .d * 77 * , 023003 ( 2008 ) ; v. marra , arxiv:0803.3152 [ astro - ph ] ; e. w. kolb , v. marra and s. matarrese , gen .grav . * 42 * , 1399 ( 2010 ) .e. rozo _ et al ._ , astrophys . j. * 708 * , 645 ( 2010 ) . w. j. percival , astron .astrophys . * 443 * , 819 ( 2005 ) ; see also : s. basilakos , astrophys .j. * 590 * , 636 ( 2003 ) .d. j. eisenstein and w. hu , astrophys .j. * 496 * , 605 ( 1998 ) .m. j. .white , astrophys. j. suppl .* 143 * , 241 ( 2002 ) .r. k. sheth and g. tormen , mon . not .soc . * 308 * ( 1999 ) 119 .j. f. navarro , c. s. frenk and s. d. m. white , astrophys . j. * 462 * , 563 ( 1996 ) .a. r. duffy , j. schaye , s. t. kay and c. 
dalla vecchia , mon . not . soc . * 390 * , l64 ( 2008 ) . e. komatsu _ et al ._ [ wmap collaboration ] , astrophys . j. suppl . * 180 * , 330 ( 2009 ) .
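as an aside to the timing remark made above , before the reference list : drawing the poisson occupation numbers is the kind of operation that vectorized libraries handle in seconds , consistent with the figure quoted for mathematica . the snippet below is a generic python illustration with an arbitrary mean and array size of our own choosing , not the code used by the authors .

```python
import time
import numpy as np

rng = np.random.default_rng(0)
lam = 3.0          # hypothetical mean occupation number per cell
n_cells = 10**7    # hypothetical array size

t0 = time.perf_counter()
occupancy = rng.poisson(lam, size=n_cells)
elapsed = time.perf_counter() - t0
print(n_cells, "poisson draws in", round(elapsed, 2), "s; sample mean =", occupancy.mean())
```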
we revise and extend the stochastic approach to cumulative weak lensing ( hereafter the sgl method ) first introduced in ref . . here we include a realistic halo mass function and density profiles to model the distribution of mass between and within galaxies , galaxy groups and galaxy clusters . we also introduce a modeling of the filamentary large - scale structures and a method to embed halos into these structures . we show that the sgl method naturally reproduces the weak lensing results for the millennium simulation . the strength of the sgl method is that a numerical code based on it can compute the lensing probability distribution function for a given inhomogeneous model universe in a few seconds . this makes it a useful tool to study how lensing depends on cosmological parameters and its impact on observations . the method can also be used to simulate the effect of a wide array of systematic biases on the observable pdf . as an example we show how simple selection effects may reduce the variance of observed pdf , which could possibly mask opposite effects from very large scale structures . we also show how a jdem - like survey could constrain the lensing pdf relative to a given cosmological model . the updated code is available at .
shannon s source - channel separation theorem essentially states that asymptotically there is no loss from optimum by decoupling the source coding component and channel coding component in a point - to - point communication system .this separation result tremendously simplifies the concept and design of communication systems , and it is also the main reason for the division between research in source coding and channel coding .however , it is also well known that in many multi - user settings , such a separation indeed incurs certain performance loss ; see , _e.g. _ , . for this reason ,joint source - channel coding has attracted an increasing amount of attention as the communication systems become more and more complex .one of the most intriguing problems in this area is joint source - channel coding of a gaussian source on a gaussian broadcast channel with users under an average power constraint .it was observed by goblick that when the source bandwidth and the channel bandwidth are matched , _i.e. _ , one channel use per source sample , directly sending the source samples on the channel after a simple scaling is in fact optimal , but the separation - based scheme suffers a performance loss .however , when the source bandwidth and the channel bandwidth are not matched , such a simple scheme is no longer optimal .many researchers have considered this problem , and significant progress has been made toward finding better coding schemes based on hybrid digital and analog signaling ; see , _e.g. _ , and the references therein . in spite of the progress on the achievability schemes , our overall understanding on this problem is still quite limited . as pointed out by caire , the key difficulty appears to be finding meaningful outer bounds .such outer bounds not only can provide a concrete basis to evaluate various achievability schemes , but also may provide insights into the structure of good or even optimal codes , and may further suggest simplification of the possibly quite complex optimal schemes in certain distortion regimes . in this regard , the result by reznic _ is particularly important , where they derived a non - trivial outer bound for the achievable distortion region for the two - user system .this outer bound relies on a technique previously used in the multiple description problem by ozarow , where one additional random variable beyond those in the original problem is introduced .the bound given in is however rather complicated , and was only shown to be asymptotically tight for certain high signal to noise ratio regime .in this work , we derive an outer bound for the -user problem using a similar technique as that used in , however , more than one additional random variable is introduced .the technique used here also bears some similarity to that used in .the outer bound has a more concise form than the one given in , but for the case , it can be shown that they are equivalent . this outer bound is in fact a set of outer bounds parametrized by non - negative variables .though one can optimize over these variables to find the tightest one , this optimization problem appears difficult .thus we take an approach similar to the one taken in , and choose some specific values for the variables which gives specific outer bounds . 
moreover , by combining these specific outer bounds with the simple achievability scheme based on source - channel separation , we provide approximate characterizations of the achievable distortion region within some universal constant multiplicative factors , independent of the signal to noise ratio and the bandwidth mismatch factor . in one of the approximations , the multiplicative factor is roughly of form for the distortion at the -th user , while in the other , the factor is for all the distortions .thus although shannon s source - channel separation result does not hold strictly in this problem , it indeed holds in an approximate manner .in fact , this set of results is extremely flexible , and can be applied in the case with an infinite number of users but the minimum achievable distortion is bounded away from zero , for which we can conclude that the source - channel separation based approach is also within certain finite constant multiplicative factors of the optimum .in this case , these constants can be upper bounded by factors related to the disparity between the best and worse distortions , which is not influenced by the number of users being infinite .though the outer bound is derived using techniques that have some precedents in the information theory literature , the difficulty lies in determining which terms to bound . in contrast to pure source coding problems or pure channel coding problems , where we can usually meaningfully bound a linear combination of rates , in a joint source - channel coding problem the notion of rates does not exist . in ,the lower bound on one distortion is given in terms of a function of the other distortion in the two - user problem .it is clear that such a proof approach becomes unwieldy for the general -user case . in this work ,we instead derive bounds for some quantity which at the first sight may seem even unrelated to the problem , but eventually serves as an interface between the source and channel coding components , thus replacing the role of `` rates '' in traditional shannon theory proofs . 
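as a concrete illustration of the separation - based achievability scheme invoked above , the python sketch below combines the standard superposition - coding rates of a degraded gaussian broadcast channel with the distortions of an ideal gaussian successive - refinement code , d_k = 2^(-2b(r_1+...+r_k)) for a unit - variance source . the power constraint p , the noise variances n_k and the power split alpha in the example call are arbitrary toy values of ours , and the notation anticipates the formal model of the next section .

```python
import numpy as np

def separation_distortions(P, N, alpha, b):
    """distortions achieved by separation on a degraded gaussian broadcast channel.

    P     : total channel power
    N     : noise variances N[0] >= N[1] >= ... (user 0 is the worst user)
    alpha : fractions of P assigned to each user's message (sums to 1)
    b     : bandwidth mismatch factor (channel uses per source sample)
    rates follow superposition coding; distortions follow ideal gaussian
    successive refinement of a unit-variance source.
    """
    N = np.asarray(N, dtype=float)
    power = np.asarray(alpha, dtype=float) * P
    K = len(N)
    rates = np.empty(K)
    for k in range(K):
        interference = power[k + 1:].sum()      # messages intended for better users
        rates[k] = 0.5 * np.log2(1.0 + power[k] / (N[k] + interference))
    cum_rate = np.cumsum(rates)                  # user k decodes messages 0..k
    return 2.0 ** (-2.0 * b * cum_rate)

# toy example: 3 users, bandwidth expansion b = 2
print(separation_distortions(P=10.0, N=[4.0, 1.0, 0.25], alpha=[0.7, 0.2, 0.1], b=2.0))
```

the resulting distortions are non - increasing in the user index , as required of any point in the achievable region .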
inspired by a recent work of avestimehr , caire and tse , where source - channel separation in more general networks is considered ,we further show that our technique can be conveniently extended to general broadcast channels , and the source - channel separation based scheme is within the same multiplicative constants of the optimum as for the gaussian channel case .the rest of the paper is organized as follows .section [ sec : definition ] gives the necessary notation and reviews an important lemma useful in deriving the outer bound .the main results are presented in section [ sec : mainresult ] , and the proofs for these results are given in section [ sec : proof ] .the extension to general broadcast channels is given in section [ sec : general ] , and section [ sec : conclusion ] concludes the paper .in this section , we give a formal definition of the gaussian source broadcast problem in the context of gaussian broadcast channels ; the notation will be generalized in section [ sec : general ] when other broadcast channels are considered .let be a stationary and memoryless gaussian source with zero - mean and unit - variance .the vector will be denoted as .we use to denote the domain of reals , and to denote the domain of non - negative reals .the gaussian memoryless broadcast channel is given by the model where is the channel output observed by the -th receiver , and is the zero - mean additive gaussian noise on the channel input .thus the channel is memoryless in the sense that is a stationary and memoryless process .the variance of is denoted as , and without loss of generality , we shall assume the mean squared error distortion measure is used , which is given by . the encoder maps a source sample block of length into a channel input block of length , and each decoder maps the corresponding channel output block of length into a source reconstruction block of length .the bandwidth mismatch factor is thus defined as which is essentially the ( possibly fractional ) channel uses per source sample ; see fig . [fig : systemdiag ] .the channel input is subject to an average power constraint ., width=302 ] we can make the codes in consideration more precise by introducing the following definition . an gaussian source - channel broadcast code is given by an encoding function such that and decoding functions and their induced distortions where is the expectation operation .note that there are two kinds of independent randomness in the system , the first of which is by the source , and the second is by the channel noises ; the expectation operation in ( [ eqn : expectation ] ) is taken over both of them . in the definition , in the expression is understood as the length- vector addition . 
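the bandwidth - matched case b = 1 recalled in the introduction gives a quick sanity check of the definitions above : scaling each source sample to power p and letting every receiver form its linear mmse estimate attains n_k/(p+n_k) for each user , the point - to - point optimum . the monte carlo sketch below ( python , toy parameters of our own choosing , not code from the paper ) verifies this numerically .

```python
import numpy as np

rng = np.random.default_rng(0)
P = 4.0
N = [2.0, 1.0, 0.5]        # noise variances, user 0 being the worst user
m = 200_000                # source samples = channel uses, since b = 1

s = rng.standard_normal(m)             # unit-variance gaussian source
x = np.sqrt(P) * s                     # uncoded transmission, meets the power constraint

for k, Nk in enumerate(N):
    y = x + rng.normal(scale=np.sqrt(Nk), size=m)
    s_hat = np.sqrt(P) / (P + Nk) * y  # linear mmse estimate of s from y
    print(f"user {k}: empirical mse = {np.mean((s - s_hat) ** 2):.4f}, "
          f"theory N/(P+N) = {Nk / (P + Nk):.4f}")
```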
from the above definition ,it is clear that the performance of any gaussian joint source - channel code depends only on the marginal distribution of , but not the joint distribution .this implies that physical degradedness does not differ from statistical degradedness in terms of the system performance .since the gaussian broadcast channel is always statistically degraded , we shall assume physical degradedness from here on without loss of generality .the channel noises can thus be written as where is a zero - mean gaussian random variable with variance , which is independent of everything else ; for convenience , we define , and it follows that and .[ def : distortionvector ] a distortion vector , where is achievable under power constraint and bandwidth mismatch factor , if for any and sufficiently large , there exist an integer and an gaussian source - channel broadcast code such that note that the constraint is without loss of generality , because otherwise the problem can be reduced to an alternative one with fewer users due to the assumed physical degradedness . the collection of all the achievable distortion vectors under power constraint and bandwidth mismatch factor is denoted by , and this is the region that we are interested in .one important result we need in this work is the following lemma , which is a slightly different version of the one given in .[ lemma : difference ] let be a random variable jointly distributed with the gaussian source vector in the alphabet , such that there exists a deterministic mapping satisfying let and , where and are mutually independent gaussian random variables independent of the gaussian source and the random variable , with variance and , respectively . then with and , we have 1 . * mutual information bound * 2 . * bound on mutual information difference * the proof of this lemma is almost identical to the one given in .the only difference between the two versions is that in the random variable is in fact a deterministic function of , however it is rather straightforward to verify that this condition was never used in the proof given in ; we include the proof of this lemma in the appendix for completeness .our main results for gaussian source broadcast on gaussian broadcast channels are summarized in theorem [ theorem : maintheorem ] , corollary [ corollary : firstcorollary ] , proposition [ proposition : firstcorollary ] , corollary [ corollary : infiniteusers ] and corollary [ corollary : secondcorollary ] , the proofs of which are given in the next section ; extensions of these results to general broadcast channels are given in section [ sec : general ] .define the region in ( [ eqn : innerbound ] ) on the top of next page , which is in fact the inner bound via source - channel separation .next define the regions in ( [ eqn : firstoutbound ] ) and ( [ eqn : secondouterbound ] ) also on the top of next page , which are in fact outer bounds to the achievable distortion region .we have the the following theorem . 
[ theorem : maintheorem ] theorem [ theorem : maintheorem ] is stated as inner and outer bounds to the achievable distortion region , however it can be observed that the bounds have similar forms , and their difference , in terms of distortions , can be bounded by some multiplicative constants .the following corollary follows directly from theorem [ theorem : maintheorem ] , by comparing ( [ eqn : innerbound ] ) and ( [ eqn : firstoutbound ] ) .[ corollary : firstcorollary ] if , and if for , then .the condition in corollary [ corollary : firstcorollary ] is to ensure that the distortion vector satisfies the monotonicity requirement in definition [ def : distortionvector ] and ( [ eqn : innerbound ] ) .this result has the following intuitive interpretation if the condition indeed holds that for all : if a genie helps the separation - based scheme by giving each individual user half a bit information per source sample , and at the same time all the better users also receive this half a bit information for free , then the separation - based scheme is as good as the optimal scheme. this approximation can in fact be refined , and for this purpose , the following additional definition is needed . for any , we associate with it a _ relaxed distortion vector _ and a binary labeling vector in a recursive manner for , and we have defined for convenience . it is easily verified that for , and moreover .[ proposition : firstcorollary ] let be the relaxed distortion vector of . if , then .the notion of relaxed distortion vector essentially removes the rather artificial condition in corollary [ corollary : firstcorollary ] .when this condition does not hold for some , the relaxed distortion vector is introduced to replace , which in this case does not satisfy the monotonicity requirement in definition [ def : distortionvector ] and thus is not a valid choice of a distortion vector ; nevertheless , in this case , the difference between the original distortion vector and its relaxed version is in fact smaller , being , instead of for as in the case already considered in corollary [ corollary : firstcorollary ] .proposition [ proposition : firstcorollary ] can be used in the situation where there are an infinite number of users such as in a fading channel .let the set of users indexed by and their associated distortions be denoted as , since there may be an uncountably infinite many of them .if we apply the construction given in ( [ eqn : enhanceddistortion ] ) , with replaced by , taking the role of and taking the role of , then the following lemma is straightforward by observing that and .[ lemma : maximumgroups ] the sequence specified by ( [ eqn : enhanceddistortion ] ) satisfies . it is clear that the maximum multiplicative constant is less than in the statement of proposition [ proposition : firstcorollary ] . if there exists a lower bound on the achievable distortion for the best user , denoted as , which is strictly positive , _i.e. 
_ , , then since , the multiplicative factor can be bounded as thus even when the number of users is infinite , as long as the lower bound is bounded away from zero , the multiplicative factors are in fact finite .more formally , we have the following corollary , however a more rigorous approach is to derive the outer bounds for this case and show the result holds .this can indeed be done either along the line of the proof given in section [ sec : proof ] with careful replacement of summation by integral , or more straightforwardly along the line of proof given in section [ sec : general ] . ] .[ corollary : infiniteusers ] for an infinite number of users indexed by with , let be the relaxed distortion vector of . if , then , and furthermore , .the next corollary gives another version of the approximation , essentially stating that for any achievable distortion vector , its -fold multiple is achievable using the separation approach . in terms of the genie - aided interpretation, the genie only needs to provide bits common information to the users in the separation - based scheme , then it is as good as the optimal scheme .more formally , the following corollary follows directly from theorem [ theorem : maintheorem ] .[ corollary : secondcorollary ] if , then , where .theorem [ theorem : maintheorem ] , proposition [ proposition : firstcorollary ] and the corollaries provide approximate characterizations of the achievable distortion region , essentially stating that the loss of the source - channel separation approach is bounded by constants .the bound on the gap is chosen to be ( largely ) independent of a specific distortion tuple on the boundary of , but it will become clear in the next section that such a choice is not necessary .the proofs of theorem [ theorem : maintheorem ] and proposition [ proposition : firstcorollary ] rely heavily on the following outer bound , which is one of the main contributions of this work . [theorem : outerbound ] let be any non - negative real values , and .if , then ^{\frac{1}{b}}\leq p+n_1.\label{eqn : outerbound}\end{aligned}\ ] ] with the above theorem in mind , let us denote the set of distortion vectors satisfying ( [ eqn : outerbound ] ) for a specific choice of as , _i.e. _ , ( [ eqn : defineoutbounds ] ) as given on the top of next page .^{\frac{1}{b}}\leq p+n_1,\right.\nonumber\\ & \qquad\qquad\qquad\qquad\qquad\qquad\left.\phantom{\sum_{k=1}^k \delta n_k \left[\frac{(1+\tau_k)\prod_{j=2}^k(d_j+\tau_{j-1})}{\prod_{j=1}^k ( d_j+\tau_j)}\right]^{\frac{1}{b } } } 1\geq d_1\geq d_2 \geq\ldots\geq d_k\geq 0\right\}.\label{eqn : defineoutbounds}\end{aligned}\ ] ] thus theorem [ theorem : outerbound ] essentially states that for any valid choice of .the following corollary is then immediate .[ corollary : outerbound ] to illustrate corollary [ corollary : outerbound ] , let us consider the case for which the bound involves only one parameter .for this case , it can be shown through some algebra that this outer bound is equivalent to the one given in . in fig .[ fig : outerbound ] , we illustrate the outer bounds for several specific choices of . 
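the condition of theorem [ theorem : outerbound ] is straightforward to evaluate numerically once the parameters are fixed . the python sketch below implements the left - hand side of ( [ eqn : outerbound ] ) as displayed in ( [ eqn : defineoutbounds ] ) , taking delta n_k = n_k - n_{k+1} with the convention that the noise beyond the last user is zero ; the distortion vector , noise variances and tau values in the example call are arbitrary illustrative numbers , and any further conventions on the parameters that are garbled in the text above are not enforced here .

```python
import numpy as np

def outer_bound_lhs(D, N, tau, b):
    """left-hand side of the outer-bound condition (eqn:outerbound).

    D   : distortions D[0] >= ... >= D[K-1] (user 1 is the worst user)
    N   : noise variances N[0] >= ... >= N[K-1]
    tau : non-negative parameters tau_1, ..., tau_K
    b   : bandwidth mismatch factor
    """
    D = np.asarray(D, dtype=float)
    N = np.asarray(N, dtype=float)
    tau = np.asarray(tau, dtype=float)
    K = len(D)
    dN = np.append(N[:-1] - N[1:], N[-1])        # delta N_k, with N_{K+1} = 0
    total = 0.0
    for k in range(K):                           # k = 0 corresponds to user 1
        num = (1.0 + tau[k]) * np.prod(D[1:k + 1] + tau[0:k])
        den = np.prod(D[:k + 1] + tau[:k + 1])
        total += dN[k] * (num / den) ** (1.0 / b)
    return total

D = [0.5, 0.2, 0.1]
N = [4.0, 1.0, 0.25]
P, b = 10.0, 2.0
lhs = outer_bound_lhs(D, N, tau=[0.3, 0.1, 0.05], b=b)
print(lhs, "<=", P + N[0], "?", lhs <= P + N[0])
```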
for comparison , the achievable region using the proposed scheme in also given .note that although the inner bound given by this scheme is extremely close to the outer bound , it appears they do not match exactly .it is worth emphasizing that we view this outer bound differently from the authors in : for each possible value of we view the condition ( [ eqn : outerbound ] ) as specifying an outer bound for the distortion region ; in contrast , the authors of viewed the distortion as being lower bounded by a function of , and the parameter was viewed as an additional variable that is subject to optimization , and consequently only the optimal choice of value was of interest .these two views are complementary , however the former view appears to be more natural for the -user problem , which also readily leads to the approximate characterizations . in certain cases , the second view may be more convenient , such as when we are given a specific achievable distortion tuple , and wish to determine how much further improvement is possible or impossible . for , the properties of the outer bound were thoroughly investigated in . in certain regimes ,this outer bound in fact degenerates for the case of bandwidth compression , and it is looser than the trivial outer bound with each user being optimal in the point - to - point setting .due to its non - linear form , the optimization of this bound is rather difficult , and it also appears difficult to determine whether it is always looser than the trivial outer bound in all distortion regimes with bandwidth compression .nevertheless , it is clear that this outer bound always holds whether the bandwidth is expanded or compressed , and the approximate characterizations are valid in either case .a different and simpler approximate characterization may in fact be more useful for the bandwidth compression case .consider a different genie who helps the separation - based scheme by giving each individual user half a bit information _ per channel use _ , and at the same time all the better users also receive this half a bit information for free , then the genie - aided separation - based scheme is as good as the optimal scheme , and moreover each user can in fact achieve the optimal point - to - point distortion . to see this approximation holds , first observe that the following broadcast channel rates are achievable by using the gaussian broadcast channel capacity region characterization ( it is particularly easy by using the alternative gaussian broadcast channel capacity characterization given in ( [ eqn : capacity ] ) ) the -th user can thus utilize a total rate of per channel use on this broadcast channel ; together with the genie - provided rates , it will have at least a total rate of per channel use , _i.e. 
_ , the optimal point to point channel rate .since the gaussian source is successively refinable , it is now clear that each user can achieve the optimal point - to - point distortion with this genie - aided separation - based scheme .note that though this approximation is good for bandwidth compression , it can be rather loose when the bandwidth expansion factor is large .in contrast , the approximations given in theorem [ theorem : maintheorem ] and proposition [ proposition : firstcorollary ] are independent of the bandwidth mismatch factor ( the genie provides information in terms of _ per source sample _ ) ; another difference is that the approximations given in theorem [ theorem : maintheorem ] and proposition [ proposition : firstcorollary ] rely on the new outer bound , instead of the simple point - to - point distortion outer bound .it is clear from the above discussion that the outer bound in theorem [ theorem : maintheorem ] may be further improved by taking its intersection with the trivial point - to - point outer bound . in the remainder of this paper , we do not pursue such possible improvements , but instead focus on the proofs for the results stated in theorem [ theorem : maintheorem ] and proposition [ proposition : firstcorollary ] .the proofs of the main results for gaussian source broadcast on gaussian broadcast channels are given in this section .we start by establishing a simple inner bound for the distortion region based on source - channel separation , and then focus on deriving an outer bound , or more precisely a set of outer bounds .the approximate characterizations are then rather straightforward by combining these two bounds . from here on , we shall use natural logarithm for concreteness , though choosing logarithm of a different base does not make any essential difference .the source - channel separation based coding scheme we consider is extremely simple , which is the combination of a gaussian successive refinement source code and a gaussian broadcast channel code ; this scheme was thoroughly investigated in , and a solution for the optimal power allocation was given to minimize the expected end - user distortion . since gaussian broadcast channel is degraded , a better user can always decode completely the messages sent to the worse users , and thus a successive refinement source code is a perfect match for this channel .note that such a source - channel separation approach is not optimal in general for this joint source - channel coding problem ; see for example .the gaussian broadcast channel capacity region is well known , which is usually given in a parametric form in terms of the power allocation . in this work , we will use an alternative representation , which first appeared in and was instrumental for deriving the optimal power allocation solution in .the gaussian broadcast channel capacity region ( per channel use ) can be written in the form in ( [ eqn : capacity ] ) as given on the top of next page . the rate is the individual message rate intended only to the -th user , however due to the degradedness , all the better users can also decode this message . since the gaussian source is successively refinable , by combining an optimal gaussian successive refinement source code with a gaussian broadcast code that ( asymptotically ) achieves ( [ eqn : capacity ] ), we have the following theorem .[ theorem : innerbound ] we wish to show that any is indeed achievable . 
using the separation scheme ,we only need to show the channel rates specified by are achievable on this gaussian broadcast channel .the non - negative vector is uniquely determined by , and it is straightforwardly seen that it indeed satisfies the inequality in ( [ eqn : capacity ] ) .the proof is thus complete .next we derive a set of conditions that any achievable distortion vector has to satisfy , _i.e. _ , theorem [ theorem : outerbound ] .let us first introduce a set of auxiliary random variables , defined as where s are zero gaussian random variables , with variance , and furthermore where is a zero - mean gaussian random variable , independent of everything else , with variance . for convenience ,we define , which implies ; furthermore , define , _i.e. _ , being a constant .this technique of introducing auxiliary random variables beyond those in the original problem was previously used in to derive outer bounds , and specifically in more than one random variable was introduced , whereas in only one was introduced . for any encoding and decoding functions ,we consider a quantity which bears some similarity to the expression for the gaussian broadcast channel capacity ( [ eqn : capacity ] ) , and we denote this quantity as due to its sum exponential form .\end{aligned}\ ] ] the subscript makes it clear that this quantity depends on the specific encoding and decoding functions .next we shall derive universal upper and lower bounds for this quantity regardless the specific choice of functions , which eventually yield an outer bound for .let be any encoding and decoding functions that ( asymptotically ) achieve the distortions .we first derive a lower bound for . observe that for , where the equality is due to the markov string , and the inequality is by lemma [ lemma : difference ] .moreover , also by lemma [ lemma : difference ] , we have it follows that summarizing the above bounds , we have .\end{aligned}\ ] ] next we turn to upper - bounding , and first write the following .\nonumber\\ & = \frac{2}{n}\sum_{j=1}^k \left[h(y^n_j|u^m_{j-1})-h(y^n_j|u^m_j)\right]\nonumber\\ & = \frac{2}{n}\sum_{j=1}^k h(y^n_j|u^m_{j-1})-\frac{2}{n}\sum_{j=1}^k h(y^n_j|u^m_j).\end{aligned}\ ] ] applying the entropy power inequality for , we have \nonumber\\ & \geq \exp\left[\frac{2}{n}h(y^n_{j+1}|u^m_j)\right]+\exp\left[\log(2\pi e\delta n_j)\right]\nonumber\\ & = \exp\left[\frac{2}{n}h(y^n_{j+1}|u^m_j)\right]+2 \pi e\delta n_j .\label{eqn : applyentropypower}\end{aligned}\ ] ] for ,it is clear that =\exp\left[\frac{2}{n}h(y^n_k|s^m)\right]\nonumber\\ & \qquad\qquad\qquad\qquad\qquad= 2 \pi en_k=2 \pi e \delta n_k.\end{aligned}\ ] ] by defining \triangleq 0 ] , as long as there exists a unique solution of such that ( [ eqn : kfactor ] ) holds with equality .this is indeed true for , which gives it follows that and thus our claim holds for .next suppose the claim is true for and we prove it is also true , for which we wish to find such that where the last equality is by the supposition that the claim holds true for .again by the monotonicity and continuity of , as long as the choice of satisfies there exists a valid solution in {\mathbb e}{\mathbb e}{\mathbb e}{\mathbb e}{\mathbb e}$ } } d(s(i),\hat{s}(i))+\tau}\\ & \stackrel{(e)}{\geq } \frac{m}{2}\log \frac{d+\tau'}{d+\tau},\end{aligned}\ ] ] where ( a ) is because is independent of ; ( b ) is because conditioning reduces entropy ; ( c ) is by applying the chain rule , and the facts that is an i.i.d . 
sequence and conditioning reduces entropy ;( d ) is by applying the mutual information game result that gaussian noise is the worst additive noise under a variance constraint ( pg .1 ) , and taking as channel input ; finally ( e ) is due to the convexity and monotonicity of in when . this completes the proof for the second claim .the authors would like to thank david tse for the stimulating discussions at several occasions as well as his insightful comments .the authors are also grateful to the anonymous reviewers for their comments .u. mittal and n. phamdo , `` hybrid digital - analog ( hda ) joint source - channel codes for broadcasting and robust communications , '' _ ieee transactions on information theory _ , vol .48 , no . 5 , pp .10821102 , may 2002 .m. skoglund , n. phamdo , and f. alajaji , `` hybrid digital - analog source - channel coding for bandwidth compression / expansion , '' _ ieee transactions on information theory _ , vol .52 , no . 8 , pp . 37573763 , aug .2006 . c. tian , a. steiner , s. shamai , and s. diggavi , `` successive refinement via broadcast : optimizing expected distortion of a gaussian source over a gaussian fading channel , '' , vol .54 , no . 7 , pp .29032918 , jul .2008 .d. n. c. tse , `` optimal power allocation over parallel gaussian broadcast channels , '' in ; available at http://www.eecs.berkeley.edu/pubs/techrpts/1999/3578.html.[http://www.eecs.berkeley.edu/pubs/techrpts/1999/3578.html ] c. tian , j. chen , s. diggavi and s. shamai , `` optimality and approximate optimality of source - channel separation in networks , '' , austin , tx , usa , jun .2010 , pp . 495499 .see also http://arxiv.org/abs/1004.2648 chao tian(s00 , m05 ) received the b.e .degree in electronic engineering from tsinghua university , beijing , china , in 2000 and the m.s . andd. degrees in electrical and computer engineering from cornell university , ithaca , ny in 2003 and 2005 , respectively .tian was a postdoctoral researcher at ecole polytechnique federale de lausanne ( epfl ) from 2005 to 2007 .he joined at&t labs research , florham park , new jersey in 2007 , where he is now a senior member of technical staff .his research interests include multi - user information theory , joint source - channel coding , quantization design and analysis , as well as image / video coding and processing .suhas n. diggavi ( s93 , m99 ) received the b. tech .degree in electrical engineering from the indian institute of technology , delhi , india , and the ph.d .degree in electrical engineering from stanford university , stanford , ca , in 1998 . after completing his ph.d . 
, he was a principal member of technical staff in the information sciences center , at&t shannon laboratories , florham park , nj . he then joined the faculty of the school of computer and communication sciences , epfl , where he directed the laboratory for information and communication systems ( licos ) . he is currently a professor in the department of electrical engineering at the university of california , los angeles . his research interests include wireless communications networks , information theory , network data compression and network algorithms . he is a recipient of the 2006 ieee donald fink prize paper award , the 2005 ieee vehicular technology conference best paper award and the okawa foundation research award . he is currently an editor for acm / ieee transactions on networking and ieee transactions on information theory . he has 8 issued patents . shlomo shamai ( shitz ) ( s80 , m82 , sm89 , f94 ) received the b.sc . , m.sc . , and ph.d . degrees in electrical engineering from the technion israel institute of technology , in 1975 , 1981 and 1986 , respectively . during 1975 - 1985 he was with the communications research labs in the capacity of a senior research engineer . since 1986 he has been with the department of electrical engineering , technion israel institute of technology , where he is now the william fondiller professor of telecommunications . his research interests encompass a wide spectrum of topics in information theory and statistical communications . shamai ( shitz ) is an ieee fellow and the recipient of the 2011 claude e. shannon award . he is the recipient of the 1999 van der pol gold medal of the union radio scientifique internationale ( ursi ) , and a co - recipient of the 2000 ieee donald g. fink prize paper award , the 2003 and the 2004 joint it / com societies paper awards , the 2007 ieee information theory society paper award , the 2009 european commission fp7 network of excellence in wireless communications ( newcom++ ) best paper award , and the 2010 thomson reuters award for international excellence in scientific research . he is also the recipient of the 1985 alon grant for distinguished young scientists and the 2000 technion henry taub prize for excellence in research . he has served as associate editor for the shannon theory of the ieee transactions on information theory , and has also served on the board of governors of the information theory society .
we consider the joint source - channel coding problem of sending a gaussian source on a -user gaussian broadcast channel with bandwidth mismatch . a new outer bound to the achievable distortion region is derived using the technique of introducing more than one additional auxiliary random variable , which was previously used to derive a sum - rate lower bound for the symmetric gaussian multiple description problem . by combining this outer bound with the achievability result based on source - channel separation , we provide approximate characterizations of the achievable distortion region within constant multiplicative factors . furthermore , we show that the results can be extended to general broadcast channels , and that the performance of the source - channel separation based approach is also within the same constant multiplicative factors of the optimum . index terms : gaussian source , joint source - channel coding , squared error distortion .
in this paper , our goal is to design new decoding algorithms that can enhance techniques known to date for rm codes .in general , rm codes can be designed from the set of all -variate boolean polynomials of degree or less . hereeach polynomial is defined on the -dimensional space for any we consider the sequence of binary values obtained as argument runs through .these sequences - codewords - form an rm code , which is below denoted and has length dimension and distance as follows: the decoding algorithms discussed in this paper ( including the new algorithms ) can be applied to any rm code .however , we will mostly focus on their asymptotic performance obtained for long rm codes of _ fixed order _ to define their error - correcting performance , we use the following definition . given an infinite sequence of codes we say that a decoding algorithm has a sequence and a sequence if for correctly decodes all but a vanishing fraction of error patterns of weight or less ; fails to decode a nonvanishing fraction of error patterns of weight or less can satisfy the same definition nonexponential decoding algorithms known for rm codes can be loosely separated into three groups .first , _ __ was developed in the seminal paper .the algorithm requires complexity of order or less . for rm codes of fixed order it was proven in that majority decoding achieves maximum possible threshold ( here and below we omit index with a residual where is a constant that does not depend on and the second type of decoding algorithms makes use of the symmetry group of rm codes .one very efficient algorithm is presented in . for long rm codes this algorithm reduces the residual term from ( [ eps - maj ] ) to its square where . on the other hand ,the complexity order of of majority decoding is also increased in algorithm to almost its square the corresponding thresholds for higher orders are yet unknown .another result of concerns maximum - likelihood ( ml ) decoding .it is shown that ml decoding of rm codes of fixed order yields a substantially lower residual where however , even the best known algorithm of ml decoding designed by the multilevel trellis structure in has yet complexity that is exponent in . finally , various techniques were introduced in , , , and .all these algorithms use different recalculation rules but rely on the same code design based on the _ _plotkin construction _ _ the construction allows to decompose rm codes onto shorter codes , by taking subblocks * * * * and from codes and the results from , , and show that this recursive structure enables both encoding and bounded distance decoding with the lowest complexity order of known for rm codes of an arbitrary order . in the same vein , belowwe also employ plotkin construction .the basic recursive procedure will split rm code of length into two rm codes of length .decoding is then relegated further to the shorter codes of length and so on , until we reach basic codes of order or at these points , we use maximum likelihood decoding or the variants derived therefrom .by contrast , in all intermediate steps , we shall only recalculate the newly defined symbols . hereour goal is to find efficient _ recalculation rules _ that can _ provably _ improve the performance of rm codes .our results presented below in theorems [ th:1 - 2 ] and [ th:1 - 1 ] show that recursive techniques indeed outperform other polynomial algorithms known for rm codes .these results also show how decoding complexity can be traded for a higher threshold. 
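for reference , the standard parameters quoted above , length 2^m , dimension sum_{i<=r} binom(m,i) and minimum distance 2^{m-r} , can be tabulated with a few lines of python ; the sketch below is a plain illustration and plays no role in the decoding algorithms that follow .

```python
from math import comb

def rm_parameters(r, m):
    """length, dimension and minimum distance of the reed-muller code RM(r, m)."""
    n = 2 ** m
    k = sum(comb(m, i) for i in range(r + 1))
    d = 2 ** (m - r)
    return n, k, d

for r, m in [(1, 5), (2, 6), (3, 7)]:
    n, k, d = rm_parameters(r, m)
    print(f"RM({r},{m}): n={n}, k={k}, d={d}")
```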
[ th:1 - 2 ] long rm codes of fixed order can be decoded with linear complexity and decoding threshold [ th:1 - 1 ] long rm codes of fixed order can be decoded with quasi - linear complexity and decoding threshold rephrasing theorems [ th:1 - 2 ] and [ th:1 - 1 ] , we obtain the following long rm codes of fixed order can be decoded with vanishing output error probability and linear complexity ( or quasi - linear complexity on a binary channel with crossover error probability ( correspondingly , as note that theorem [ th:1 - 2 ] increases decoding threshold of the recursive techniques introduced in and from the order of to while keeping linear decoding complexity .th:1 - 1 ] improves both the complexity and residual of majority decoding of rm codes .when compared with the algorithm of , this theorem reduces the quadratic complexity to a quasi - linear complexity and also extends this algorithm to an arbitrary order of rm codes .the algorithms designed below differ from the former algorithms of , , and in both the intermediate recalculations and the stopping rules .firstly , we employ new _ intermediate recalculations _ , which yield the exact decoding thresholds , as opposed to the bounded distance threshold established in and .this leads us to theorem [ th:1 - 2 ] .secondly , by analyzing the results of theorem [ th:1 - 2 ] , we also change the former _ stopping rules _, all of which terminate decoding at the repetition codes .now we terminate decoding earlier , once we achieve the biorthogonal codes .this change yields theorem [ th:1 - 1 ] and substantially improves decoding performance ( this is discussed in section 7 ) .finally , we employ a new probabilistic analysis of recursive algorithms . in section 7, we will see that this analysis not only gives the actual thresholds but also shows how the algorithms can be advanced further .below in section 2 we consider recursive structure of rm codes in more detail . in section 3, we proceed with decoding techniques and design two different recursive algorithms and these algorithms are analyzed in sections 4 , 5 , and 6 , which are concluded with theorems [ th:1 - 2 ] and [ th:1 - 1 ] . in section 7, we briefly discuss extensions that include decoding lists , subcodes of rm codes , and soft decision channels . for the latter case, we will relate the noise power to the quantity thus , the residual will serve as a measure of the highest noise power that can be withstood by a specific low - rate code .consider any -variate boolean polynomial and the corresponding codeword with symbols below we assume that positions are ordered lexicographically , with being the senior digit .note that any polynomial can be split as where we use the new polynomials and these polynomials are defined over variables and have degrees at most and respectively .correspondingly , one can consider two codewords and that belong to the codes and . then representation ( [ poly ] ) converts any codeword to the form this is the well known by continuing this process on codes and we obtain rm codes of length and so on . finally , we arrive at the end nodes , which are repetition codes for any and full spaces for any this is schematically shown in fig . 1 for rm codes of length 8. now let be a block of bits that encode a vector by decomposing this vector into and we also split into two information subblocks and that encode vectors and respectively .in the following steps , information subblocks are split further , until we arrive at the end nodes or .this is shown in fig .2 . 
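the plotkin decomposition just described translates directly into a recursive encoder . the python sketch below is a minimal illustration : it terminates at the repetition code ( r = 0 ) and at the full space ( r = m ) , and consumes the information bits v - branch first , which is one convenient ordering rather than the exact bit - to - path assignment of fig . 2 .

```python
import numpy as np

def rm_encode(r, m, bits):
    """recursive plotkin (u, u+v) encoder for RM(r, m).

    bits : list of 0/1 information bits, consumed in place
           (v-part, i.e. RM(r-1, m-1), first; then the u-part RM(r, m-1)).
    """
    if r == 0:                                  # repetition code: one bit
        return np.full(2 ** m, bits.pop(0), dtype=np.uint8)
    if r == m:                                  # full space: 2^m bits
        return np.array([bits.pop(0) for _ in range(2 ** m)], dtype=np.uint8)
    v = rm_encode(r - 1, m - 1, bits)
    u = rm_encode(r, m - 1, bits)
    return np.concatenate([u, (u + v) % 2])

# example: RM(1,3) has k = 4 information bits and length 8
info = [1, 0, 1, 1]
print(rm_encode(1, 3, list(info)))
```

the example call encodes 4 information bits into an rm(1,3) codeword of length 8 .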
note that only one information bit is assigned to the left - end ( repetition ) code while the right - end code includes bits .below these bits will be encoded using the unit generator matrix . summarizing , we see that any codeword can be encoded from the information strings assigned to the end nodes or , by repeatedly combining codewords and in the -construction .0,0 1,0 1,1 2,02,1 2,2 3,0 3,1 3,2 3,3 fig .1 decomposition of rm codes of length 8 . given any algorithm in the sequel we use notation for its complexity .let denote the encoding described above for the code taking a complexity estimate from and its enhancement from , we arrive at the following lemma . fig .2 . decomposition of information paths rm codes can be recursively encoded with complexity _ proof ._ first , note that the end nodes and require no encoding and therefore satisfy the complexity bound ( [ encoding ] ) .second , we verify that code satisfies ( [ encoding ] ) if the two constituent codes do .let the codewords and have encoding complexity and that satisfies ( [ encoding ] ) .then their -combination requires complexity where extra additions ( were included to find the right half .now we substitute estimates ( [ encoding ] ) for quantities and . if then the two other cases , namely and can be treated similarly . now consider an information bit associated with a left node where . ] and let be any right - end path that ends at this node . then is associated with information bits. therefore we extend to the full length by adding any binary suffix this allows us to consider separately all information bits and use common notation .when all left- and right - end paths are considered together , we obtain all paths of length and binary weight or more .this gives one - to - one mapping between information bits and extended paths below all are ordered lexicographically , as -digital binary numbers .now we turn to recursive decoding algorithms .we map any binary symbol onto and assume that all code vectors belong to obviously , the sum of two binary symbols is being mapped onto the product of their images .then we consider any codeword transmitted over a binary symmetric channel with crossover probability the received block consists of two halves and , which are the corrupted images of vectors and .we start with a basic algorithm that will be later used in recursive decoding . in our decoding, vector will be replaced by the vectors whose components take on real values from the interval . } % \ ] ] in a more general scheme , we repeat this recursion by decomposing subblocks and further . on each intermediate step, we only recalculate the newly defined vectors and using ( [ 1 ] ) when decoder moves left and ( [ 3 ] ) when it goes right . finally , vectors and are decoded , once we reach the end nodes and . given any end code of length and any estimate employ the ( soft decision ) ( md ) decoding that outputs a codeword closest to in the .equivalently , maximizes the inner product the algorithm is described below .{l}% \text{algorithm } \psi_{\,r}^{m}(\mathbf{y)}.\medskip\\ \text{1 . if } 0<r < m\text { , perform } \psi_{\text{rec}}(\mathbf{y)}\text { } \\ \text{using } \psi(\mathbf{y}^{v})=\psi_{\,\,r-1}^{m-1}\text { and } % \psi(\mathbf{y}^{u})=\psi_{\,\,\,\,\,r}^{m-1}\text{.}\medskip\\ \text{2 . 
if } r=0,\text { perform md decoding } \\\psi(\mathbf{y}^{v})\text { for code } \left\ { % tcimacro{\qatop{r}{0}}% % beginexpansion \genfrac{}{}{0pt}{}{r}{0}% % endexpansion \right\ } .\medskip\\ \text{3 .if } r = m,\text { perform md decoding } \\ \psi(\mathbf{y}^{u})\text { for code } \left\ { % tcimacro{\qatop{r}{r}}% % beginexpansion \genfrac{}{}{0pt}{}{r}{r}% % endexpansion \right\ } .\end{array } } % \ ] ] thus , procedures and have a recursive structure that calls itself until md decoding is applied on the end nodes . nowthe complexity estimate follows. for any rm code algorithms and have decoding complexity and md decoding can be executed in operations and satisfies the bound ( [ comp - fi ] ) ( here we assume that finding the sign of a real value requires one operation ) . for biorthogonal codes , their md decoding can be executed in operations using the green machine or operations using fast hadamard transform ( see or , section 14.4 ) .obviously , this decoding satisfies the upper bound ( [ comp - f ] ) .second , for both algorithms and , vector in ( [ 1 ] ) can be calculated in operations while vector in ( [ 3 ] ) requires operations .therefore our decoding complexity satisfies the same recursion finally , we verify that ( [ comp - fi ] ) and ( [ comp - f ] ) satisfy the above recursion , similarly to the derivation of ( [ encoding ] ) . _ discussion ._ both algorithms and admit bounded distance decoding .this fact can be derived by adjusting the arguments of for our recalculation rules ( [ 1 ] ) and ( [ 3 ] ) .algorithm is also similar to decoding algorithms of and .however , our _ recalculation rules _ are different from those used in the above papers . for example, the algorithm of performs the so - called `` min - sum '' recalculation instead of ( [ 1 ] ) .this ( simpler ) recalculation ( [ 1 ] ) will allow us to substantially expand the _ `` provable '' _ decoding domain versus the bounded - distance domain established in and .we then further extend this domain in theorem [ th:1 - 1 ] , also using the new _ stopping rule _ that replaces in with in however , it is yet an open problem to find the decoding domain using any other recalculation rule , say those from , , , or . finally , note that the scaling factor in recalculation rule ( [ 3 ] ) brings any component back to the interval \mu(\xi)+1=(\mu(\,\underline{\xi}\,)+1)^{2}, \text{if}\;\xi_{s}=0, \qquad\qquad\mu(\xi)=\mu(\,\underline{\xi}\,)/2, \text{if}\;\xi _ { s}=1.\medskip ] the weakest path on the subset is its leftmost path _ proof . _first , note that on all left - end paths , the variances are calculated after steps , at the same node by contrast , all right - end paths end at different nodes therefore their variances are found after steps . to use lemma [ lm : nei1 ] , we consider an extended right - end path obtained by adding zeros .then we have inequality since the variance increases after zeros are added . despite this fact ,below we prove that from ( [ w1 ] ) and from ( [ w2 ] ) still represent the weakest paths , even after this extension .indeed , now all paths have the same length and the same weight so we can apply lemma [ lm : nei1 ] . recall that each path ends with the same suffix . in this case , is the leftmost path on . 
by lemma [ lm : nei1 ] , maximizes the variance over all finally , note that is the leftmost path on the total set since all zeros form its prefix .thus , is the weakest path .now we find the variances and for the weakest paths and [ lm : mu ] for crossover error probability , the weakest paths and give the variances _ proof ._ consider the weakest path from ( [ w1 ] ) .the recursion ( [ t12 ] ) begins with the original quantity after completing left steps the result is then we proceed with right steps , each of which cuts in half according to ( [ t1 ] ) .thus , we obtain equality ( [ t0 ] ) .formula ( [ mu1 ] ) follows from representation ( [ w2 ] ) in a similar ( though slightly longer ) way . lemma [ lm : mu ] allows us to use chebyshev s inequality for any path however , this bound is rather loose and insufficient to prove theorem 1 .therefore we improve this estimate , separating all paths into two different sets .namely , let be the subset of all left - end paths that enter the node with we will use the fact that any path satisfies the central limit theorem as grows .however , we still use chebyshev s inequality on the complementary subset in doing so , we take equal to the from theorem 1 : [ lm : fi]for rm codes with and fixed order used on a binary channel with crossover error probability algorithm gives on a path the asymptotic bit error rate with asymptotic equality on the weakest path . _ proof ._ according to ( [ md2 ] ) , any left - end path gives the rv which is the sum of i.i.d .limited rv for this number grows as or faster as in this case , the normalized rv satisfies the central limit theorem and its probability density function ( pdf ) tends to the gaussian distribution according to lemmas [ lm : path ] and [ lm : mu ] , the weakest path gives the maximum variance in particular , for equality ( [ t0 ] ) gives using gaussian distribution to approximate we take standard deviations and obtain ( see also remark 1 following the proof ) here we also use the asymptotic{ll}% & \\ & % \end{tabular } \ \ \\ ] ] valid for large this yields asymptotic equality for in ( [ gauss1 ] ) .for any other path is approximated by the normal rv with a smaller variance therefore we use inequality in ( [ gauss1 ] ) : finally , consider any path with in this case , we directly estimate the asymptotics of namely , we use the substitution in ( [ mu1 ] ) , which gives a useful estimate:{ll}% 2^{-m - r+g}(2r\ln m)^{-1 } , & \text{if}\;g>\frac{m - r}{2}+\ln m,\smallskip\\ 2^{-(m - r-2)/2}(2r\ln m)^{-1/2 } , & \text{if}\;g<\frac{m - r}{2}-\ln m , \end{array } \right .\label{mu - low}%\ ] ] thus , we see that for all variances have the same asymptotics and decline exponentially in as opposed to the weakest estimate ( [ small ] ). then we have which also satisfies ( [ gauss1 ] ) as _ discussion .considering approximation ( [ gauss ] ) for a general path we arrive at the estimate according to theorem xvi.7.1 from , this approximation is valid if the number of standard deviations is small relative to the number of rv in the sum in particular , we can use ( [ gauss ] ) for the path since ( [ small ] ) gives 2 .note that for variance in ( [ mu - low ] ) declines exponentially as moves away from . 
on the other hand, we can satisfy asymptotic condition ( [ gauss3 ] ) for any path if in ( [ gauss3 ] ) is replaced with parameter of a fixed degree as ] as we then use inequality valid for any instead of the weaker inequality ( [ gauss2 ] ) .thus , the bounds on probabilities rapidly decline as moves away from and the total block error rate also satisfies the same asymptotic bound ( [ gauss1]) \3 .note that the same minimum residual can also be used for majority decoding .indeed , both the majority and the recursive algorithms are identical on the weakest path .namely , both algorithms first estimate the product of channel symbols and then combine different estimates in ( [ md2 ] ) .however , a substantial difference between the two algorithms is that recursive decoding uses the previous estimates to process any other path . because of this , the algorithm outperforms majority decoding in both the complexity and ber for any \4 .theorem [ lm : fi ] almost entirely carries over to any .namely , we use the normal pdf for any . hereany variance declines as grows. therefore we can always employ inequality ( [ gauss2 ] ) , by taking the _ maximum possible _ variance obtained in ( [ small ] ) . on the other hand ,asymptotic equality ( [ gauss ] ) becomes invalid as grows . in this case ,tighter bounds ( say , the chernoff bound ) must be used on however , in this case , we also need to extend the second - order analysis of lemma [ lm : nei1 ] to exponential moments .such an approach can also give the asymptotic error probability for any however , finding the bounds on is an important issue still open to date . \5 .it can be readily proven that for sufficiently large the variance becomes independent of similar to the estimates obtained in the second line of ( [ mu - low ] ) .more generally , more and more paths yield almost equal contributions to the block error rate as grows .this is due to the fact that the neighboring paths exhibit similar performance on sufficiently good channels .now theorem 1 directly follows from theorem [ lm : fi ] ._ proof of theorem 1 ._ consider a channel with crossover probability for the output block error probability of the algorithm has the order most where is the number of information symbols .this number has polynomial order of on the other hand , formula ( [ gauss1 ] ) shows that declines faster than for any as a result , next , we note that the error patterns of weight or less occur with a probability since the above argument shows that decoder fails to decode only a vanishing fraction of error patterns of weight or less next , we need to prove that fails to correct nonvanishing fraction of errors of weight or less . in proving this , consider a higher crossover probability where for this our estimates ( [ t0 ] ) and ( [ gauss ] ) show that and also , according to ( [ tot3 ] ) , on the other hand , the central limit theorem shows that the errors of weight or more still occur with a vanishing probability thus , we see that necessarily fails on the weights or less , since the weights over still give a vanishing contribution to the nonvanishing error rate . proceeding with a proof of theorem [ th:1 - 1 ] , we summarize three important points that will be used below to evaluate the threshold of the algorithm . * 1 . 
*the received rv , all intermediate recalculations ( [ main ] ) , and end decodings on the right - end paths are identical in both algorithms and by contrast , any left - end path first arrives at some biorthogonal code of length and is then followed by the suffix also , let denote the codeword of where here we also assume that the first two codewords form the repetition code : for each define its support as the subset of positions that have symbols . here codewords with have support of the same size whereas also , below denotes any information symbol associated with a path .let the all - one codeword be transmitted and be received .consider the vector obtained on some left - end path that ends at the node by definition of md decoding , is incorrectly decoded into any with probability where in our probabilistic setting , each event is completely defined by the symbols which are i.i.d .recall that lemma [ lm : prob0 ] is `` algorithm - independent '' and therefore is left intact in .namely , we again consider the events and from ( [ events ] ) .similarly to ( [ tot4 ] ) , we assume that all preceding decodings are correct and replace the unconditional error probability with its conditional counterpart this probability satisfies the bounds here we take the probability of incorrect decoding into any single codeword as a lower bound ( in fact , below we choose ) , and the union bound as its upper counterpart .now we take parameters for rm codes with and fixed order used on a binary channel with crossover error probability algorithm gives for any path a vanishing bit error rate _ proof ._ consider any left - end path that ends at the node for any vector and any subset of positions , define the sum here form i.i.d .thus , the sum has the same pdf for any . in turn , this allows us to remove index from and use common notation then we rewrite bounds ( [ b - fi ] ) as equivalently , we use the normalized rv with expectation 1 and rewrite the latter bounds as similarly to the proof of theorem [ lm : fi ] , note that the sum also satisfies the central limit theorem for any and has pdf that tends to as thus , we see that depends only on the variance obtained on the sum of i.i.d .this variance can be found using calculations identical to those performed in lemmas [ lm : nei1 ] to [ lm : mu ] . in particular , for any we can invoke the proof of lemma [ lm : path ] , which shows that achieves its maximum on the leftmost path similarly to ( [ t0 ] ), we then find direct substitution in ( [ t14 ] ) gives now we almost repeat the proof of theorem [ lm : fi ] . for the first path we employ gaussian approximation as for maximum the latter inequality and ( [ main6 ] ) give the upper bound also , for any other path with we can use the same estimates in ( [ main6 ] ) due to the inequalities and finally , consider any path with in this case , we use chebyshev s inequality instead of gaussian approximation .again , for any node we can consider its leftmost path similarly to our previous calculations in ( [ t0 ] ) and ( [ mu1 ] ) , it can be verified that then for small , substitution gives the equality thus , we obtain chebyshev s inequality in the form and complete the proof , since bound ( [ log2 ] ) combines both estimates ( [ t15 ] ) and ( [ cheb3]). 
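the md decoding step applied at the biorthogonal end nodes in the argument above is , in practice , a maximum - correlation search that can be carried out with a fast hadamard transform , as recalled earlier in the complexity lemma . the python sketch below is a minimal illustration of that step for a soft +-1 input ; it returns the closest first - order codeword and omits the bookkeeping that maps the decision back to the information bits of a particular path .

```python
import numpy as np

def fht(y):
    """fast walsh-hadamard transform of a length-2^m vector (returns a new array)."""
    y = np.array(y, dtype=float)
    h = 1
    while h < len(y):
        for i in range(0, len(y), 2 * h):
            a = y[i:i + h].copy()
            b = y[i + h:i + 2 * h].copy()
            y[i:i + h] = a + b
            y[i + h:i + 2 * h] = a - b
        h *= 2
    return y

def decode_biorthogonal(y):
    """maximum-correlation decoding of RM(1, m) for a +-1 (soft) input y.

    returns the decoded +-1 codeword: a walsh function or its negation.
    """
    t = fht(y)
    j = int(np.argmax(np.abs(t)))
    sign = 1.0 if t[j] >= 0 else -1.0
    n = len(y)
    # rebuild the j-th walsh function from the parity of (j AND position index)
    walsh = np.array([(-1) ** bin(j & x).count("1") for x in range(n)], dtype=float)
    return sign * walsh

noisy = np.sign(np.random.default_rng(1).normal(loc=1.0, size=8))  # noisy all-ones word
print(decode_biorthogonal(noisy))
```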
_ proof of theorem 2 ._ we repeat the proof of theorem 1 almost entirely .consider a channel with crossover probability as the output block error probability of the algorithm satisfies the estimate where the number of different paths is bounded by formula ( [ log2 ] ) shows that all decline exponentially in as a result , we obtain asymptotic estimate on the other hand , the error patterns of weight or less occur with a total probability that tends to so decoder fails to decode only a vanishing fraction of these error patterns .now we take a smaller residual and consider a channel with crossover probability then direct substitution of in ( [ t14 ] ) gives .then formula ( [ lo1 ] ) shows that the decoding block error rate is note also that errors of weight or more occur with vanishing probability .thus , fails on errors of weight or less . _ discussion ._ the proofs of theorems 1 and 2 also reveal the main shortcoming of our probabilistic technique , which employs rather loose estimates for probabilities indeed , the first two moments of the random variables give tight approximation only for gaussian rv .by contrast , error probabilities slowly decline as whenever chebyshev s inequality ( [ cheb3 ] ) is applied for small parameters as a result , we can obtain a vanishing block error rate only if this is the case of rm codes of fixed order by contrast , the number of information symbols is linear in for rm codes of fixed rate this fact does not allow us to extend theorems 1 and 2 for nonvanishing code rates .more sophisticated arguments - that include the moments of an arbitrary order - can be developed in this case .the end result of this study is that recursive decoding of rm codes of fixed rate achieves the error - correcting threshold this increases times the threshold of bounded distance decoding .however , the overall analysis becomes more involved and is beyond the scope of this paper .now consider an infinite sequence of optimal binary codes of a low code rate used on a channel with high crossover error probability . 
according to the shannon coding theorem , ml decoding of such a sequence gives a vanishing block error probability if where is the inverse ( binary ) entropy function .note that for correspondingly , a vanishing block error probability is obtained for any residual next , recall that rm codes of fixed order have code rate for this rate , ml decoding of optimal codes gives thus , we see that optimal codes give approximately the same residual order ( [ ord - ml ] ) as the former order ( [ ml ] ) derived in for rm codes .in other words , rm codes of low rate can achieve nearly optimum performance for ml decoding .by contrast , low - complexity algorithm has a substantially higher residual that has the order of .this performance gap shows that further advances are needed for the algorithm the main problem here is whether possible improvements can be coupled with low complexity order of the performance gap becomes even more noticeable , if a binary symmetric channel is considered as a `` hard - decision '' image of an awgn channel .indeed , let the input symbols be transmitted over a channel with the additive white gaussian noise for code sequences of rate we wish to obtain a vanishing block error probability when in this case , the transmitted symbols are interchanged with very high crossover probability which gives residual thus , serves ( up to a small factor of as a measure of noise power in particular , ml decoding operates at the above residual from ( [ ord - ml ] ) and can withstand noise power of order up to by contrast , algorithm can successfully operate only when noise power has the lower order of similarly , algorithm is efficient when is further reduced to the order of . therefore for long rm codes , algorithm can increase times the noise power that can be sustained using the algorithm or majority decoding .however , performance of also degrades for longer blocks when compared to optimum decoding , though this effect is slower in than in other low - complexity algorithms known for rm codes . for moderate lengths ,this relative degradation is less pronounced , and algorithm achieves better performance . in particular , some simulation results are presented in fig .[ fig2 ] to fig .[ fig4 ] for rm codes , , and , respectively . on the horizontal axis ,we plot both input parameters - the signal - to noise ratio of an awgn channel and the crossover error probability of the corresponding binary channel. the output code word error rates ( wer ) of algorithms and represent the first two ( rightmost ) curves decoding is performed on a binary channel , without using any soft - decision information .these simulation results show that gains about db over on the code and about 0.5 db on the code even for high wer . a subsequent improvement can be obtained if we consider _ soft - decision _ decoding which recursively recalculates the _ posterior probabilities _ of the new variables obtained in both steps 1 and 2 .these modifications of algorithms and - called below and - are designed along these lines in .the simulation results for the algorithm are also presented in fig .[ fig2 ] to fig .[ fig4 ] , where these results are given by the third curve .this extra gain can be further increased if a few most plausible code candidates are recursively retrieved and updated in all intermediate steps .we note that the list decoding algorithms have been of substantial interest not only in the area of error control but also in the learning theory . 
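The comparison above treats the binary symmetric channel as the hard-decision image of an AWGN channel, so the crossover probability follows from the signal-to-noise ratio, while the largest crossover that any code of rate R can tolerate is set by the inverse binary entropy function. A minimal sketch of these two standard conversions; the rate used below (RM(2,8), k = 37, n = 256) is only an example and is not claimed to match the codes in the figures.

```python
import math

def q_func(x):
    """Tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bsc_crossover_from_snr(rate, ebn0_db):
    """Crossover probability of the hard-decision BSC image of a BPSK/AWGN channel."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return q_func(math.sqrt(2.0 * rate * ebn0))        # p = Q(sqrt(2 * R * Eb/N0))

def binary_entropy(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def shannon_crossover(rate, tol=1e-12):
    """Largest crossover p with capacity 1 - h(p) >= rate, found by bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 1.0 - binary_entropy(mid) >= rate:
            lo = mid
        else:
            hi = mid
    return lo

R = 37 / 256                      # e.g. RM(2, 8): k = 1 + 8 + 28 = 37, n = 256
for snr_db in (0.0, 2.0, 4.0):
    print(f"Eb/N0 = {snr_db} dB -> crossover p = {bsc_crossover_from_snr(R, snr_db):.4f}")
print(f"Shannon-limit crossover for R = {R:.3f}: p* = {shannon_crossover(R):.4f}")
```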
for _ long _ _ biorthogonal codes _ the pioneering randomized algorithm is presented in . for and this algorithm outputs a complete list of codewords located within the distance from any received vector , while taking only a polynomial time to complete this task with high probability substantial further advances are obtained for some low - rate -ary rm codes in and the papers cited therein . for binary rm codes of _ any order _ we mention three different soft decision list decoding techniques , all of which reduce the output wer at the expense of higher complexity .the algorithm of and reevaluates the most probable information subblocks on a _single run_. for eachpath the decoder - called below - updates the list of most probable information subblocks obtained on the previous paths this algorithm has overall complexity of order the technique of proceeds recursively at any intermediate node , by choosing codewords closest to the input vector processed at this node .these lists are updated in _multiple recursive runs ._ finally , the third novel technique executes sequential decoding using the main stack , but also utilizes the complementary stack in this process .the idea here is to lower - bound the minimum distance between the received vector and the closest codewords that will be obtained in the _ future steps_. computer simulations show that the algorithm of achieves the best complexity - performance trade - off known to date for rm codes of moderate lengths 128 to 512 . in fig .[ fig2 ] to fig .[ fig4 ] , this algorithm is represented by the fourth curve , which shows a gain of about 2 db over .here we take in fig .[ fig2 ] and in fig .[ fig3 ] and [ fig4 ] .finally , complexity estimates ( given by the overall number of floating point operations ) are presented for all three codes in table 1 .[ c]|c|c|c|c|c|code & & & & + & 857 & 1264 & 6778 & 29602 , + & 1753 & 2800 & 16052 & 220285 , + & 2313 & 2944 & 12874 & 351657 , + recall also that different information bits - even those retrieved in consecutive steps - become much better protected as recursion progresses .this allows one to improve code performance by considering a subcode of the original code , obtained after a few least protected information bits are removed .the corresponding simulation results can be found in and . summarizing this discussion ,we outline a few important open problems .recall that the above algorithms and use two simple recalculation rules therefore the first important issue is to define whether any asymptotic gain can be obtained : the second important problem is to obtain tight bounds on the decoding error probability in addition to the decoding threshold derived above .this is an open problem even for the simplest recalculations ( [ rule1 ] ) utilized in this paper , let alone other rules , such as ( [ boss ] ) or those outlined in . from the practical perspective ,recursive algorithms show substantial promise at the moderate lengths up to 256 , on which they efficiently operate at signal - to - noise ratios below 3 db .it is also interesting to extend these algorithms for low - rate subcodes of rm codes , such as the duals of the bch codes and other sequences with good auto - correlation . 
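For concreteness, the recursion underlying the algorithms discussed above can be sketched as follows: an RM(r, m) block is split through the Plotkin (u, u+v) construction, the v-part is estimated first from a symbol-wise combination of the two halves, and that estimate is reused to decode the u-part. The toy decoder below uses the product and half-sum recalculation rules that are common in this line of work; whether these coincide exactly with the rules ( [ rule1 ] ) of the paper cannot be verified from the extracted text, so it should be read as an illustrative variant rather than as the algorithms compared in the figures.

```python
import numpy as np

def rm_decode_recursive(y, r, m):
    """
    Recursive decoder for RM(r, m) acting on a real-valued received vector y
    (transmitted bits mapped 0 -> +1, 1 -> -1).  Returns a +/-1 codeword estimate.
    """
    y = np.asarray(y, dtype=float)
    if r == 0:                                   # repetition code: sign of the sum
        return np.full(len(y), 1.0 if y.sum() >= 0 else -1.0)
    if r == m:                                   # rate-1 code: symbol-wise hard decision
        return np.where(y >= 0, 1.0, -1.0)
    half = len(y) // 2
    y1, y2 = y[:half], y[half:]
    v_hat = rm_decode_recursive(y1 * y2, r - 1, m - 1)            # rule 1: product of the halves
    u_hat = rm_decode_recursive((y1 + y2 * v_hat) / 2, r, m - 1)  # rule 2: recombine using v_hat
    return np.concatenate([u_hat, u_hat * v_hat])                 # rebuild (u, u+v)

# sanity check: decode the all-zero codeword (+1,...,+1) of RM(2, 7) on a BSC
rng = np.random.default_rng(0)
n, p = 1 << 7, 0.05
received = np.where(rng.random(n) < p, -1.0, 1.0)
decoded = rm_decode_recursive(received, 2, 7)
print("decoded to all-zero word:", bool(np.all(decoded == 1.0)))
```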
in summary ,the main result of the paper is a new probabilistic technique that allows one to derive exact asymptotic thresholds of recursive algorithms .firstly , we disintegrate decoding process into a sequence of recursive steps .secondly , these dependent steps are estimated by independent events , which occur when all preceding decodings are correct .lastly , we develop a second - order analysis that defines a few weakest paths over the whole sequence of consecutive steps .i. dumer and k. shabunov , `` recursive constructions and their maximum likelihood decoding , '' _ _ proc .38__ _ allerton conf . on commun ., cont . , and comp . , _ ,monticello , il , usa , 2000 , pp .71 - 80 .r. lucas , m. bossert , and a. dammann , `` improved soft - decision decoding of reed - muller codes as generalized multiple concatenated codes , '' _ proc .itg conf . on source and channel coding , _aahen , germany , 1998 , pp .137 - 141 .n. stolte and u. sorger , `` soft - decision stack decoding of binary reed - muller codes with `` look - ahead '' technique , '' _ _ proc .7__ _ int .workshop `` algebr . and comb .coding theory '' _ , bansko , bulgaria , 2000 , pp .293 - 298 .
Recursive decoding techniques are considered for Reed-Muller (RM) codes of growing length and fixed order. An algorithm is designed that has low (nonexponential) complexity and corrects most error patterns whose weight stays below a threshold derived in the paper, provided the associated residual term is sufficiently large; this improves the asymptotic bounds known for decoding RM codes with nonexponential complexity. To evaluate decoding capability, we develop a probabilistic technique that disintegrates decoding into a sequence of recursive steps. Although dependent, subsequent outputs can be tightly evaluated under the assumption that all preceding decodings are correct. In turn, this allows us to employ second-order analysis and find the error weights for which the decoding error probability vanishes on the entire sequence of decoding steps as the code length grows. * Keywords * - decoding threshold, Plotkin construction, recursive decoding, Reed-Muller codes.
currently , one of the biggest worries of our society is the future of the climate .common belief is that our planet is heating up at an accelerated rate , caused by the rapid increase in carbon dioxide ( co ) concentration in the atmosphere , henceforth called [ co ] .this increased carbon dioxide finds its origin in human activity ; humans burn fossil fuels , thereby injecting large quantities of carbon into the troposphere by converting it into co .the co contributes to the greenhouse effect of our atmosphere and it is believed that the anthropogenic co will heat by up the planet by up to six degrees during this century ( page 45 of ipcc 2007 report ) . herewe will analyze these ideas and come up with some remarkable conclusions .for that , while the subject is the atmosphere , we do not have to go into much detail of atmospheric science .there are ( nearly philosophical ) observations one can make about climate systems , even without going into technical details .they are in the realm of signal processing and feedback theory .the model of anthropogenic global warming ( agw ) stands or falls with the idea that temperature is strongly correlated to [ co ] by the so - called greenhouse effect .serious doubt is immediately found by anybody analyzing the data .the contribution of co to the greenhouse effect can easily be estimated to be about 3.612% .the total greenhouse effect is also well known ; without our atmosphere our planet would be 32 degrees colder .this makes the co greenhouse effect only 1 kelvin in a simple analysis .we arrive at a similar value if we use statistics and do a linear regression on contemporary [ co ] and temperature data , the maximum of the effect we can thus expect in a linear model when doubling the concentration artificially by burning up fossil fuels .this is far below the global warming models even if we were to use a linear model .it is however , unlikely that the effects are linear .the system is more likely to be sublinear .that is because the greenhouse effect is governed by absorption of light which unavoidably follows the beer - lambert law : the absorption is highly sublinear ; twice as much co will not cause twice as much absorption .the classical arrhenius greenhouse law states that the forcing is logarithmic .yet , later models incorporating non - linear positive - feedback effects as proposed by many climate scientists do predict a super - linear behavior and come up with an estimate of between 1.1 and 6.4 degrees heating for the next century as caused by our carbon dioxide injection into the atmosphere .the positive feedback can come from secondary effects such as an increase in water in the atmosphere , a strong greenhouse agent , or a co-degassing of ground in the permafrost regions when these thaw .climate scientists are basing these conclusions mainly on research of the so - called finite - elements type , dividing the system in cells that interact , the same way the weather is studied .such systems are complicated , but by tuning the processes and parameters that are part of the simulations they manage to explain the actual climate data to an impressive accuracy , as evidenced by the quality of pictures presented in the official climate reports , see for example the ipcc 2007 report where simulation and reality are as good as indistinguishable , and , moreover , alarmingly , they conclude that the recent rise in temperature can only be attributed to co .but , from a philosophical point of view , the fact that the past was explained very 
accurately does not guarantee the same quality for the prediction of the future .the climate system is chaotic .small deviations in parameters and initial conditions or assumptions made in the simulations can cause huge changes in the outcome .this is easily explained in an example from electronics .if we have a chaotic circuit with , for instance , critical feedback , we can go to our spice or cadence simulator and find the parameters of our components that exactly explain the behavior of our circuit .so far so good .the problem is that if we now go back and switch on the same circuit , we will get a different result .( just take an operational amplifier with 100% positive feedback , it can saturate at the output at the positive as easily as the negative supply voltage , either one can be simulated ) .an additional problem is that even the parameters themselves are not constant and seem to change without any apparent reason , for instance the el nio phenomena in the climate .this is one of the reasons electronic engineers talk about phase margins , the zone in nyquist plots , real vs. imaginary parts of gain , that should be avoided because the circuit will become unpredictable even if it is perfectly simulatable .in fact , recent temperature data fall way out of the prediction margins of earlier models . in view of the discussion above, this does not come as a surprise . where extrapolation from the 2007 ipcc report predicted 2011 to be a year with an anomaly of close to one degree ( 0.95 is our personal estimate based on fig .2.5 of the ipcc 2007 report ) , in reality the anomaly is closer to zero .since 1998 , the hottest year in recent history , the planet has actually been cooling , something that was not foreseen by the predictions of 2007 where a continuing exponential increase in temperature was forecasted by the then generally accepted model .the scientific community is now going back to their drawing boards and fine - tunes its models to new perfection and manages to simulate the new data as well .this is a bayesian way of doing science and is significantly less reliable .the correctness of this statement is evidenced by the fact that there now apparently exist many models that explain the data up to a certain point in time ; every correction of the model that is still consistent with earlier data proves this .apparently , there are a manifold of models that can explain certain data quite satisfactorily ( but that diverge for future predictions ) . in view of this, one should be reluctant in making strong claims about the correctness of the latest model . just like in the weather , where the same simulation - evaluation techniques are used ,we can only hope to get the predictions reasonably under control after thousands of iterations between predictions and reality .each iteration takes about the amount of time as the prediction span one week with the weather , 30 years with the climate . honestly speaking , before we get it right, it ll take at least some hundreds of centuries if we uniquely use the approach of finite - elements calculations on supercomputers . in the meantime, we should not see any climate models as proven indisputable facts .a skeptic approach to any scientific model is not an illness , it is an essential ingredient in science .theories are correct until proven wrong . ideas that stand up to scrutiny are more likely to be correct than ideas one is supposed to not question . 
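The sensitivity argument above, that a chaotic system can be fitted to past data and still diverge on the next run, is the generic behaviour of any chaotic map. The snippet below uses the textbook logistic map, which is not part of the climate discussion, purely to show how a perturbation near the limit of numerical resolution produces trajectories that separate completely.

```python
def logistic_trajectory(r, x0, steps):
    """Iterate the logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

r = 3.9                                           # chaotic regime of the logistic map
base = logistic_trajectory(r, 0.2, 60)
perturbed = logistic_trajectory(r, 0.2 + 1e-12, 60)   # perturbation near double-precision resolution
for t in (10, 30, 50, 60):
    print(f"step {t:2d}: |difference| = {abs(base[t] - perturbed[t]):.3e}")
```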
still , undeniably , a strong correlation is found between the co concentrations and the temperatures as measured by gas - analysis in drillings in ice shelves , see for example the data of the pangaea project indicating that one is the function of the other for the past hundreds of thousands of years . that is a very strong point .however , proving only statistical correlation , it is not clear from these data which one comes first .are temperature variations the result of [ co ] variations , or vice versa ? while the data are consistent with the model of agw they can not serve as proof of these models .in fact , upon closer scrutiny , the temperature always seems to be _ahead _ of co variations . see figure [ fig : algore ] , where a detail of the temperature and [ co ] history as measured by ice - trapped gases is plotted , picturing the most blatant example of this effect . a simulation ( dashed line ) is also shown with an exponential - decay convolution of 15 kyr , quite adequately reproducing the results .indermhle and coworkers made a statistical analysis and find a value of 900 yr for the delay and note that `` this value is roughly in agreement with findings by fischer et al . who reported a time lag of co to the vostok temperature of ( 600 400 ) yr during early deglacial changes in the last 3 transitions glacial - interglacial'' .this is inexplicable in the framework of global warming models and we honestly start having some legitimate doubts .the apparent time lag may possibly be due to a calibration problem of the measurements , and indeed corrections have been made to the data since then , to make [ co ] variations and temperature variations coincide . while these corrections are the result of circular reasoning , where the magnitude is found by modeling the behavior of ice based on climate models and the climate models based on the ice behavior , these corrections are not even sufficient to remove our doubts . if the correlations are true and we continue to claim that temperature variations are the result of [ co ] variations , something is still not correct .the vostok data of figure [ fig : algore ] show a sensitivity of 10 degrees for 50 ppm [ co ] .contemporary [ co ] are of the order of 80 ppm rise from the preindustrial value .we are thus in for a 16-degree temperature - rise . the fact that we did not reach that level means that either co is not climate forcing , or that there is a delay between [ co ] variations ( cause ) and temperature variations ( effect ) . to get a rough idea of the magnitude of this delay , in 25 years, only 2.5% ( 0.4 of 16 degrees ) of this rise occurred .the relaxation time is thus ( 25 years)/ , which is about 1000 years .these are back - of - the - envelope calculations any real values used for the calculation could anyway be debated by anybody . yet, the outcome will always be more or less this order of magnitude .in other words , either the vostok plots should show a delay between [ co ] and of the order of 1000 years , or the carbon dioxide is not climate forcing .the data , however , show a delay of years or zero , the latter value resulting from questionable corrections . as far as we know, no correction was proposed to result in the + 1000 yr delay necessary to explain contemporary behavior . 
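The back-of-the-envelope step above (a 16-degree equilibrium response of which only about 0.4 degrees appeared in 25 years) can be written out explicitly. The numbers below are the ones quoted in the text; both the linearised estimate used there and the exact single-exponential solution are shown.

```python
import math

dT_expected = 16.0      # degrees implied by the Vostok sensitivity for the modern CO2 rise
dT_observed = 0.4       # degrees actually observed over the period considered
dt_years = 25.0

fraction = dT_observed / dT_expected                 # share of the equilibrium response realised
tau_approx = dt_years / fraction                     # linearised estimate used in the text
tau_exact = -dt_years / math.log(1.0 - fraction)     # exact single-exponential relaxation

print(f"fraction realised      : {fraction:.3f}")
print(f"tau (linear estimate)  : {tau_approx:.0f} years")
print(f"tau (exact exponential): {tau_exact:.0f} years")
```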
what is more , modern correlation figures such as given in fig .[ fig : algore ] also include methane ch ( available at noaa paleoclimatology ) and , remarkably , this methane shows the same correlation with [ co ] and .this leaves us flabbergasted .we know that methane is also ( assumed to be ) a strong climate - forcing greenhouse agent .the enigma is then , how did the information from the [ co ] variations percolate to [ ch ] variations ? was this information from [ co ] transmitted to the methane through the temperature variations ?in other words , [ ch ] variations are the _ result _ of variations , rather than their cause? then we may equally assume that [ co ] variations are the _ effect _ of variations rather than their _cause_. there are several mechanisms that may explain such an inverse phase relation , such as outgassing of co ( and ch ) from the warming oceans and thawing permafrost , the correlation between [ co ] and [ ch ] then stems from a common underlying cause .if that is the case , artificially changing the co in the atmosphere will not change the temperature of our planet , just like heating up a can of soda will liberate the gases contained therein into the atmosphere , while increasing the concentrations of gases above the can of soda will not raise its temperature .this unidirectional relation between temperature and gas concentrations is what is called henry s law ; the ratio of concentrations of gas dissolved in the liquid and mixed in the air above it in equilibrium is a parameter that depends on temperature .al - anezi and coworkers have studied this effect in more detail in a laboratory setup under various conditions of salinity and pressure , etc . for co in and above wateran increase in temperature will cause outgassing with a proportionality that is consistent with the correlation found by the historic correlations of global temperature and co in the atmosphere .also , fischer and coworkers find the delay of [ co ] relative to , as discussed above , likely caused by this ocean outgassing effects and find that at colder times , the delay is longer , which is itself consistent with arrhenius - like behavior of thermally - activated processes , such as most in nature . 
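Henry's law, as invoked here, fixes a temperature-dependent partition of CO2 between sea water and the air above it; a common way to express that temperature dependence is a van 't Hoff correction to the solubility constant. The constants below (roughly 0.034 mol/(L atm) at 298 K and a temperature coefficient of about 2400 K for CO2 in water) are handbook-style values, not numbers from the paper, so the sketch only indicates the direction and rough size of the outgassing effect.

```python
import math

KH_298 = 0.034        # mol / (L atm), CO2 solubility in water at 298 K (handbook value)
C_VANT_HOFF = 2400.0  # K, temperature coefficient for CO2 in water (handbook value)

def henry_constant(T_kelvin):
    """Temperature-dependent Henry solubility constant via the van 't Hoff relation."""
    return KH_298 * math.exp(C_VANT_HOFF * (1.0 / T_kelvin - 1.0 / 298.15))

for T in (278.15, 288.15, 298.15, 308.15):
    kh = henry_constant(T)
    print(f"T = {T - 273.15:4.1f} C  ->  k_H = {kh:.4f} mol/(L atm)  "
          "(dissolved CO2 at fixed partial pressure scales with k_H)")
```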
in the presence of an alternative explanation ,there is room for doubt in the agw ideas that increased [ co ] will cause an increased temperature .inspired by this uncertainty in the ( anthropogenic ) global warming model , we tried to see if we can find more evidence for this failure of the cause - and - effect idea .we looked at the recent historic climate data ( from just before the agw model prevalence ) and meticulously - measured [ co ] data and came to the same conclusion , as we will present here .we started with the data of a climate report from before the global warming claims .we deem these data more reliable since they were for sure not produced under the tutelage of a political committee ( ipcc ) .at least we are more convinced about the neutrality of the scientists reporting these data .moreover , the work contains all the useful data and are even available on - line ; the ideas presented here do not need recent data and thus we refrained from looking at them altogether .the authors of the work , balling and coworkers analyzed the global warming ( without capitals because it is not the name of a model ) and concluded `` our analysis reveal a statistically significant warming of approximately 0.5 over the period 1751 to 1995 .the period of most rapid warming in europe occurred between 1890 and 1950 , ... no warming was observed in the most recent half century '' .note that at the onset of the global warming ideas , no warming was observed that can be correlated to the ( accelerated ) increase of [ co ] .note also that since 1998 it has not warmed up at all , as confirmed by satellite data ( 1998 was the warmest year) , in spite of the continuing exponential increase in atmospheric co .the temperature seems to be unaffected by the anthropogenic co .balling and coworkers then went on to analyze the increase in temperature as a function of the time of the year for the data between 1851 and 1991 .they calculated for each of the twelve months the increase in temperature .they found a distribution as given in figure [ fig : year ] ( open circles ) .+ this figure based on the data of balling is again remarkable .the first thing we note is that , while there has been an average of warming , this is not spread equally over the year .in fact , summer months have become cooler . without knowing the underlying reason , this is remarkable , since [ co ]has increased in all months .there are seasonal fluctuations of the co concentrations , see the black dots which represent the monthly [ co ] fluctuations relative to the yearly average at the mauna loa site ( source : noaa , visited 2008 ) .these rapid fluctuations are mainly attributed to biological activity ( the northern hemisphere has more land and in colder times - in winter - more plants are converted into co and in warmer times - in summer - more photosynthesis takes place converting co into biomass , i.e. , [ co ] is a natural function of temperature ) .part of the fluctuations , however , are attributed to human activity ( in winter the northern hemisphere - where more people live - is cold and humans thus burn more fuel to warm their houses , i.e. , [ co ] is a function of temperature ) . 
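The comparison of the two monthly curves amounts to fitting an annual sinusoid to each twelve-point series and reading off the phase difference. A generic least-squares version of that fit is sketched below on synthetic series (the actual Balling et al. and Mauna Loa values are not reproduced here), to show how a lag of a few months is extracted.

```python
import numpy as np

def fit_annual_sine(monthly_values):
    """Least-squares fit of y ~ A sin(w t) + B cos(w t) + C to 12 monthly values; returns phase (rad)."""
    t = np.arange(12)
    w = 2 * np.pi / 12.0
    design = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones(12)])
    (A, B, _), *_ = np.linalg.lstsq(design, np.asarray(monthly_values, float), rcond=None)
    return np.arctan2(B, A)

def lag_in_months(series_leading, series_lagging):
    w = 2 * np.pi / 12.0
    return ((fit_annual_sine(series_leading) - fit_annual_sine(series_lagging)) / w) % 12

# synthetic example: the second series lags the first by 3 months (plus a little noise)
rng = np.random.default_rng(2)
t = np.arange(12)
temp_like = np.sin(2 * np.pi * t / 12) + 0.05 * rng.standard_normal(12)
co2_like = np.sin(2 * np.pi * (t - 3) / 12) + 0.05 * rng.standard_normal(12)
print(f"estimated lag: {lag_in_months(temp_like, co2_like):.2f} months")
```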
as a side note , these two things show us that it is very straightforward to understand how [ co ] can be a function of temperature , in these cases through biological activity , including that of humans , in this case resulting in a rapid inverse proportionality ( warmer less co ) .other , long - term processes such as degassing of oceans can have opposite effects , i.e. , warmer more co .while we bear this in mind , we will continue the reasoning of anthropogenic global warming and assume an opposite correlation , that is , temperature is a function of [ co ] , and analyze the oscillations .we will show that this assumption is inconsistent with the data .while the natural oscillations have always existed and thus do nt result in seasonal oscillations of global warming , the human - caused fluctuations should be represented in the temperature fluctuations .what we would expect in the framework of agw is that all months have warmed up ( because of general injection of anthropogenic co into the atmosphere ) , but winter months a little bit more ( because of seasonal fluctuations of these injections ) . as a response to the sinusoidal [ co ] oscillations ,a sinusoidal oscillation in temperature is to be expected that is i ) offset vertically by an amount to make it fully above the zero line , ii ) offset ( delayed in time ) by a time that can be up to 3 months maximum , as will be discussed here .neither is the case .+ + + comparing the monthly fluctuations in temperature increase with monthly fluctuations in [ co ] we see again that the latter lags behind , this time by about 3 months ( to be precise , fitting sine curves to the data give a difference of 2.9 months ) .one might think that the temperature lags behind 9 months after all , months are periodic but upon second thought , this is not possible .this is best explained in a relaxation model .electronic engineers model things with electronic circuits and this case of temperature and co is also very adequately studied by such circuits . using an equivalent electronic circuit does not mean that the processes are electronic , but that they can be modeled by such circuits , as in an analog computer .( the appendix gives the mathematical link between a relaxation model and the equivalent electronic circuit ) . in this casewe have a model between driving force ( either [ co ] , as we are wo nt to believe , or temperature ) and the response ( respectively , or [ co ] ) .for instance , an increase in [ co ] will cause an increase in by the greenhouse effects .this is necessarily a simple relaxation system , where the changes of the force cause the system to be off - equilibrium until a new equilibrium is reached .this restoring of the equilibrium comes with a certain relaxation time .the reasons for relaxation can be various .for instance , co has to diffuse to places where it can do its temperature effect .there can even be more than a single relaxation process , and instead be a complicated multi - relaxation process comparable to multi - stage nuclear decay .the fact is that one of the relaxation times is dominant , and we can describe the relaxation by a single relaxation time ( that is the sum of all relaxation times ) . 
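The equivalent circuit described next is a first-order relaxation: the response obeys tau * dV_out/dt = V_in - V_out with tau = RC. A short numerical integration of this equation with a sinusoidal drive, sketched below with no further assumptions, reproduces the two properties used in the argument: the output amplitude shrinks and its phase lags as the relaxation time grows, but never by more than a quarter period.

```python
import numpy as np

def rc_response(tau_years, period_years=1.0, cycles=60, steps_per_period=2000):
    """Integrate tau * dVout/dt = Vin - Vout for a unit sinusoidal drive; return amplitude and lag."""
    dt = period_years / steps_per_period
    n = cycles * steps_per_period
    t = np.arange(n) * dt
    vin = np.sin(2 * np.pi * t / period_years)
    vout = np.zeros(n)
    for i in range(1, n):                              # simple explicit Euler step
        vout[i] = vout[i - 1] + dt * (vin[i - 1] - vout[i - 1]) / tau_years
    tail = slice(-steps_per_period, None)              # last cycle, after the transient has died out
    amp = vout[tail].max()
    lag = (t[tail][np.argmax(vout[tail])] - t[tail][np.argmax(vin[tail])]) % period_years
    return amp, lag

for tau in (0.05, 0.1, 1.0, 10.0):
    amp, lag = rc_response(tau)
    print(f"tau = {tau:5.2f} yr:  amplitude = {amp:.3f},  lag = {12 * lag:.2f} months")
```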
as long as there is no resonance in the system ( something that can only be achieved with positive feedback ) it will behave as described here .we will model our climate system with a simple electronic relaxation system consisting of a resistance and a capacitance , and respectively , see figure [ fig : relax ] .the product of the two yields the relaxation time , . at the entrance of this systemwe connect our oscillating driving voltage ( representing , for example , [ co ] oscillations ) , in which is time .the response is measured as the charge in the capacitor which represents for instance the temperature variations .this charge is also measured by the output voltage by the standard capacitor relation .thus our output voltage represents the response ( for example temperature ) . applying a sinusoidal input signal , [ co] ( with the frequency of oscillation ) we get a sinusoidal wave at the output , with the same frequency , but with a phase at the output that is not equal to the phase at the input signal , .the phase difference is directly and uniquely determined by the relaxation time of the system and the oscillation frequency , see figure [ fig : relax ] . for very low oscillating frequencies , the system can easily relax and the phase of the output signal is equal to that of the input signal . for increased frequencies or for increased relaxation times the system has difficulty accompanying the driving force .the amplitude at the output drops and starts lagging behind the input .the maximum phase difference for infinite frequencies or infinite relaxation time is exactly one - quarter period . in our caseour oscillating period is one year .one quarter period is thus 3 months and that is the maximum delay we can expect between driving force and response . for relaxation times much longer than the oscillating period of one year , that is the delay one expects .the delay time provides information about the system . as an example, the comparable system of solar radiation and temperature comparable in that the oscillating period is one year and both deal with the weather and climate has a delay of one month ; the solar radiation and temperature oscillate with one year period , but the warmest day is nearly everywhere one month after the day with the most daylight and the on average coldest day is one month after the day with least daylight . in figure[ fig : relax](bottom ) we see that the relaxation time of the \{radiation temperature } system therefore must be about 0.1 year ( 1.2 months ) . in the plotthis is indicated with a dot .we can get a similar estimation value of the relaxation time of the atmosphere temperature through daily oscillations . as a rough figure , the temperature drops by about 4 degrees at night in about 8 hours after the sun has set .assume that the relaxation upon this step - like solar radiation is a simple exponential ( situation b shown in the appendix ) and would finish eventually at close to absolute zero ( say 10 kelvin ) , and starts at 290 k , 4 degrees in 8 hours , we solve the equation which yields 23 days , similar to the value found above from yearly oscillations .going back to the data of [ co ] and temperature ( fig . 
[fig : year ] ) we can now understand the behavior , that is , the phase difference .but only if we assume the temperature to be the driving force .for instance : for some reason the temperature has increased more in winter months , and , as a result , to the natural [ co ] oscillations has been added a component with a maximum in spring months .the alternative , [ co ] being the driving force and a delay of 9 months ( 3 quarter periods ) is mathematically not possible .another explanation , which we do not consider a valid alternative , the temperature might be lagging behind [ co ] if it has a negative gain , i.e. , [ co ] increments lower the temperature .this negative sign of the gain would add another 180 phase shift and a total apparent phase shift of 270 would be possible .this goes even more against agw models and we do not see an easy physical explanation how co might lower the temperature .this simple analysis opposes the hypothesis that [ co ] is causing serious temperature rises .as said , the model assumes that no resonance occurs that can possibly cause longer delay times .this , in our opinion , is a valid assumption since resonance is not likely .first of all , for this strong positive feedback effects would be needed and they are not likely .although many climate scientists have proposed positive feedback as discussed in the introduction and they make heavy use of them in order to explain and model the needed non - linear behavior of the greenhouse effect , this goes against intuition . in a chaotic systemthese feedback factors are then extremely critical .scientists of any plumage , when making such simulations , know this ; if they change their parameters just slightly ( sometimes even in the scale of the numerical resolution of their floating point numbers ) , the outcome can be hugely different .there is also an experimental argument against positive feedback factors , namely the conscientious satellite measurements , see for instance the work of lindzen and choi , roy spencer , or wielicki et al. .these , in fact , prove a _ negative _ feedback in the climate system . without feedback , in standard theory ,if the earth warms up ( by global warming in a radiation imbalance ) , the temperature rises and the outward earth radiation increases by a certain amount , until establishing a new equilibrium . in the agw model ,a positive feedback is used of the form : if the temperature increases , the outward earth radiation is less than that predicted by standard theory or the incoming solar radiation increases because of reasons like cloud ( non)forming , thus increasing the temperature even further .the contrary can also happen : in negative feedback , if the planet heats up by a radiation imbalance for whatever reason , new channels of earth radiation can be opened or incoming solar radiation blocked ( for instance , by increased cloud cover ) , thus reducing the temperature with respect to standard theory . 
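The two relaxation-time estimates invoked in this discussion (roughly 0.1 year from the one-month lag between peak insolation and peak temperature, and roughly 23 days from the 4-degree overnight cooling with the 290 K / 10 K endpoints used in the text) follow from the same first-order model and can be checked in a few lines.

```python
import math

# (1) seasonal lag: the phase lag of a first-order system is arctan(w * tau) / w
period = 1.0                        # years
w = 2 * math.pi / period
lag = 1.0 / 12.0                    # one month, in years
tau_seasonal = math.tan(w * lag) / w
print(f"tau from 1-month seasonal lag : {tau_seasonal:.3f} yr (= {12 * tau_seasonal:.1f} months)")

# (2) overnight cooling: T(t) = T_inf + (T0 - T_inf) exp(-t/tau), endpoints as used in the text
T0, T_inf = 290.0, 10.0             # kelvin
drop, hours = 4.0, 8.0
tau_hours = hours / math.log((T0 - T_inf) / (T0 - drop - T_inf))
print(f"tau from overnight cooling    : {tau_hours:.0f} h (= {tau_hours / 24:.0f} days)")
```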
as demonstrated by the scientists mentioned above ,the earth climate is a negative - feedback auto - stabilizing system , without going into detail what these feedbacks are .this is also in agreement with the fact that , whereas the conditions on our planet have significantly changed over the geological history ( the sun for instance has been 25% less bright than today ) , the climate has been rather stable , always restoring from climate perturbations to median values instead of saturating in extreme values ; the latter one would expect in a thermal - runaway positive - feedback climate system .note that , if large positive feedback exists , the temperature is unstable and will change until it saturates , that is until negative feedback becomes important .in other words , it is technically not even possible that we are in a positive - feedback situation , considering the stable temperatures .( compare this to the positive - feedback of a shop - a - holic buying always makes him buy even more his funds are acceleratingly depleted or his credit increasingly rising , until the banks put a lid on his spending , i.e. , negative feedback ) .we _ must _ be in a negative - feedback situation and lindzen and choi , spencer , and wielicki , et al ., have proven this by measurements .negative feedback was already argued to be significant when the consensus of the scientists was for a global cooling , see the work of idso .additional arguments against positive feedback come from the fact that every day , and every year the temperature system is brought off equilibrium . at nightit cools down , in the daytime it warms up . in the winterit cools down and in summer it warms up .these temperature disturbances are much larger and much faster than those that may have been produced by greenhouse gases ( 20 degrees / day or 30 degrees / year vs 0.7 degrees/100 years ) . the same accounts for co disturbances .the human - caused co is insignificant compared to the large and noisy emissions naturally occurring on this planet ( only the accumulated effect of the tiny human - originated co is supposed to have an effect ) .to give an idea , segalstad and coworkers established that of the current rise in [ co ] levels relative to the preindustrial level , only 12 ppm is attributable to human activity while 68 ppm is attributed to natural phenomena .these fluctuations are also visible in the extensive summary of beck and show that even in recent history the [ co ] levels were sometimes higher than the modern values , while as everyone knows , the human emissions have monotonously increased , showing that these huge fluctuations can only have a natural origin .relevant for the discussion here , the fluctuations would rapidly push the climate off equilibrium if it were unstable . yet , in spite of these huge disturbances , both in temperature and co , the equilibrium is restored every day and every year and every century . 
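The stability argument can be caricatured with a minimal linear feedback iteration: an anomaly fed back with loop gain below one in magnitude dies out, while a gain above one grows without bound until something else saturates it. This is only a schematic illustration of the point being made here, not a model of the climate system.

```python
def iterate_feedback(gain, steps=40, disturbance=1.0):
    """Discrete toy model: each step the remaining anomaly is fed back with the given gain."""
    x = disturbance                      # initial perturbation (e.g. an anomalously warm half-year)
    for _ in range(steps):
        x = gain * x                     # feedback acts on whatever anomaly is left
    return x

for g in (-0.5, 0.5, 0.9, 1.1):
    tail = iterate_feedback(g)
    regime = "stable (anomaly dies out)" if abs(g) < 1 else "runaway (anomaly grows)"
    print(f"loop gain {g:+.1f}: anomaly after 40 steps = {tail:9.3e}  -> {regime}")
```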
had the earth climate been a positive - feedback system , in summer or in winter the temperature would have been in a runaway situation , unrecoverable in the following compensating half - period .apparently the system can recover very easily and repeatedly from such huge disturbances .the reason is that the climate is a negative - feedback system that stabilizes itself .this is an unavoidable conclusion .one might think that the seasonal fluctuations are too fast to be causing a runaway scenario and that before the system runs away it already recovers .that is a misapprehension ; changes can not be too fast .if the system is unstable , it is unstable .if starting oscillations are much faster than the response time of the system , the effective amplitude is reduced , but in a runaway system they will be amplified up to the point of saturation .the system can only be stable if the feedback factor at that specific frequency is not positive .look at it like this : in the first half of the year , it is hot and the system tries to runaway . in the second half of the year it is colder and it will restore , but it has a minute memory that the temperature has already run off a little in the first half and the second half therefore does not compensate completely . in the first yearwe remain with a tiny temperature offset .once this offset is introduced , the system will runaway .of course , it can runway in both directions .chance will determine which one , but if the system is unstable ( positive feedback ) , the system will runaway . like the metastable system of a ball placed on top of a hill .it can only stay there in the absence of noise or any fluctuation in general . in conclusion, only negative feedback makes sense .relevant to the current work , such negative feedback will make any delay longer than 1/4 period impossible .thus , the fact that we find a delay close to a quarter period means that i ) the temperature signal is the origin for [ co ] signal ( or the two are uncorrelated ) and ii ) the relaxation time linking the two is ( much ) longer than the period ( 12 months ) of oscillation .moreover , even if positive feedback were present , for the resonance itself to be significant , the oscillating frequency needs to be close to the resonance frequency , i.e. , 12 months .it is highly unlikely that the natural frequency of the climate-[co ] system is close to the 12-months - periodic driving force .even more so , since also the long - term ice - drilling data need to be explained somehow , where delays of several thousands of years are observed . in our analysis ,relaxation times of several thousands of years will explain both the ice - drilling data , as well as the yearly temperature and [ co ] oscillations . finally , the set of data we used is rather limited .we only used data presented by balling , et al ., that ends at the end of the 20th century .moreover , they only have data from the northern hemisphere .future research should tell if the ideas presented here can stand up to scrutiny when more recent data and pan - global data are used . as a note of proof , humlum et al. 
,have recently investigated correlation between temperature and [ co ] variations on the time scale of decades , similarly concluding that [ co ] changes are delayed in relation to temperature , and can therefore not be the reason for temperature changes .in conclusion , the idea tested here that [ co ] is the _ cause _ of temperature changes does not pass our signal analysis .it goes a little too far to say that this what we present here is proof for the opposite , namely that [ co ] is the _ effect _ of temperature , but our analysis does not contradict this .future will tell if such an hypothesis may be postulated with some confidence .acknowledgements : this research was paid by no grant .it received no funding whatsoever , apart from our salaries at the university where we work . nor are we members of any climate committees ( political or other ) or are we linked to companies or ngos , financially or otherwise .this is an independent opinion that does not necessarily represent the opinion of our university or of our government .in simple relaxation models the ( negative ) change of a quantity is proportional to the magnitude of the remaining quantity .simple examples are nuclear decay , in which the change of number of atoms at a certain time is given by , or the velocity of an object under friction is given by .( and positive constants ) . from experience , and by solving the differential equation , we know that such systems show exponential decay , and respectively .now , we can take a function that is the driving force of another quantity , the response function , respectively the cause and the effect .we can decompose the function into an integral of dirac - delta functions .the response to each delta function is given by the function .assuming linearity , the total response is then a convolution where the heaviside function ( for and 0 otherwise ) was used to force the causality ; the response can only come after the driving force .( note that non - linearities will not change the sign of these calculations , i.e. , a delay can not become an advance . ) for instance , if the response function is an exponential decay , as mentioned above , substituting a delta - function at for the driving force will reproduce the exponential decay : ){\updelta}(s){\rm d}s\\ & = & g_0\exp(-{\upalpha}t)u(t)\end{aligned}\ ] ] in other words , the response to a spike , a delta function at is an exponential decay with an amplitude , and time constant .the response to a heaviside ( step)function is then given by /{\upalpha}\end{aligned}\ ] ] more interesting more relevant for our work is the case of a sinusoidal driving force .this can now easily be calculated by substituting the driving - force function into eq .[ eq : conv ] : {\rm d}s\\ \nonumber & = & \frac{f_0g_0}{{\upalpha}^2+{\upomega}^2 } \left [ { \upalpha}\sin({\upomega}t ) - { \upomega}\cos({\upomega}t)\right]\\ & = & \frac{f_0g_0}{\sqrt{{\upalpha}^2+{\upomega}^2 } } \sin\left({\upomega}t - \tan^{-1}[{\upomega}/{\upalpha}]\right)\end{aligned}\ ] ] ( for the second step in eq .[ eq : grad ] gradshteyn and ryshik was used ) .figure [ fig : apndx ] shows these three cases of driving forces and response functions . figure [ fig : algore ] shows a simulation with the driving function equal to the measured temperature and a delay of ( ) = 15 kyr , which results in a quite good representation of the [ co]curve . 
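The closed-form result of the appendix, namely that a sinusoidal drive convolved with an exponential-decay response comes out attenuated and delayed by arctan(omega/alpha)/omega (at most a quarter period), can be verified by evaluating the convolution of eq. [ eq : conv ] numerically. The parameter values below are arbitrary and only serve to check the analytic phase.

```python
import numpy as np

alpha, omega = 0.3, 1.0            # relaxation rate and driving frequency (arbitrary units)
dt, t_max = 0.02, 150.0
t = np.arange(0.0, t_max, dt)

drive = np.sin(omega * t)                               # f(t) = f0 sin(w t), with f0 = 1
kernel = np.exp(-alpha * t)                             # g(t) = g0 exp(-alpha t) u(t), with g0 = 1
response = np.convolve(drive, kernel)[: len(t)] * dt    # causal convolution on a discrete grid

period = 2 * np.pi / omega
last = t >= t_max - period                              # restrict to the final period (steady state)
lag_numeric = (t[last][np.argmax(response[last])] - t[last][np.argmax(drive[last])]) % period
lag_analytic = np.arctan(omega / alpha) / omega         # phase delay predicted in the appendix

print(f"numerical lag : {lag_numeric:.3f}")
print(f"analytic  lag : {lag_analytic:.3f}   (maximum possible = quarter period = {period / 4:.3f})")
```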
an electronic circuit such as presented here has these properties of exponential response to a heaviside function and linearity and the response of equation [ eq : grad ] .for this reason , such ( virtual ) circuits are widely used in simulations of phenomena including phenomena far away from electronics .the interesting and relevant conclusion of eq .[ eq : grad ] is that the maximum phase shift is 90 and this occurs for frequencies that are much higher than the relaxation speed , .h. fischer , m. wahlen , j. smith , d. mastroianni , and b. deck , 1999 : ice core records of atmospheric co2 around the last three glacial terminations , science 283 , 17121714 .( doi : 10.1126/science.283.5408.1712 ) .% noindent r. c. balling , r. s. vose , g .-weber , 1998 .analysis of long - term european records : 1751 - 1995 .climate research 10 , 193 - 200 . also available on - line at http://www.int-res.com/articles/cr/10/c010p193.pdf ( visited sept . 2012 ) .r. spencer , 2008 .satellite and climate model evidence against substantial manmade climate change ._ journal of climate _ , submitted .available at : http://www.drroyspencer.com/research-articles/satellite-and-climate-model-evidence/ ( visited sept .2012 ) .b. a. wielicki , t. wong , r. p. allan , a. slingo , j. t. kiehl , b. j. soden , c. t. gordon , a. j. miller , s .- k .yang , d. a. randall , f. robertson , j. susskind , h. jacobowitz , 2002 .evidence for large decadal variability in the tropical mean radiative energy budget .science 295 , 841 - 843 .
The primary ingredient of the anthropogenic global warming hypothesis is the assumption that atmospheric carbon dioxide variations are the cause of temperature variations. In this paper we discuss this assumption and analyze it on the basis of bi-centenary measurements and a relaxation model which causes phase shifts and delays. Note: this paper was (and is) submitted to various journals and received no scientific criticism whatsoever; it is being rejected principally on reasons of format or for having an alleged political agenda. Scientific comments can be sent to our contact addresses mentioned above. Key words: global warming, cause and effect, relaxation model
entanglement and nonlocality are considered to be significant resources that quantum mechanics offers.they have a ubiquitous role in information processing tasks and foundational principles , whenever one has to certify quantum advantage over conventional classical procedures .entanglement is a physical phenomenon in which many particles interact in a manner that the description of each particle separately does not suffice to describe the composite system .a resource of this kind can be used to demonstrate one of the strongest form of non classical feature i.e non - locality where the statistics generated from each subsystem can not be reproduced by any local realistic theory analogous to classical physics .bell nonlocal correlation along with entanglement are found to be key resources for many information processing tasks such as teleportation , dense coding , randomness certification , key distribution , dimension witness , bayesian game theoretic applications .+ however , the question of identifying an entangled state remains one of the most involved problems in quantum information . commonly phrased as the , it has been shown to be np hard . in lower dimensions viz .( and ) there is an elegant necessary and sufficient criterion criterion to identify entangled states .negative partial transpose of a quantum state is considered to be a signature of entanglement whereas states having positive partial transpose(ppt ) are separable .the solution in higher dimensions lacks a bi - directional logic to certify a state to be entangled , more so with the presence of ppt entangled states .nevertheless , an extremely useful operational criteria to detect entanglement is provided through entanglement witnesses(ew).an outcome of the well - known hahn - banach theorem in functional analysis , entanglement witnesses are hermitian operators having at least one negative eigenvalue which satisfy the inequalities ( i ) separable states and ( ii ) for at at least one entangled state .the geometric form of the theorem states that points lying outside a convex and closed set can be efficiently separated from the set by a hyperplane .the completeness of this existence guarantees that whenever a state is entangled there is a ew to detect it . on the virtue of being hermitian, entanglement witnesses have proved their efficacy in experimental detection of entanglement .the notion of this separability axiom has been extended to identify useful resources for teleportation using teleportation witnesses .an elegant procedure to capture non - locality is through a bell - chsh witness .this bell - chsh witness is a translation of an ew to detect states which violate the bell - chsh inequality . 
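For the low-dimensional cases mentioned above, the Peres-Horodecki test is directly computable: take the partial transpose of the density matrix and look for a negative eigenvalue. A minimal sketch, using the singlet mixed with white noise (the Werner state, entangled exactly for mixing parameter above 1/3) as the test state; the helper names are ours, not from the paper.

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Partial transpose over the second subsystem of a bipartite density matrix."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_npt(rho):
    """Negative partial transpose => entangled (necessary and sufficient for 2x2 and 2x3)."""
    return np.min(np.linalg.eigvalsh(partial_transpose(rho))) < -1e-12

# Werner state: p |psi-><psi-| + (1 - p) I/4
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
proj = np.outer(psi_minus, psi_minus)
for p in (0.2, 0.4, 0.8):
    rho = p * proj + (1 - p) * np.eye(4) / 4
    print(f"p = {p}: NPT (entangled) = {is_npt(rho)}")    # entangled exactly for p > 1/3
```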
on a different note , for two qubits , all quantum states do not violate the bell - chsh inequality iff , where is defined as the sum of the two largest eigenvalues of the matrix , being the correlation matrix in the hilbert - schmidt representation of .thus , is a signature of the non - locality of the state .+ the prominent role of non - local states are highlighted through non - local games and randomness certification .randomness plays a key role in many information theoretic tasks .it has already been shown to be an important resource for quantum key distribution and cryptography .so an interesting question could be whether one can classify the states which are helpful to certify randomness , which has been answered affirmatively in recent times .now the question arises whether this class of resources can be expanded in a scenario where prior to the task all the subsystems are subjected to a global unitary operation . in this paperwe deal with this question by characterizing the class of local states which can never be _ bell chsh - nonlocal _ under any possible global unitary operation termed as absolutely _ bell chsh - local _ states .as one can see that this class of states can never be useful for randomness certification task .we have further shown that for systems with hilbert space these states form a convex and compact set .this implies the existence of a hermitian operator which can detect such non - absolutely _ bell chsh - local _ states , a potential resource in the modified scenario .+ in the following section ( sec.[motivate ] ) we first outline the need for an operator to identify non - absolutely _ bell chsh - local _ states and its importance for a number of information theoretic tasks .parallely , the question of existence of a set containing absolutely _ bell chsh - local _ states has been addressed in relation to problems of an analogous type . in sec.[def ] we introduce the relevant notations and definitions to prepare a mathematical formulation . in sec.[proof ] we present the proof of the existence and a definite scheme of constructing such operators and illustrate with examples of absolutely _ bell chsh - local _ states in sec.[examples ] .finally we conclude in sec.[discuss ] .( b ) . now the unitary - evolved system is used to play a non - local game ( c ) . ] pertaining to separability of quantum states , questions have been raised on the characterization of absolutely separable and absolutely ppt states .precisely , a quantum state which is entangled(respectively ppt ) in some basis might not be entangled(resp .ppt ) in some other basis .this depends on the factorizability of the underlying hilbert space .thus , the characterization of states which remain separable(resp .ppt ) under any factorization of the basis is pertinent .literature already contains results in this direction . precisely for two qubits ,a state is absolutely separable iff its eigenvalues(arranged in descending order ) satisfy . in a different perspective ,violation of bell - chsh inequalities exhibits non - local manifestations of a quantum state . a state which violates the bell - chsh inequalityis considered non - local . 
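The criterion quoted above, that a two-qubit state admits no Bell-CHSH violation iff the sum of the two largest eigenvalues of T^T T is at most 1 (T being the correlation matrix in the Pauli, i.e. Hilbert-Schmidt, representation), is straightforward to evaluate numerically. A hedged sketch, with helper names chosen here for illustration:

```python
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def correlation_matrix(rho):
    """T_ij = Tr[rho (sigma_i x sigma_j)] for a two-qubit density matrix."""
    return np.array([[np.real(np.trace(rho @ np.kron(si, sj))) for sj in PAULI] for si in PAULI])

def chsh_measure(rho):
    """M(rho): sum of the two largest eigenvalues of T^T T.  CHSH is violated iff M > 1."""
    T = correlation_matrix(rho)
    eigs = np.sort(np.linalg.eigvalsh(T.T @ T))
    return eigs[-1] + eigs[-2]

# example: a maximally entangled state gives M = 2, i.e. the maximal CHSH value 2*sqrt(2)
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(phi_plus, phi_plus)
M = chsh_measure(rho_bell)
print(f"M = {M:.3f}, maximal CHSH value = {2 * np.sqrt(M):.3f}")
```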
however , the violation of such an inequality invariably depends on the factorization of the underlying hilbert space and in this regard the study on states which do not violate the bell - chsh inequality under any factorization assumes significance .this study forms the main context of our present work.while intuition permits one to state that states which are absolutely separable will be eligible candidates under this classification , it is interesting to probe the existence of entangled states which come under this category .our results underscore the existence of states outside the absolute separable class , which do not violate bell - chsh inequality under any global unitary operation .global unitary operations play the anchor role here as they transform states to different basis .+ in standard bell - scenario the subsystems are allowed to share classical variables prior to the game and perform local operations only . for our purposewe consider a modified bell - scenario ( depicted in fig.[belldiag ] ) in which two parties are allowed to perform a global unitary prior to the collection of statistics from the joint - system .one can easily note that by performing a cnot operation on the initial state ( which is bell - chsh local ) , it can be transformed to which has a maximum bell - chsh violation of .this clearly shows that the set of resources for this modified bell - chsh scenario can be expanded to a plausible extent .+ now the question is how far can this extension is possible .contrary to the common intuition there exists states which do not violate bell - chsh inequality under any global unitary operation. then one can depict the set of resources in this modified bell - scenario as the green and yellow regions in a schematic diagram as shown in fig.[sets ] . to deal with this not - so - obvious scenariowe prepare the mathematical framework hereafter beginning with a description relevant notations and definitions in the following section .we begin with some notations and definitions needed for our analysis. denotes the set of bounded linear operators acting on x. the density matrices that we consider here , are operators acting on two qubits , i.e. , . denotes the set of all density matrices .we denote by , the set of all states which do not violate the bell - chsh inequality , i.e. , .we denote by as the set containing states which do not violate the bell - chsh inequality under any global unitary operation ( ) i.e. .one can easily see that forms a non - empty subset of , as .a schematic diagram of the sets has been shown in fig.[sets ] . .the set is characterized by the existence of the bell - chsh witness .however , we give a formal characterization below : + is a convex and compact subset of .first note that the statements below are equivalent : + ( i ) + ( ii ) bell - chsh operator + ( iii ) bell - chsh witness + in view of the above , we can rewrite as , .now consider a function , defined as where , is a fixed bell - chsh witness .let . will have a maximum value ( say ) .therefore , one may write ] for any , ] for any .thus is convex . since is compact , being a closed subset of , is thus compact .hence the lemma .this theorem facilitates the characterization of the set as stated in the theorem below : + is a convex and compact subset of .we only show that is convex as the compactness follows from a retrace of the steps presented in .+ take two arbitrary .one may rewrite \geq 0 , \forall b_{chsh}^{w } , \forall u \rbrace ] .this follows , since is convex .[ . 
+as noted earlier , one may see the compactness with a re - run of the steps in .hence , the theorem .the above characterization enables to formally define an operator( ) which detects states that violate bell - chsh inequality under global unitary . consider .there exists a unitary operator such that violates bell - chsh inequality .consider a bell - chsh witness that detects , i.e. , .using the cyclic property of the trace , one obtains .we thus claim that is our desired operator .to see that it satisfies inequality ( [ ineq1 ] ) , we consider its action on a state from .we have . as ,and is a bell - chsh witness .this implies that has a non - negative expectation value on all states .while the above theorem has provided a tool to identify states which can augment non - local resources under global unitary , it has also highlighted the existence of a set which contains states from which no non - local resource ( in terms of the bell - chsh inequality ) can be generated .it is therefore important to look for certain states which can belong to the absolutely bell - chsh local set .it is evident that , any separable state obviously belongs to . from the definition of absolutely separable states ,i.e. , being any global unitary operation , it is clear that after the operation of the global unitary the state remains in , i.e. all the absolutely separable states are absolutely bell - chsh local states .consider the bell - diagonal states , i.e. , where are the usual bell states . if now one imposes the dual conditions , it is easy to see that this state is not only separable but also belongs to and hence .werner states , where being singlet state , in are absolutely separable for , as a result it is also absolutely local here .it can now be asked whether there exist states which are not separable but belong to .it is well known that is entangled but does not violate bell - chsh for ] are absolutely bell - chsh local . + hence if one considers the state having weights diagonal in the computational basis is a separable state but not absolutely separable for and it belongs to $ ] .it is evident from above that the state is in for .in standard bell - scenario the free resources are local operation and shared randomness . herewe have considered an modified scenario where prior to the non - local game the subsystems are allowed to undergo a global unitary evolution .contrary to common intuition all quantum states can not be made to violate bell - chsh inequality even in this modified scenario . 
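the werner family discussed above can be scanned numerically: with rho_W(p) = p |psi^-><psi^-| + (1-p) I/4 (our parametrization), the state is entangled for p > 1/3 (ppt criterion) yet gives m(rho_W) = 2 p^2, so chsh is not violated for p <= 1/sqrt(2). the sketch reuses chsh_M from the first block; ppt_min_eig is our helper name.

import numpy as np
# reuses chsh_M from the first sketch

psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
P_minus = np.outer(psi_minus, psi_minus.conj())

def werner(p):
    return p * P_minus + (1 - p) * np.eye(4) / 4

def ppt_min_eig(rho):
    """smallest eigenvalue of the partial transpose; negative iff entangled (2x2 case)."""
    r = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.min(np.linalg.eigvalsh(r))

for p in (0.3, 0.5, 0.75):
    print(p, ppt_min_eig(werner(p)) < -1e-12, chsh_M(werner(p)) > 1)
# p = 0.30: separable, no CHSH violation
# p = 0.50: entangled, no CHSH violation
# p = 0.75: entangled, CHSH violation (0.75 > 1/sqrt(2))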
in this work we have shown that for two qubit systems these useless states which we call absolutely _ bell chsh - local _ , form a convex and compact set implying the existence of a hermitian operator which can detect non - absolutely _ bell chsh - local _ states , a potential resource in the modified bell - scenario .we also present a characterization of absolutely _ bell chsh - local_ states for a number of generic class of states .this analysis of bell non - locality presents a new paradigm for asking a number of important questions .firstly , one could seek for a generic characterization of absolutely _ bell chsh - local _ states even for two qubits .secondly , the question remains whether one could demonstrate the existence of non - absolutely _ bell - local _ witness operators for higher dimensional systems in different bell - scenarios and subsequently characterizing the set of absolutely _ bell - local _states for such systems .* acknowledgment : * we would like to gratefully acknowledge fruitful discussions with prof .guruprasad kar .we also thank tamal guha and mir alimuddin for useful discussions .am acknowledges support from the csir project 09/093(0148)/2012-emr - i . c. h. bennett , g. brassard , c. crpeau , r. jozsa , a. peres , and w. k. wootters , `` teleporting an unknown quantum state via dual classical and einstein - podolsky - rosen channels '' , http://dx.doi.org/10.1103/physrevlett.70.1895[phys . rev. lett . * 70 * , 1895 ( 1993 ) ] .s. pironio , a. acn , s. massar , a. boyer de la giroday , d. n. matsukevich , p. maunz , s. olmschenk , d. hayes , l. luo , t. a. manning and c. monroe , `` random numbers certified by bell s theorem '' , http://www.nature.com/nature/journal/v464/n7291/full/nature09008.html[nature * 464 * , 1021 - 1024 ] .r. colbeck and r. renner , free randomness can be amplified " , http://www.nature.com/nphys/journal/v8/n6/full/nphys2300.html[nat .phys.*8 * , 450 ( 2012 ) ] ; a. chaturvedi and m. banik , measurement - device independent randomness from local entangled states " , http://iopscience.iop.org/article/10.1209/ 0295 - 5075/112/30003/meta;jsessionid = d6c96abb3e61 c42c542a9553e8a4f4dc.c3.iopscience.cld.iop.org[epl * 112 * , 30003 ( 2015 ) ] .j. barrett , l. hardy , and a. kent , no signaling and quantum key distribution " , http://dx.doi.org/10.1103/physrevlett.95.010503[phys .lett . * 95 * , 010503 ( 2005 ) ] ; a. acn , n. gisin , and l. masanes , from bells theorem to secure quantum key distribution " , http://dx.doi.org/10.1103/physrevlett.97.120405[phys .lett . * 97 * , 120405 ( 2006 ) ] ; n. brunner , s. pironio , a. acn , n. gisin , a. a. methot , and v. scarani , testing the dimension of hilbert spaces " , http://dx.doi.org/10.1103/physrevlett.100.210503[phys . rev. lett . * 100 * , 210503 ( 2008 ) ] ; r. gallego , n. brunner , c. hadley , and a. acn , device independent tests of classical and quantum dimensions " , http://dx.doi.org/10.1103/physrevlett.105.230501[phys .lett . * 105 * , 230501 ( 2010 ) ] ; s. das , m. banik , a. rai , md r. gazi , and s.kunkri , hardy s nonlocality argument as a witness for postquantum correlations " , https://journals.aps.org/pra/abstract/10.1103/physreva.87.012112[phys .a * 87 * , 012112 ( 2013 ) ] ; a. mukherjee , a. roy , s. s. bhattacharya , s. das , md .r. gazi , and m. banik , hardy s test as a device - independent dimension witness " , https://journals.aps.org/pra/abstract/10.1103/physreva.92.022302[phys .a * 92 * , 022302 ( 2015 ) ] ; n. brunner and n. 
linden , connection between bell nonlocality and bayesian game theory `` , http://www.nature.com/ncomms/2013/130703/ncomms3057/full/ncomms3057.html#references[nature communications * 4 * , 2057 ( 2013 ) ] .et al . _ ' ' nonlocality and conflicting interest games `` , http://journals.aps.org/prl/abstract/10.1103/physrevlett.114.020401[phys . rev .lett . * 114 * , 020401 ( 2015 ) ] .a. roy , a. mukherjee , t. guha , s. ghosh , s. s. bhattacharya , m. banik , `` nonlocal correlations : fair and unfair strategies in bayesian game '' , http://arxiv.org/abs/1601.02349[arxiv : 1601.02349 ] .l.gurvits , _ proceedings of the thirty - fifth annual acm symposium on theory of computing _ , eds . l. l. larmore and m. x. goemans , 10 ( 2003 ) .a. peres,''separability criterion for density matrices``,http://dx.doi.org/10.1103/physrevlett.77.1413[phys .77 , 1413 ( 1996 ) ] .m. horodecki , p. horodecki , r. horodecki,''separability of mixed states : necessary and sufficient conditions `` , doi:10.1016/s0375 - 9601(96)00706 - 2[phys .lett . a 223 , 1 ( 1996 ) ] .p. horodecki,''separability criterion and inseparable mixed states with positive partial transposition `` , doi:10.1016/s0375 - 9601(97)00416 - 7[phys .a 232 , 333 ( 1997 ) ] ; m. horodecki , p. horodecki , r .horodecki,''mixed - state entanglement and distillation : is there a `` bound '' entanglement in nature?``,http://dx.doi.org/10.1103/physrevlett.80.5239[phys .80 , 5239 ( 1998 ) ] . b. m. terhal,''bell inequalities and the separability criterion " , doi:10.1016/s0375 - 9601(00)00401 - 1[phys .lett . a 271 , 319 ( 2000 ) ] .m. barbieri , f. de martini , g. di nepi , p. mataloni , g. m. dariano , c. macchiavello,``detection of entanglement with polarized photons : experimental realization of an entanglement witness'',http://dx.doi.org/10.1103/physrevlett.91.227901 [ phys .91 , 227901 ( 2003 ) ] .w. wieczorek , c. schmid , n. kiesel , r. pohlner , o. guhne , h. weinfurter,``experimental observation of an entire family of four - photon entangled states'',http://dx.doi.org/10.1103/physrevlett.101.010503[phys .lett . 101 , 010503 ( 2008 ) ] .n. ganguly , s. adhikari , a. s. majumdar , j. chatterjee,``entanglement witness operator for quantum teleportation'',http://dx.doi.org/10.1103/physrevlett.107.270501[phys .107 , 270501 ( 2011 ) ] .s. adhikari , n. ganguly , a. s. majumdar,``construction of optimal teleportation witness operators from entanglement witnesses'',http://dx.doi.org/10.1103/physreva.86.032315[phys .a 86 , 032315 ( 2012 ) ] .d.collins , n. gisin , n. linden , s. massar , s. popescu,``bell inequalities for arbitrarily high - dimensional systems'',http://dx.doi.org/10.1103/physrevlett.88.040404[phys .88 , 040404 ( 2002 ) ] .
the action of global unitary operations can have intriguing effects on the non-local manifestations of a quantum state. states which admit a local hidden variable model may violate bell's inequality after being transformed by a global unitary. this phenomenon assumes significance when one tries to augment non-local resources from such seemingly useless states (i.e., useless in terms of non-local tasks). equally intriguing, however, is the existence of states from which no non-local resource can be generated by any global unitary. the present work confirms the existence of such a set for two-qubit states. the set exhibits counter-intuitive features by containing within it some entangled states which remain local under the action of any unitary. furthermore, through an analytic characterization of the set, we lay down a generic prescription by which one can operationally identify states that can generate non-local resources under the action of a unitary on the composite system.
one of the fundamental questions in quantum statistical inference problem is to establish the ultimate precision bound for a given quantum statistical model allowed by the laws of statistics and quantum theory . mainly due to the non - commutativity of operators and nontrivial optimization over all possible measurements ,this question still remains open in full generality .this is very much in contrast to the classical case where the precision bounds are obtained in terms of information quantities for various statistical inference problems .the problem of point estimation of quantum parametric models is of fundamental importance among various quantum statistical inference problems .this problem was initiated by helstrom in the 1960s and he devised a method to translate the well - known strategies developed in classical statistics into the quantum case . a quantum version of fisher information was successfully introduced and the corresponding precision bound , a quantum version of cramr - rao ( cr ) bound , was derived .it turned out , however , that the obtained bound is not generally achievable except for trivial cases .a clear distinction regarding the quantum parameter estimation problem arises when exploring possible estimation strategies since there is no measurement degrees of freedom in the classical estimation problem .consider identical copies of a given quantum state and we are allowed to perform any kinds of quantum measurements according to quantum theory .a natural question is then to ask how much can one improve estimation errors by measurements jointly performed on the copies when compared to the case by those individually performed on each quantum state . the former class of measurements is called _ collective _ or _ joint _ and the latter is referred to as _separable _ in literature .it is clear that the class of collective measurements includes separable ones and one expects that collective measurements should be more powerful than separable ones in general .since one can not do better than the best collective measurement , the ultimate precision bound is the one that is asymptotically achieved by a sequence of the best collective measurements as the number of copies tends to infinity .this fundamental question has been addressed by several authors before .it was holevo who developed parameter estimation theory of quantum states by departing from a direct analogy to classical statistics .he proposed a bound , known as the holevo bound , in the 1970s aiming to derive the fundamental precision limit for quantum parameter estimation problem , see ch .6 of his book . at that time, it was not entirely clear whether or not this bound is a really tight one , i.e. , the asymptotic achievability by some sequence of measurements . over the last decade, there have been several important progress on asymptotic analysis of quantum parameter estimation theory revealing that the holevo bound is indeed the best asymptotically achievable bound under certain conditions .these results confirm that the holevo bound plays a pivotal role in the asymptotic theory of quantum parameter estimation problem . despite the fact that we now have the fundamental precision bound, the holevo bound has a major drawback : it is not an explicit form in terms of a given model , but rather it is written as an optimization of a certain nontrivial function . 
therefore , unlike the classical case , where the fisher information can be directly calculated from a given statistical model , the structure of this bound is not transparent in terms of the model under consideration .having said the above introductory remarks , we wish to gain a deeper insight into the structure of the holevo bound reflecting statistical properties of a given model . to make progress along this line of thoughts ,we take the simplest quantum parametric model , a general qubit model , and analyze its holevo bound in detail . since explicit formulas for the holevo bound for mixed - state models with one and three parameters and pure - state models are known in literature , the case of two - parameter qubit model is the only one left to be solved .the main contribution of this paper is to derive an explicit expression for the holevo bound for any two - parameter qubit model of mixed - states without referring to a specific parametrization of the model .remarkably , the obtained formula depends solely on a given weight matrix and three previously known bounds : the symmetric logarithmic derivative ( sld ) cr bound , the right logarithmic derivative ( rld ) cr bound , and the bound for d - invariant models .this result immediately provides necessary and sufficient conditions for the two important cases .one is when the holevo bound coincides with the rld cr bound and the other is when it does with the sld cr bound .we also show that a general model other than these two special cases exhibits an unexpected property , that is , the structure of the holevo bound changes smoothly when the weight matrix varies .we note that similar transition has been obtained by others for a specific parametrization of two - parameter qubit model . herewe emphasize that our result is most general and is expressed in terms only of the weight matrix and two quantum fisher information .the main result of this paper is summarized in the following theorem ( the detail of these quantities will be given later ) : consider a two - parameter qubit model of mixed states , which changes smoothly about variation of the parameter .denote the sld and rld fisher information matrix by and , respectively , and define the sld and rld cr bounds by &={\mathrm{tr}\left\{w g_\theta^{-1}\right\}},\\ c_\theta^r[w]&={\mathrm{tr}\left\{w \re\tilde{g}_\theta^{-1}\right\ } } + { \mathrm{trabs}\left\{w\im\tilde{g}_\theta^{-1}\right\}},\end{aligned}\ ] ] respectively . here , is a given positive definite matrix and is called a weight matrix ( is defined after eq . . ) .introduce another quantity by c_^z[w]:=\{w z _ } + \{wz _ } , where is a hermite matrix defined by z_:=_i , j\{1,2},z_^ij:= ( _ l_^j l_^i ) .here are a linear combination of the sld operators ; with denoting the component of the inverse of sld fisher information matrix and sld operators . with these definitions ,we obtain the following result : [ thm1 ] the holevo bound for any two - parameter qubit model under the regularity conditions is [ eq1 ] c_^h[w]= c_^r[w ] c_^r[w ] + c_^r[w]+s _ , where the function ] is well defined .( see the discussion after eq . .) 
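the displayed equations in the theorem above are partially garbled by extraction. the following latex block records the piecewise form as we read it from the theorem statement and the optimization carried out later in the paper, with c^s, c^r and c^z the sld cr, rld cr and d-invariant bounds; it is our reconstruction, not a verbatim quote.

c_\theta^H[W] \;=\;
\begin{cases}
  c_\theta^R[W], &
    c_\theta^R[W] \ge \tfrac12\bigl(c_\theta^Z[W]+c_\theta^S[W]\bigr),\\[1ex]
  c_\theta^S[W] + \dfrac{\bigl(c_\theta^Z[W]-c_\theta^S[W]\bigr)^2}
                        {4\bigl(c_\theta^Z[W]-c_\theta^R[W]\bigr)}, &
    \text{otherwise.}
\end{cases}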
the above main result [ thm1 ] sheds several new insights on the quantum parameter estimation problems .first note that the form of the holevo bound changes according to the choice of weight matrices .this kind of transition phenomenon has never occurred in the classical case .second surprise is to observe the appearance of the rld cr bound in the generic two - parameter estimation problem .as we will provide in the next section , the rld cr bound has been shown to be important for a special class of statistical models , known as a d - invariant model . here, we explicitly show that it also plays a major role for non d - invariant models .last , in many of previous studies parameter estimation problems , the precision bound is either expressed in terms of the sld or rld fisher information , but the second case of the above expression depends _ both _ on the sld and rld fisher information . to see this explicitly ,we can rewrite it as +s_\theta[w]= { \mathrm{tr}\left\{wg_\theta^{-1}\right\ } } \\ + \frac14\ \frac{\left({\mathrm{trabs}\left\{w\im\tilde{g}_\theta^{-1}\right\}}\right)^2}{{\mathrm{tr}\left\{w(g_\theta^{-1}-\re\tilde{g}_\theta^{-1})\right\}}}. \end{gathered}\ ] ] all these findings will be discussed in details together with examples .the rest of this paper continues as follows .section [ sec2 ] provides definitions and some of known results for parameter estimation theory within the asymptotically unbiased setting . in sec .[ sec3 ] , a useful tool based on the bloch vector is introduced and then the above main theorem is proved .discussions on the main theorem are presented in sec .section [ sec5 ] gives several examples to illustrate findings of this paper .concluding remarks are listed in sec .most of the proofs for lemmas are deferred to appendix [ sec : appb ] .supplemental materials are given in appendix [ sec : appc ] .in this section , we establish definitions and notations used in this paper .we then list several known results regarding the holevo bound to make the paper self - contained .consider a -dimensional hilbert space ( ) and a -parameter family of quantum states on it : : = \{_| = ( ^1,^2, ,^k)^k } , where is an open subset of -dimensional euclidean space .the family of states is called a _ quantum statistical model _ or simply a _ model_. the model discussed throughout the paper is assumed to satisfy certain regularity conditions for the mathematical reasons . for our purpose ,the relevant regularity conditions are : i ) the state is faithful , i.e. , is strictly positive .ii ) it is differentiable with respect to these parameters sufficiently many times .iii ) the partial derivatives of the state are all linearly independent . in the rest of this paper ,the regularity conditions above are taken for granted unless otherwise stated .for a given quantum state , we define the sld and rld inner products by respectively , for any ( bounded ) linear operators on . here , denotes the hermite conjugation . 
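the inner-product formulas themselves were stripped in extraction; the standard definitions, which are consistent with the explicit fisher-matrix expressions given below and are presumably what the stripped equations contained, are

\langle X, Y\rangle_{\rho} \;=\; \tfrac12\,\mathrm{tr}\!\left(\rho\,(Y X^{*} + X^{*} Y)\right)
\quad\text{(SLD inner product)},
\qquad
\langle X, Y\rangle_{\rho}^{+} \;=\; \mathrm{tr}\!\left(\rho\, Y X^{*}\right)
\quad\text{(RLD inner product)}.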
given a -parameter model , the sld operators and rld operators are formally defined by the solutions to the operator equations : the sld fisher information matrix is defined by {i , j\in\{1,\dots , k\ } } \\\nonumber g_{\theta , ij}&:={\langlel_{\theta , i},l_{\theta , j}\rangle_{\rho_{\theta}}}={\mathrm{tr}\left(\rho_{\theta}\frac12 \big(l_{\theta , i}l_{\theta , j}+l_{\theta , j}l_{\theta , i } \big)\right ) } , \end{aligned}\ ] ] and the rld fisher information is {i , j\in\{1,2,\dots , k\ } } \\ \label{rldfisher } \tilde{g}_{\theta , ij}&:={\langle\tilde{l}_{\theta , i},\tilde{l}_{\theta , j}\rangle_{\rho_{\theta}}^{+}}={\mathrm{tr}\left(\rho_{\theta}\tilde{l}_{\theta , j}\tilde{l}_{\theta , i}^*\right)}. \end{aligned}\ ] ] define the following linear combinations of the sld and rld operators : l_^i:= _j=1^k(g_^-1)^jil_,j _ ^i:=_j=1^k(_^-1)^ji_,j . from these definitions ,the following orthogonality conditions hold .[ orthcond ] _^i , l_,j__=^i_j , _ ^i,_,j__^+=^i_j .these operators with upper indices are referred to as the _ sld and rld dual operators _ , respectively .consider i.i.d .extension of a given state and we define extended model by [ nmodel ] ^(n):=\{_^n | ^k } .the main objective of quantum statisticians is to perform a measurement on the tensor state and then to make an estimate for the value of the parameter based on the measurement outcomes . heremeasurements are described mathematically by a positive operator - valued measure ( povm ) and is denoted as .an estimator , which is a purely classical data processing , is a ( measurable ) function taking values on and is denoted as .they are where is the identity operator on and we assume that povms consist of discrete measurement outcomes . for continuous povms, we replace the summation by an integration .a pair is called a _ quantum estimator _ or simply an _ estimator _ when is clear from the context and is denoted by .the performance of a particular estimator can be compared to others based on a given figure of merit and then one can seek the best " estimator accordingly . as there is nouniversally accepted figure of merit , one should carefully adopt a reasonable one depending upon a given situation .for example , a specific prior distribution for the parameter is known , the bayesian criterion would be meaningful to find the best bayesian estimator .if one wishes to avoid bad performance of estimators , the min - max criterion provides an optimal one that suppresses such cases . in this paper , we are interested in analyzing estimation errors at specific point , that is , the pointwise estimation setting . for a given model and an estimator , we define a _ bias _ at a point as &:=\sum_{x\in\cx_n}(\hat\theta_{n}(x)-\theta)p^{(n)}_\theta(x)=\eof{\hat\theta_{n}}-\theta,\\ \mathrm{with}&\quad p^{(n)}_\theta(x ) : = { \mathrm{tr}\left(\rhon\pi^{(n)}_x\right)},\end{aligned}\ ] ] where denotes the expectation value of a random variable with respect to the probability distribution .note that the bias ] in the large expansion : _^(n)[^(n)|w ] + o(n^-2 ) , i.e. , the fastest decaying rate for the mse .mathematically , we define the cr type bound for the mse by the following optimization problem : c_:=_\{^(n ) } \ { _ nn_^(n)[^(n)|w ] } , where the infimum is taken over all possible sequences of estimators that is asymptotically unbiased ( a.u . 
) .note that this bound depends both on the weight matrix and the model at .the symbol appearing in the bound ] is [ zmatrix ] z_:= _ i , j\{1, ,k } , and trabs denotes the sum of the absolute values of with for some invertible matrix .we note the following relation also holds for any anti - symmetric operator : \{wx}=_i|_i|=\{|w^1/2xw^1/2 | } , where denotes the absolute value of a linear operator .the holevo bound is defined through the following optimization : [ hbound ] c_^h[w]:=__h_. the derivation of the above optimization is well summarized in hayashi and matsumoto .holevo showed that this quantity is a bound for the mse for estimating a single copy of the given state under the locally unbiased condition : _^(1)[|w]c_^h[w ] , holds for any locally unbiased estimator .the nontrivial property of the holevo bound is the additivity : c_^h[w,]=n^-1c_^h[w , _ ] , where the notation ] holds for all weight matrices .there exist several different approaches upon proving the above theorem .hayashi and matsumoto proved the case for a full qubit model first .gu and kahn introduced a different tool based on ( strong ) quantum local asymptotic normality to prove the qubit case .this was further generalized to full models on any finite dimensional hilbert space . however , all these proofs depend on a specific parametrization of quantum states .more general proof has been recently established by yamagata , fujiwara , and gill .this theorem implies that if we choose an optimal sequence of estimators , the mse behaves as \{w v^(n ) _ } + o(n^-2 ) , for sufficiently large .that is the holevo bound is the fastest decaying rate for the mse .although the holevo bound stands as an important cornerstone to set the fundamental precision bound , the definition contains a nontrivial optimization .the main motivation of our work , as stated in the introduction , is to perform this optimization explicitly for any given model for qubit case .the result shows several nontrivial aspects of parameter estimation in quantum domain . before going to present our result ,we summarize several known results . in this subsection, we consider two special cases where analytical forms of the holevo bound are known . for a given -parameter model on the hilbert space ,let us denote sld and rld fisher information matrices by and , respectively , eqs .( [ sldfisher ] , [ rldfisher ] ) .define the sld and rld cr bounds by &:={\mathrm{tr}\left\{wg_\theta^{-1}\right\}},\\ \label{rldbound } c_\theta^r[w]&:={\mathrm{tr}\left\{w\re\tilde{g}_\theta^{-1}\right\}}+{\mathrm{trabs}\left\{w{\im}\tilde{g}_\theta^{-1}\right\}},\end{aligned}\ ] ] respectively . throughout the paper ,we use the notation ( ) representing the real ( imaginary ) part of the inverse matrix of the rld fisher information matrix .the well - known fact is that the sld and rld cr bounds can not be better than the holevo bound : [ lem2 ] for a given model satisfying the regularity conditions , the holevo bound is more informative than the sld and the rld cr bound , i.e. , \ge c_\theta^s[w] ] hold for an arbitrary weight matrix .proof can be found in the original work by holevo that is summarized in his book .more compact proof was stated by nagaoka .see also hayashi and matsumoto .when the number of parameters is one , the problem can be reduced significantly . 
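the quantity trabs{.} used throughout is, per the relation quoted above, the sum of the absolute values of the eigenvalues of w^{1/2} x w^{1/2}. a minimal numerical helper (python/numpy + scipy; the function name is ours):

import numpy as np
from scipy.linalg import sqrtm

def trabs(W, X):
    """trabs{W X} = sum of |eigenvalues| of W^(1/2) X W^(1/2),
    for W positive definite and X hermitian or real anti-symmetric (e.g. Im Z_theta)."""
    Ws = sqrtm(W)
    M = Ws @ X @ Ws
    return float(np.sum(np.abs(np.linalg.eigvals(M))))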
in this case , there can not be any imaginary part for the matrix and thus the minimization is reduced to minimizing the mse itself .[ thm3 ] for any one - parameter model , the holevo bound coincides with the sld cr bound , i.e. , [ 1bound ] c_^h= , holds for all where is the sld fisher information at .note that there is no weight matrix since we are dealing with a scalar mse for the one - parameter case .importantly , there is no gain from collective povms for one - parameter models .existence of a povm whose mse is equal to this bound is discussed independently by several authors .consider an arbitrary -parameter model and let ( ) be the sld operators at .the linear span of sld operators with real coefficients is called the _ sld tangent space _ of the model at : t_():=_\{l_,1,l_,2, ,l_,k } .any elements of the sld tangent space , , satisfy and it is not difficult to see that the space is essentially a real vector space with the dimension .holevo introduced a super - operator , called a _ commutation operator _ , as follows .given a state on , let be the set of linear operators on , then is a map from to itself defined through the following equation : y,(x)_=[y , x]_,x , y . here, is the sld inner product and \rho={\mathrm{tr}\left(\rho[x^*,\,y]\right)}/(2{\mathrm{i}}) ] .when the model is d - invariant at , holds at and further the holevo bound is expressed as w c_^h[w]=c_^r[w ] = h_. this statement can be proven in several different manners .in passing , we note that the expression ] in terms of the matrix .when the model is not d - invariant , ] for all weight matrices . nevertheless ,as will be shown in this paper , this is an important quantity and we call it as the _ d - invariant bound _ in our discussion. we note that this quantity was also named as the _ generalized rld cr bound _ by fujiwara and nagaoka in the following sense .when a model fails to satisfy some of the regularity conditions , the rld operators do not exist always . even in this case , when the model is d - invariant , then the above bound is well defined and provides the achievable bound for a certain class of models , known as the coherent model .another remark regarding this proposition is that the converse statement also holds .[ thmdinv ] for any -parameter model on any dimensional hilbert space under the regularity conditions , the following equivalence holds : =c_\theta^r[w].\\ \leftrightarrow&\ \forall w\in\cw\ c_\theta^h[w]=c_\theta^z[w ] .\end{aligned}\ ] ] this equivalence for the d - invariant model might have been known for some experts , but it was not stated explicitly in literature to our knowledge .sketch of proof is given in appendix [ sec : appc-2 ] for the sake of reader s convenience .we remark that the holevo bound for a general model , which is not d - invariant , exhibits a gap among ] , and ] and ] matrix and the holevo function read &= { \langle\v x^i|\tilde{q}_\theta^{-1}\v x^j\rangle } , \\\nonumber h_\theta[\vec{x}|w]&=\sum_{i , j=1}^2 \left [ w_{ij}{\langle\v x^i|q_\theta^{-1}\v x^j\rangle } + \sqrt{\det w}\big| { \langle\v x^i|f_\theta\v x^j\rangle } \big| \right],\end{aligned}\ ] ] for a given weight matrix {i , j\in\{1,2\}} ] is defined by ={\mathrm{tr}\left\{wg_\theta^{-1}\right\ } } + { \langle\v\ell_{\theta}^\bot|{q}_{\theta}^{-1}\v\ell_{\theta}^\bot\rangle } ( \vec\xi|w\vec\xi)\\ + 2\sqrt{\det w}\left| { \langle\v \ell_\theta^1|f_\theta\v \ell_\theta^2\rangle}+(1-s_\theta^2)(\vec{\gamma}_\theta|\vec{\xi } ) \right| . 
\end{gathered}\ ] ] in this expression , we introduce the standard inner product for two - dimensional real vector space by and is given by eq . .the derivation for this lemma is given in appendix [ sec : appb-0 ] .in the following , we carry out the above optimization to derive the main result of this paper , theorem [ thm1 ] .we first list several definitions and lemmas . for a given two - parameter qubit mixed - state model ,the sld cr , rld cr , and d - invariant bounds are defined by &={\mathrm{tr}\left\{w g_\theta^{-1}\right\}},\\ \nonumber c_\theta^r[w]&={\mathrm{tr}\left\{w \re\tilde{g}_\theta^{-1}\right\ } } + { \mathrm{trabs}\left\{w\im\tilde{g}_\theta^{-1}\right\}},\\ \nonumber c_\theta^z[w]&={\mathrm{tr}\left\{w \re z_\theta\right\ } } + { \mathrm{trabs}\left\{w\im z_\theta\right\ } } , \end{aligned}\ ] ] respectively , where is the sld fisher information matrix , is the rld fisher information matrix , and {i , j\in\{1,2\}} ] .[ lem8 ] for any two - parameter qubit model , the following conditions are equivalent . 1 . is d - invariant at .2 . at .3 . at .furthermore , we have the following equivalent characterization for global d - invariance . 1 . is globally d - invariant . for all . is independent of .three remarks regarding the above lemmas are listed : first , imaginary parts of the inverse of the rld fisher information matrix and the matrix are always identical for two - parameter qubit mixed - state models , i.e. , , see proof in the appendix .second , if a model expressed as the bloch vector contains the origin , the model is always d - invariant at this point .this is because the condition is met at .last , a globally d - invariant model is possible if and only if the state is generated by some unitary transformation .this is because the condition [ lem8]-6 in lemma [ lem8 ] is equivalent to preservation of the length of the bloch vector .finally , we need the following lemma for the optimization : [ lem9 ] for a given positive matrix , a real vector , and a real number , the minimum of the function f()=(|a)+2|(|)+c| , is given by _ ^2f()=2|c|-(|a^-1 ) |c|(|a^-1 ) + |c|<(|a^-1 ) , where the minimum is attained by _ * = -(c)a^-1|c|(|a^-1 ) + -a^-1 |c|<(|a^-1 ) , where is the sign of .proofs for the above three lemmas are given in appendix [ sec : appb-2]-[sec : appb-4 ] .we now prove theorem [ thm1 ] . from the expression of the holevo function, we can apply lemma [ lem9 ] by identifing we need to evaluate and and they are calculated as follows .-c_\theta^r[w ] , \end{aligned}\ ] ] where lemma [ lem7]-1 and [ lem7]-3 are used to get the last line .lemma [ lem7]-2 immediately gives 2|c|=\{wz_}=c_^z[w]-c_^s[w ] .we obtain if |c|(|a^-1 ) c_^r[w]12 ( c_^z[w]+c_^s[w ] ) is satisfied , the holevo bound is &=\nonumber \ds c_\theta^s[w]+ ( c_\theta^z[w]-c_\theta^s[w])- ( c_\theta^z[w]-c_\theta^r[w])\\ & = c_\theta^r[w ] . \end{aligned}\ ] ] if <({c_\theta^z[w]+c_\theta^s[w]})/{2} ] is defined in eq . .this proves the theorem . 
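putting the pieces together, the closed form of theorem [thm1] can be evaluated directly from the inverse sld fisher matrix, the inverse rld fisher matrix and the weight matrix. the sketch below follows our reading of the (partly garbled) equations, uses the lemma quoted in the text that for two-parameter qubit models re z = g^{-1} and im z = im(g~^{-1}), and reuses trabs from the earlier helper; the function name is ours.

import numpy as np
# reuses trabs(W, X) from the earlier sketch

def qubit_two_param_bounds(W, G_inv, Gt_inv):
    """sld, rld, d-invariant and (per theorem 1) holevo bounds for a two-parameter
    qubit model, given the inverse sld fisher matrix G_inv (real 2x2), the inverse
    rld fisher matrix Gt_inv (complex 2x2) and a positive weight matrix W."""
    ReGt, ImGt = Gt_inv.real, Gt_inv.imag
    C_S = float(np.trace(W @ G_inv))
    C_R = float(np.trace(W @ ReGt)) + trabs(W, ImGt)
    # for two-parameter qubit models Re Z = G^{-1} and Im Z = Im(Gt^{-1}) (lemmas in the text)
    C_Z = C_S + trabs(W, ImGt)
    if C_R >= 0.5 * (C_Z + C_S):
        C_H = C_R
    else:
        C_H = C_S + (C_Z - C_S) ** 2 / (4.0 * (C_Z - C_R))
    return C_S, C_R, C_Z, C_H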
we remark that from lemma [ lem7 ] and the positivity of we always have the relation [ zger ] c_^z[w]c_^r[w ] , and the equality if and only if is d - invariant at by lemma [ lem8]-3 .note if is d - invariant at ( ) , the condition <({c_\theta^z[w]+c_\theta^s[w]})/{2} ] can not be satisfied .thus , the obtained holevo is well defined for all and for arbitrary weight matrix .the optimal set of hermite operators attaining the holevo bound can be given by lemma [ lem9 ] as follows .define an hermite matrix by l_^:= -|v_^i+v_^ , and the function by \ge\frac12(c_\theta^z[w]+c_\theta^s[w])\\[1ex ] \frac{w^{1/2}\im z_\theta^{12}}{c_\theta^z[w]-c_\theta^r[w ] } \quad \mathrm{otherwise } \end{cases}. \end{gathered}\ ] ] then , we have ] .this case should be understood as the limit . the other expression shown in eq. follows from the first line of eq . by noting -c_\theta^r[w]= \mathrm{tr}\{w(g_\theta^{-1}-\re\tilde{g}_\theta^{-1})\} ] .in this section , we shall discuss the consequences of theorem [ thm1 ] .this brings several important findings of our paper .first is two conditions that characterize special classes of qubit models .second is a transition in the structure of the holevo bound depending on the choice of the weight matrix .the general formula for the holevo bound for any two - parameter model is rather unexpected in the following sense .first of all , it is expressed solely in terms of the three known bounds and a given weight matrix .second , a straightforward optimization for a nontrivial function reads to the exactly same expression as the rld cr bound when the condition \ge(c_\theta^z[w]+c_\theta^s[w])/2 ] can not be satisfied .( see the remark after eq . . ) therefore , the holevo bound is always identical to the rld cr bound in this case .next we show the left condition implies the right in eq . .if = c_\theta^r[w] ] is always satisfied , and hence the holevo bound coincides with the sld cr bound for all choices of the weight matrix .this follows from the second line of the expression . : the condition implies an existence of a weight matrix satisfying =c_\theta^s[w] ] for some weight matrix .the expression immediately implies which gives .therefore , three conditions are all equivalent .we have three remarks regarding this proposition .first , in terms of the sld bloch vectors , the necessary and sufficient condition is also written as \right)}=0\\ & \leftrightarrow{\langle\stheta|\v \ell_{\theta,1}\times\v \ell_{\theta,2}\rangle}=0\\ & \leftrightarrow\ { \langle\stheta| \del_{1}\stheta\times\del_2\stheta\rangle}=0,\end{aligned}\ ] ] which is easy to check by calculating the bloch vector of a given model .second , we note that given a symmetric matrix , for all positive weight matrix implies as a matrix inequality . when the holevo bound is same as the sld cr bound , we see that the mse matrix ] , i.e. , commutativity of and on the trace of the state . when this holds , the quantum parameter estimation problem becomes similar to the classical case asymptotically . in the rest of the paper, we call a model _ asymptotically classical _ when this condition is satisfied. a similar terminology , `` quasi classical model , '' was used by matsumoto in the discussion of parameter estimation of pure states . here , we emphasize that classicality arises only in the asymptotic limit and hence , this terminology is more appropriate .we also note that the equivalence between was stated in the footnote of the paper based on the unpublished work of hayashi and matsumoto . 
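the scalar-triple-product criterion for asymptotic classicality quoted above is easy to check numerically for any bloch-vector parametrization s(theta); the finite-difference sketch below uses a made-up planar model as an example, not one taken from the paper.

import numpy as np

def asympt_classical(s, theta, h=1e-6, tol=1e-8):
    """check  s . (d1 s x d2 s) = 0  at theta for a bloch-vector map s(theta1, theta2)."""
    t1, t2 = theta
    d1 = (s(t1 + h, t2) - s(t1 - h, t2)) / (2 * h)
    d2 = (s(t1, t2 + h) - s(t1, t2 - h)) / (2 * h)
    return abs(np.dot(s(t1, t2), np.cross(d1, d2))) < tol

# example: a model confined to the x-z plane of the bloch ball passes the test
s_plane = lambda a, b: np.array([a, 0.0, b])
print(asympt_classical(s_plane, (0.3, 0.4)))   # -> True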
hereour proof is shown to be simple owing to the general formula obtained in this paper .last , a great reduction occurs in the structure of the fundamental precision bound for this class of models .we note that achievability of the sld cr bound for specific models have been reported in literature in discussion on quantum metrology . here, we provide a simple characterization , the necessary and sufficient condition , of such special models in the unified manner .having established the above two propositions , we can conclude that a generic two - parameter qubit model other than d - invariant or asymptotically classical ones exhibits the nontrivial structure for the holevo bound in the following sense : the structure changes smoothly as the weight matrix varies . for a certain choice of , it coincides with the rld cr bound and it becomes different expression for other choices .put it differently , consider any model that is not asymptotically classical , then we can always find a certain region of the weight matrix set such that the holevo bound is same as the rld cr bound . this point is examined in detail in the next subsection and examples are provided in the next section for illustration .let us consider a two - parameter qubit model that is neither d - invariant nor asymptotically classical . in this case, the set of all possible weight matrices is divided into three subsets .the first two sets are and in which -({c_\theta^z[w]+c_\theta^s[w]})/{2} ] for and it is expressed as =c^r_\theta[w]+s_\theta[w] ] holds for the boundary . in the following we characterize these sets explicitly . to do this, we first note that the degree of freedom for the weight matrix in our problem is three due to the condition of being real symmetric .second , we can show that a scalar multiplication of the weight matrix does not change anything except for the multiplication of over all expression of the holevo bound .thus , we can parametrize the weight matrix by two real parameters . for our purpose ,we employ the following representation up to an arbitrary multiplication factor : w = u _ ( cc1 & w w_2 + w w_2 & w_2 ^ 2 ) u_^ * , where and are imposed from the positivity condition and the real orthogonal matrix is defined in terms of eq . by u_= ( cc_,1 & -_,2 + _ , 2 & _ , 1 ) .the assumption of the model under consideration then yields &=| { \langle\v\ell_\theta^1|f_\theta\v\ell_\theta^2\rangle}|\sqrt{\det w } -{\mathrm{tr}\left\{w(g_\theta^{-1}-\re\tilde{g}_\theta^{-1})\right\ } } \\\nonumber = & \big [ |{\langle\v\ell_\theta^1|f_\theta\v\ell_\theta^2\rangle}|\sqrt{1-w^2}-{\mathrm{tr}\left\{g_\theta^{-1}-\re\tilde{g}_\theta^{-1}\right\}}\,w_2 \big]w_2 . \end{aligned}\ ] ] therefore , by defining a constant solely calculated from the given model , we obtain the sets as follows . where the common conditions , and also need to be imposed to satisfy the positivity of the weight matrix .so far we only consider models which consist of mixed states .it is known that collective measurements do not improve the mse for pure - state models .in other words , the holevo bound is same as the bound achieved by separable measurements as far as pure - state models are concerned . in this subsection, we examine the pure - state limit for our general result .when a mixed - state model is asymptotically classical , the holevo bound is identical to the sld cr bound .this should be true in the pure - state limit , and this agrees with the result of matsumoto . 
when a model is d - invariant , on the other hand , it is shown that the rld cr bound can be achieved .this also holds in the pure - state limit and we examine the pure - state limit for a generic mixed - state model below .we first note that we can not take the limit for sld and rld operators directly .this is because there are the terms appearing in the denominators .however , the sld and rld dual operators are well defined even in the pure - state limit . by direct calculation, we can show that the sld and rld dual vectors ( [ sldcot ] , [ rldcot ] ) are written as for any two - parameter qubit model .thus , as long as , they converge in the pure - state limit .this condition is also expressed as , and hence is equivalent to in the pure - state limit ., i.e. , is not asymptotically classical .second , the same warning applies to the sld and rld fisher information .however , the inverse of the sld fisher information matrix is well defined even for the pure - state limit .this is because the component of the inverse of sld fisher information matrix is and this has the well - defined limit .the same reasoning can be applied to the inverse of the rld fisher information matrix in the pure - state limit .last , let us examine the general formula .it is straightforward to show that the function vanishes in the pure - state limit . in other words ,the general formula given in theorem [ thm1 ] becomes =c^r_\theta[w] ] for all weight matrices .we note that this kind of d - invariant models possesses symmetry and has been studied by many authors , see for example ch .4 of holevo s book and hayashi and references therein . consider the following model : _ = \{=f^1()_1+f^2()_2 | } , where are unit vectors , which are not necessarily orthogonal to each other , and are scalar ( differentialble ) functions of . the parameter region is specified by an arbitrary open subset of the set ; .we can show that this model is asymptotically classical because of , and the holevo bound is ={\mathrm{tr}\left\{wg_\theta^{-1}\right\}}\forall w>0 ] , respectively , and the three bounds appearing in theorem [ thm1 ] read &={\mathrm{tr}\left\{w\right\}}- \frac{1}{1-\theta_0 ^ 2 } ( \vec{\theta}|w\vec{\theta}),\\\nonumber c_\theta^r[w]&= \frac{1-s_\theta^2}{1-\theta_0 ^ 2 } { \mathrm{tr}\left\{w\right\ } } + 2\sqrt{\frac{1-s_\theta^2}{1-\theta_0 ^ 2}}\ , |\theta_0| \sqrt{\det w},\\ \nonumber c_\theta^z[w]&= { \mathrm{tr}\left\{w\right\}}- \frac{1}{1-\theta_0 ^ 2 } ( \vec{\theta}|w\vec{\theta})+2\sqrt{\frac{1-s_\theta^2}{1-\theta_0 ^ 2}}\ , |\theta_0| \sqrt{\det w } , \end{aligned}\ ] ] where is introduced for convenience . in the followinglet us analyze the structure of the holevo bound using the following parametrization of a weight matrix : where we normalize the trace of to be one and , .this parametrization is different from the one analyzed in sec .[ sec4 - 2 ] , yet is convenient for the purpose of visualization .it is easy to see that the effect of the matrix is to mix two parameters and by rotating them about an angle . since for this particular parametrization, we see that the rld cr bound is independent of the weight parameter .the other bounds depend on two parameters .we are interested in how the holevo bound ] , whereas the white - meshed region indicates the case for =c^r_\theta[w] ] holds , whereas the white - meshed region indicates the case for =c^r_\theta[w_0] ] , whereas the white - meshed region indicates the case for =c^r_\theta[w] ] to calculate the determinant as this proves the relation ; . \2 . 
: + from the definition for the matrix , the imaginary part is expressed as z_=v_^1|f_v_^2 ( cc0 & 1 + -1 & 0 ) , and the straightforward calculation yields \{wz_}=2| v_^1|f_v_^2| . the imaginary part of the rld fisher information matrix is _=( cc0 & -1 + 1 & 0 ) , and the imaginary part of the inverse is \{_^-1}= ( cc0 & 1 + -1 & 0 ) , where we use .it is easy to show and thus we obtain the important relationship ; .this proves the claim . -c_\theta^r[w ] \right)/{(1-s_\theta^2)} ] and its inverse by {i , j} ] , then , the following conditions are equivalent . 1 . is d - invariant at .2 . , .3 . 4 . , . , with respect to with respect to .we prove this lemma by the chain ; .suppose that a given model is d - invariant , this is equivalent to say that the action of the commutation operator on the sld dual operators is expressed as ( l_^i)=_jc^jil_,j , with some real coefficients .these coefficients are expressed as c^ji=_^i,(l_^j ) _ _ , which directly from the orthogonality condition . using the relation ,the right hand side is also expressed as , and if the model is d - invariant at , [ app3 ] ( l_^i)=_j ( z_)^jil_,j . hence we show . next , if the condition holds , the sld inner product between and the rld dual operator gives _^i,(l_^j)__=_k c^kj _ ^i , l_^k__=c^ji .the left hand side is also calculated from eq . as .therefore , we show [ app4 ] c^ji = z_^ij=-(_^ij - g_^ij ) _ ^-1=z _ , holds , if the condition holds , that is , .consider an arbitrary linear operator and assume the condition . in this case , the rld inner product between and is calculated as where eq ., and the several equations presented in sec . [ sec2 - 3 - 2 ] are used .since is arbitrary , it implies [ app5 ] l_^i=_^ii\{1,2, ,k}. therefore , we show .next , let us assume the condition and we show that this implies the condition , that is , , with respect to with respect to .this is because the remaining to be shown is that the condition , implies the d - invariance of the model .consider a set of hermite operators and suppose the above condition .since is equivalent to with the canonical projection on , we can rewrite it as [ app-7 ] x^i , i , j ( ^i - l_^i , l_,j _ _^i - l_^i , l_,j__^+ ) .the use of eq .leads .then , the equivalent condition can be applied to conclude that the subspace is invariant under the action of linear operator , that is , the model is d - invariant . \i ) proof for the rld cr bound case : + the sufficiency ( d - invariant model = c_\theta^r[w] ] holds for , where is the collection of the rld dual operators . since the matrix ] for all , then the model is d - invariant .+ ii ) proof for the d - invariant bound : + this equivalence is a direct consequence of the proposition [ propdinv ] ( ) and the property of the canonical projection given in appendix [ sec : app-3 ] . first , let us note [ dequiv1 ] w c_^h[w]=c_^z[w ] _ z_z_. this is because is an element of the set and = z_\theta ] .conversely , let us assume \ge z_\theta ] is violated .thus , lemma [ lem4]-5 and the equivalence prove this theorem . m. hayashi , in _ selected papers on probability and statistics american mathematical society translations series 2 _ , vol . * 277 * , 95 - 123 , ( amer . math . soc .( it was originally published in japanese in bulletin of mathematical society of japan , sugaku , vol . *55 * , no . 4 , 368 - 391 ( 2003 ) .
the main contribution of this paper is to derive an explicit expression for the fundamental precision bound, the holevo bound, for estimating any two-parameter family of qubit mixed states in terms of quantum versions of the fisher information. the obtained formula depends solely on the symmetric logarithmic derivative (sld) fisher information, the right logarithmic derivative (rld) fisher information, and a given weight matrix. this result immediately provides necessary and sufficient conditions for the following two important classes of quantum statistical models: those for which the holevo bound coincides with the sld cramér-rao bound, and those for which it coincides with the rld cramér-rao bound. one of the important results of this paper is that a general model other than these two special cases exhibits an unexpected property: the structure of the holevo bound changes smoothly as the weight matrix varies. in particular, it coincides with the rld cramér-rao bound for a certain region of weight matrices. several examples illustrate these findings.
the evolution of a collisionless self - gravitating system is described by two coupled equations : the vlasov equation , where is the one - particle distribution function and is the gravitational potential , and poisson s equation , -body simulations use a monte - carlo method to solve these equations .the distribution function is represented by a collection of particles : where , , and are the mass , position , and velocity of particle . over time , particles move along characteristics of ( [ eq : vlasov ] ) ; at each instant , their positions provide the density needed for ( [ eq : poisson ] ) . in many collisionless -body simulations ,the equations of motion actually integrated are where is the _softening length_. these equations reduce to the standard newtonian equations of motion if .the main reason for setting is to suppress the singularity in the newtonian potential ; this greatly simplifies the task of numerically integrating these equations ( e.g. , * ? ? ?* ) . by limiting the spatial resolution of the gravitational force ,softening also helps control fluctuations caused by sampling the distribution function with finite ; however , this comes at a price , since the gravitational field is systematically biased for .softening is often described as a modification of newtonian gravity , with the potential replaced by .the latter is proportional to the potential of a sphere with scale radius .this does _ not _ imply that particles interact like plummer spheres ; the acceleration of particle is computed from the field at the point only .but it does imply that softening can also be described as a smoothing operation ( e.g. , * ? ? ?* ) , in which the pointillistic monte - carlo representation of the density field is convolved with the kernel in effect , the source term for poisson s equation ( [ eq : poisson ] ) is replaced with the _ smoothed density _ formally , ( [ eq : nbody ] ) provides a monte - carlo solution to the vlasov equation ( [ eq : vlasov ] ) coupled with thus one may argue that a softened -body simulation actually uses standard newtonian gravity , as long as it is clear that the mass distribution generating the gravitational field is derived from the particles via a smoothing process .although plummer softening is widely used in -body simulations , its effects are incompletely understood .if the underlying density field is featureless on scales of order , softening has relatively little effect .however , -body simulations are often used to model systems with power - law density profiles ; for example , and ( * ? ? ?* hereafter nfw ) models , which have at small , are widely used as initial conditions .one purpose of this paper is to examine how softening modifies such profiles .assume that the underlying density profile is spherically symmetric and centered on the origin : .the integrand in ( [ eq : smooth - rho ] ) is unchanged by rotation about the axis containing the origin and the point , so the integral can be simplified by adopting cylindrical coordinates , where is located on the -axis at .the integral over is trivial ; for plummer smoothing , the result is where the second equality holds because the outer integral is taken over the entire axis .the first step is to examine the effect of plummer smoothing on power - law density profiles , , where .these profiles are not realistic , since the total mass diverges as .however , results obtained for power - law profiles help interpret the effects of smoothing on more realistic models . 
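the smoothing operation can be evaluated numerically for any spherical profile by integrating the kernel over shells. the kernel formula was stripped in extraction; the block below assumes it is the usual unit-mass plummer density with scale epsilon, which is what the surrounding discussion implies. python/scipy sketch; the function names are ours.

import numpy as np
from scipy.integrate import quad

def plummer_kernel(r, eps):
    """unit-mass plummer density with scale radius eps (the smoothing kernel)."""
    return 3.0 * eps**2 / (4.0 * np.pi) * (r**2 + eps**2)**(-2.5)

def smoothed_density(rho, r, eps, rmax=np.inf):
    """plummer-smoothed density of a spherical profile rho(r') at radius r:
       rho_s(r) = 2 pi int dr' r'^2 rho(r') int_{-1}^{1} dmu S(sqrt(r^2 + r'^2 - 2 r r' mu))."""
    def shell(rp):
        inner, _ = quad(lambda mu: plummer_kernel(
            np.sqrt(r*r + rp*rp - 2.0*r*rp*mu), eps), -1.0, 1.0)
        return 2.0 * np.pi * rp*rp * rho(rp) * inner
    val, _ = quad(shell, 0.0, rmax, limit=200)
    return val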
profile .dashed line is the underlying density profile ; solid curve is the result of smoothing with .the smoothed profile slightly exceeds the underlying power - law at large .[ cusp2 ] ] to 0.10 profile .dashed line is the underlying density profile ; solid curve is the result of smoothing with .the smoothed profile slightly exceeds the underlying power - law at large .[ cusp2 ] ] let the density be .the total mass enclosed within radius is . in this case, the smoothed density profile can be calculated analytically ; the double integral is this yields a remarkably simple result for the smoothed density , plotted in fig .[ cusp1 ] , where .the smoothed mass within radius , hereafter called the _ smoothed mass profile _ ; only density profiles can be smoothed .] , is let the density be .the total mass enclosed within radius is .the integral over can be evaluated , but the result is not particularly informative and the remaining integral must be done numerically .[ cusp2 ] presents the results . for ,the smoothed density exceeds the underlying power - law profile .this occurs because smoothing , in effect , spreads mass from to larger radii , and with the underlying profile dropping away so steeply this redistributed mass makes a relatively large contribution to .note that as , the smoothed density .it appears impossible to calculate the smoothed density profile for arbitrary without resorting to numerical methods , but the central density is another matter .setting , the smoothed density is the central density ratio is plotted as a function of in fig .[ dzero ] . for and , the ratio and , respectively , in accord with the results above , while as the central density diverges .the smoothed central density for an arbitrary power - law is useful in devising an approximate expression for the smoothed density profile ( appendix a.1 ) .in addition , the central density is related to the shortest dynamical time - scale present in an -body simulation , which may in turn be used to estimate a maximum permissible value for the time - step ( 4.3.1 ) .plotted as a function of .limiting values are as and as .[ dzero ] ]as noted above , both of these profiles have as . for this reason ,they are treated in parallel .the model has density and mass profiles where is the scale radius and is the total mass .the model has density and mass profiles where is again the scale radius and is a characteristic density .the double integrals required to evaluate the smoothed versions of these profiles appear intractable analytically but can readily be calculated numerically .[ rhoh ] and [ rhonfw ] present results for a range of values between and . for comparison ,both models are scaled to have the same underlying density profile at . and density .lower curves show profiles smoothed with , , , ( from top to bottom ) ; heavy curve is , dashed curve is .inset shows ratio .[ rhonfw ] ] to 0.10 and density .lower curves show profiles smoothed with , , , ( from top to bottom ) ; heavy curve is , dashed curve is .inset shows ratio .[ rhonfw ] ] the smoothed profiles shown in figs .[ rhoh ] and [ rhonfw ] are , for the most part , easily understood in terms of the results obtained for power - laws . 
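for a power-law cusp rho = A r^(-gamma) the central smoothed density reduces to a one-dimensional integral, rho_s(0) = 3 A eps^2 int_0^inf r^(2-gamma) (r^2 + eps^2)^(-5/2) dr, and the ratio to rho(eps) reproduces the gamma = 1 and gamma = 2 results derived earlier in the section while rising steeply toward gamma = 3. normalizing by rho(eps) is our reading of the figure; the sketch below checks the ratio numerically.

import numpy as np
from scipy.integrate import quad

def central_density_ratio(gamma, eps=1.0, A=1.0):
    """rho_smooth(0) / rho(eps) for the power law rho = A r^(-gamma), 0 <= gamma < 3."""
    integrand = lambda r: r**(2.0 - gamma) * (r*r + eps*eps)**(-2.5)
    val = quad(integrand, 0.0, 1.0)[0] + quad(integrand, 1.0, np.inf)[0]
    rho0 = 3.0 * A * eps**2 * val
    return rho0 / (A * eps**(-gamma))

for g in (0.5, 1.0, 2.0, 2.5):
    print(g, central_density_ratio(g))
# -> close to 1 for gamma = 1 and 2 for gamma = 2, rising steeply toward gamma = 3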
for radii , smoothing transforms central cusps into constant - density cores , just as in fig .[ cusp1 ] .if the softening length is much less than the scale length , the smoothed density within is almost independent of the underlying profile at radii .consequently , the smoothed central density , echoing the result obtained for the power - law .in addition , the actual curves in figs . [ rhoh ] and [ rhonfw ] are shifted versions of the curves in fig .[ cusp1 ] ; this observation motivates simple approximations to and described in appendix a.2 . on the other hand ,if is comparable to , the quantitative agreement between these profiles and the smoothed profile breaks down ; the smoothed density at small has a non - negligible contribution from the underlying profile beyond the scale radius . as an example , for the central density of the smoothed nfw profile is higher than the central density of the smoothed hernquist profile , because the former receives a larger contribution from mass beyond the scale radius .a somewhat more subtle result , shown in the insets , is that heavily smoothed profiles _ exceed _ the underlying profiles at radii .this is basically the same effect found with the power - law profile ( 2.2 ) ; with the underlying density dropping rapidly as a function of , the mass spread outward from smaller radii more than makes up for the mass spread to still larger radii .this effect is more evident for the hernquist profile than for the nfw profile because the former falls off more steeply for .the model has density and mass profiles where is the scale radius and is the total mass .the double integrals required to evaluate the smoothed version of this profile appear intractable analytically but can readily be calculated numerically . fig .[ rhoj ] present results for a range of values between and . , [ rhonfw ] , and [ rhoj ] .solid lines are underlying profiles ; from bottom to top , they represent jaffe , hernquist , and nfw models , respectively . dashed , dot - dashed , and dotted lines give results for , , and , respectively . [ slope ] ] to 0.10 , [ rhonfw ] , and [ rhoj ] .solid lines are underlying profiles ; from bottom to top , they represent jaffe , hernquist , and nfw models , respectively .dashed , dot - dashed , and dotted lines give results for , , and , respectively . [ slope ] ] again , much of the behavior shown in this plot can be understood by reference to the results for the power - law .in particular , for smoothing lengths , the central density is , and the curves in fig .[ rhoj ] are shifted versions of the one in fig .[ cusp2 ] .as the inset shows , for larger values of the smoothed profiles quite noticeably exceed the underlying profile ; the effect is stronger here than it is for a hernquist model because the jaffe model has more mass within to redistribute . figs .[ rhoh ] , [ rhonfw ] , and [ rhoj ] have interesting implications for -body experiments .one might expect the smoothed profiles to resolve the inner power - laws of the underlying models as long as the softening length is somewhat less than the scale radius , but that is not what these figures show .profiles smoothed with are essentially constant - density cores attached to power - law outer profiles ; the density within the core depends on , but no inner cusp per se can be seen . for , on the other hand , the smoothed profiles do appear to trace the inner power - laws over some finite range of radii , before flattening out at smaller . 
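the hernquist and nfw curves in the figures can be reproduced with the shell integral sketched earlier; the normalizations below (hernquist with M = a = 1, nfw with rho_c = r_s = 1) and the epsilon values are our own illustrative choices, not those of the paper.

import numpy as np
# reuses smoothed_density() from the earlier sketch

a, M = 1.0, 1.0
hernquist = lambda r: M * a / (2.0 * np.pi * r * (r + a)**3)
nfw       = lambda r: 1.0 / (r * (1.0 + r)**2)        # rho_c = r_s = 1

r = 0.1 * a
for eps in (0.05, 0.2, 0.8):
    print(eps,
          smoothed_density(hernquist, r, eps) / hernquist(r),
          smoothed_density(nfw, r, eps) / nfw(r))
# for eps << a the cusp at r = 0.1 a is only mildly suppressed; for eps ~ a it is erased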
only for can the inner cusps be followed for at least a decade in radius .[ slope ] helps explain this result .the underlying jaffe , hernquist , and nfw profiles all roll over gradually from their inner to outer power - law slopes between radii .thus a resolution somewhat better than is required to see the inner cusps of these models . in practice , this implies the softening parameter must be several times smaller than .since the formalism developed above is exact , numerical tests of a relation like ( [ eq : plummer - smooth ] ) for the smoothed density may seem superfluous . in practice , such tests can be illuminating as benchmarks of -body technique . inwhat follows , the smoothing formalism will be applied to actual -body calculations , to check -body methodology and to demonstrate that the formalism has real applications .putting this plan into operation requires some care . to begin with , an -body realization of a standardhernquist or jaffe profile spans a huge range of radii . typically , the innermost particle has radius or for a hernquist or jaffe profile , respectively , while for either profile , the outermost particle has radius . a dynamic range of or can be awkward to handle numerically ; even gridless tree codes may not accommodate such enormous ranges gracefully .one simple option is to truncate the particle distribution at some fairly large radius , but it s preferable to smoothly taper the density profile : \!\ ! ( 1 + \mu ) \ , \rho _ { * } \ , ( b / r)^2 \ , e^{-r / r _ { * } } \ , , & r > b \\\end{array } \right .\label{eq : tapered - models}\ ] ] where the taper radius , the values of and are fixed by requiring that and its first derivative are continuous at , and the value of is chosen to preserve the total mass .let be the logarithmic slope of the density profile at , and be the underlying mass profile ; then , , and , respectively .[ rhohjt ] ] to 0.10 , , and , respectively .[ rhohjt ] ] fig .[ rhohjt ] shows how plummer smoothing modifies tapered hernquist and jaffe profiles .both profiles have scale radius , taper radius , and mass ; these parameters will be used in all subsequent calculations . in each case, the underlying profile follows the standard curve out to the taper radius , and then rapidly falls away from the outer power law . at radii , the smoothed profiles match those shown in figs .[ rhoh ] and [ rhoj ] , apart from the factor of used to preserve total mass . at larger radii, the smoothed profiles initially track the underlying tapered profiles , but then transition to asymptotic power law tails .this occurs because the plummer smoothing kernel ( [ eq : plummer - kernel ] ) falls off as at large ; in fact , these power laws match , which is the large- approximation for a point mass smoothed with a plummer kernel .the amount of mass in these tails is negligible . in principle , it s straightforward to verify that the smoothed profiles above generate potentials matching those obtained from -body calculations . for a given density profile ,construct a realization with particles at positions ; a -body force calculation with softening yields the gravitational potential for each particle .conversely , given the smoothed density profile , compute the smoothed mass profile , and use the result to obtain the _ smoothed potential _ : with boundary condition as . 
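The last step, recovering the smoothed potential from the smoothed mass, is the standard spherical-potential integral φ(r) = -G [M(r)/r + 4π ∫_r^∞ ρ(u) u du], with φ → 0 as r → ∞. The sketch below implements it with cumulative trapezoids and checks the machinery on an unsmoothed Hernquist profile, whose potential is -GM/(r + a); substituting a tabulated smoothed density ρ_ε gives φ_ε. Grid limits and resolution are illustrative choices.

import numpy as np
from scipy.integrate import cumulative_trapezoid

G, M, a = 1.0, 1.0, 1.0
r = np.logspace(-3, 3, 2000)
rho = M * a / (2.0 * np.pi * r * (r + a)**3)        # Hernquist density as a test case

m_in  = 4.0 * np.pi * cumulative_trapezoid(rho * r**2, r, initial=0.0)   # mass within r
j     = 4.0 * np.pi * cumulative_trapezoid(rho * r, r, initial=0.0)
i_out = j[-1] - j                                   # 4 pi * integral from r to r_max of rho u du
phi   = -G * (m_in / r + i_out)

# analytic check: the Hernquist potential is -G M / (r + a)
print("max abs error:", np.max(np.abs(phi + G * M / (r + a))))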
for each particle , the -body potential be compared with the predicted value ; apart from fluctuations , the two should agree .a major complication is that fluctuations in -body realizations imprint spatially coherent perturbations on the gravitational field ; potentials measured at adjacent positions are not statistically independent .for example , the softened potential at the origin of an -body system is if the radii are independently chosen , this expression is a monte carlo integral , which will deviate from by an amount of order ; moreover , everywhere within the potential will deviate upward or downward by roughly as much as it does at .one way around this is to average over many -body realizations , but this is tedious and expensive .an easier solution is to sample the radial distribution uniformly .let be the mass profile associated with the underlying density .assign all particles equal masses , and determine the radius of particle by solving for to .this eliminates radial fluctuations ; the monte - carlo integral for is replaced with a panel integration uniformly spaced in , and the central potential is obtained with relatively high accuracy .this trick does not suppress _ non - radial _ fluctuations , so the -body potential evaluated at any point still differs from the true . but a non - radial fluctuation which creates an overdensity at some position must borrow mass from elsewhere on the sphere ; over - dense and under - dense regions compensate each other when averaged over the entire surface of the sphere .the resulting potential fluctuations likewise average to zero over the sphere , as one can show by using gauss s theorem to evaluate the average gradient of the potential and integrating inward from .finally , a subtle bias arises if the particles used to generate the potential are also used to probe the potential , since local overdensities are sampled more heavily . to avoid this, the potential can be measured at a set of points which are _ independent _ of the particle positions .then should display some scatter , but average to zero when integrated over test points within a spherical shell . between -body and smoothed potentials for tapered hernquist ( left ) and jaffe ( right ) models ,normalized by central potential .vertical dashed lines show value of .points are results for uniform realizations ; jagged curves shows averages for groups of 32 points .grey - scale images show typical results for _ random _ realizations .central potentials are and .[ deltaphihj ] ] to 0.10 between -body and smoothed potentials for tapered hernquist ( left ) and jaffe ( right ) models , normalized by central potential .vertical dashed lines show value of .points are results for uniform realizations ; jagged curves shows averages for groups of 32 points .grey - scale images show typical results for _ random _ realizations .central potentials are and .[ deltaphihj ] ] fig .[ deltaphihj ] shows results from direct - sum potential calculations for tapered hernquist and jaffe models , using units with . in each case , the underlying density profile was represented with equal - mass particles , and the potential was measured at points uniformly distributed in between and .the points show results for uniform radial sampling . 
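The uniform ('quiet') radial sampling just described is a one-liner for profiles whose cumulative mass inverts analytically. The sketch below assumes the mid-point rule M(r_i) = (i - 1/2) M/N, one natural reading of the prescription above, applies it to an untapered Hernquist model, and evaluates the softened central potential by panel summation; swapping the uniform mass fractions for uniform random numbers recovers the fluctuating Monte-Carlo estimate of the central potential.

import numpy as np

def hernquist_radii_quiet(N, a=1.0):
    """Quiet radii: invert M(<r)/M = (r / (r + a))^2 at mass fractions (i - 1/2)/N."""
    frac = (np.arange(N) + 0.5) / N
    q = np.sqrt(frac)
    return a * q / (1.0 - q)

G, eps, N = 1.0, 1.0 / 32, 65536
r = hernquist_radii_quiet(N)
m = 1.0 / N                                          # equal-mass particles, total mass 1
phi0 = -G * m * np.sum(1.0 / np.sqrt(r**2 + eps**2)) # softened central potential
print(phi0)
# frac = np.random.rand(N) instead gives the Monte-Carlo version, with O(1/sqrt(N)) scatter

Probing the potential at test points independent of the particles, as in fig. [deltaphihj], then isolates the non-radial fluctuations.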
while non - radial fluctuations create scatter in , the distribution is fairly symmetric about the line .the jagged curves are obtained by averaging over radial bins each containing points .these averages fall near zero , demonstrating very good agreement between the -body results and the potentials calculated from the smoothed density profiles . for comparison , the grey - scale images in fig .[ deltaphihj ] display representative results for _ random _ realizations of each density profile . in these realizations ,the radius of particle is computed by solving , where is a random number with yields systematically different values .the results shown here use the tausworthe generator ` taus2 ` . ] uniformly distributed between and . to examine the range of outcomes , random realizations of each model were generated and ranked by central potential ; since particle radii are chosen independently , the central limit theorem implies that has a normal distribution .shown here are the percentile and percentile members of these ensembles ; half of all random realizations lie between the two examples presented in each figure .note that these examples deviate from the true potential by fractional amounts of .obviously , it s impossible to detect discrepancies between and of less than one part in using random realizations with .it s instructive , not to mention disconcerting , to try reproducing fig .[ deltaphihj ] using a tree code instead of direct summation .tree codes employ approximations which become less accurate for ; these systematically bias computed potentials and accelerations ( see appendix b ) . for example , the code which will shortly be used for dynamical tests , run with an opening angle , yields central potentials which are too deep by a few parts in , depending on the system being modeled .this systematic error can not be ` swept under the carpet ' when comparing computed and predicted potentials at the level of precision attempted here .constructing equilibrium configurations is an important element of many -body experiments .approximate equilibria may be generated by a variety of ad hoc methods , but the construction of a true equilibrium -body model amounts to drawing particle positions and velocities from an equilibrium distribution function .however , a configuration based on a distribution function ( df ) derived without allowing for softening will _ not _ be in equilibrium if it is simulated with softening .assume the model to be constructed is spherical and isotropic .broadly speaking , there are two options : ( a ) adopt a df which depends on the energy , and solve poisson s equation for the gravitational potential , or ( b ) adopt a mass model , and use eddington s ( ) formula to solve for the df .if softening is taken into account , option ( a ) becomes somewhat awkward , since the source term for poisson s equation ( [ eq : smooth - poisson ] ) is non - local - body potentials which implements option ( a ) .on the other hand , option ( b ) is relatively straightforward ( e.g. , * ? ? 
?starting with a desired density profile , the first step is to compute the smoothed density and mass profiles and , respectively .since everywhere , equation ( [ eq : potential - equation ] ) guarantees that the smoothed potential is a monotonically increasing function of .it is therefore possible to express the underlying density profile as a function of , and compute the df : note that in , the quantity is the _ underlying _ density , while is the _ smoothed _ potential , related by poisson s equation to the smoothed density . in effect , the smoothed potential is taken as a given , and ( [ eq : eddington - formula ] ) is used to find what will hereafter be called the _ smoothed distribution function _ ; with this df , the underlying profile is in equilibrium in the adopted potential ( e.g. , * ? ? ?* ; * ? ? ?conversely , setting yields the _ self - consistent distribution function _ which describes a self - gravitating model with the underlying profile . ; dashed , dot - dashed , and dotted curves : for , , and , respectively .[ dfhj ] ] to 0.10 ; dashed , dot - dashed , and dotted curves : for , , and , respectively . [ dfhj ] ] fig .[ dfhj ] presents dfs for tapered hernquist and jaffe models . in each casethe solid line shows the self - consistent df ; these match the published dfs over almost the entire energy range , deviating only for where tapering sets in .smoothed dfs for jaffe models appear very different from their self - consistent counterpart .the latter has a logarithmic , infinitely deep potential well , which effectively confines material with constant velocity dispersion in a cusp .the characteristic phase - space density diverges as ( ie , as ) , but only because does . with potential well is harmonic at small , and ca nt confine a cusp unless the local velocity dispersion scales as ; thus the phase - space density now diverges as .moreover , the domain of is limited to to , where .thus , instead of growing exponentially as a function of , the smoothed df abruptly diverges at some finite energy . by comparison ,smoothed dfs for hernquist models look similar to the self - consistent df .the latter has a potential well of finite depth , and the smoothed profiles generate wells which are only slightly shallower . as the left panel of fig .[ dfhj ] shows , all the dfs asymptote to as . however , the run of velocity dispersion with is different ; the self - consistent model has , implying . in contrast, the smoothed models have , implying .one consequence is that the way in which as is different in the smoothed and self - consistent hernquist models .the self - consistent model has a linear potential as small , and thus . by comparison ,the models based on smoothed potentials have harmonic cores , and as a result , .( this difference is not apparent in fig .[ dfhj ] but becomes obvious when is plotted against . ) in this respect , the use of a smoothed potential effects a non - trivial change on hernquist models : is a different power - law of . coincidentally ,the smoothed jaffe models have , just like the self - consistent hernquist model .-body simulations are useful to show that the distribution functions just constructed are actually in dynamical equilibrium with their smoothed potentials . for each model and value, two ensembles of three random realizations were run . 
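Before describing those runs, here is a minimal numerical sketch of the inversion in eq. ([eq:eddington-formula]): tabulate the underlying density against the smoothed relative potential ψ = -φ_ε, spline ρ(ψ), and apply Eddington's formula f(E) ∝ ∫_0^E (d²ρ/dψ²) dψ / √(E - ψ). The boundary term involving dρ/dψ at ψ = 0 is dropped, which is legitimate when the density falls off faster than the potential at large radii, as it does for the tapered models used here; spline order and quadrature settings are our choices.

import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline
from scipy.integrate import quad

def smoothed_df(psi_smooth, rho_under):
    """Eddington inversion: isotropic f(E) for the underlying density rho_under(r)
    in the smoothed relative potential psi_smooth(r) = -phi_eps(r), both given as
    arrays tabulated on the same radial grid.  The psi = 0 boundary term is dropped."""
    order = np.argsort(psi_smooth)                            # spline wants increasing abscissa
    rho_of_psi = InterpolatedUnivariateSpline(psi_smooth[order], rho_under[order], k=3)
    d2rho = rho_of_psi.derivative(2)
    def f(E):
        val, _ = quad(lambda p: d2rho(p) / np.sqrt(E - p), 0.0, E, limit=200)
        return val / (np.sqrt(8.0) * np.pi**2)
    return f

# setting psi_smooth to the self-consistent relative potential instead recovers the usual DF

The realizations described next draw their phase-space coordinates from this f.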
in one ensemble ,the initial conditions were generated using the self - consistent df .the other ensemble used initial conditions generated from the smoothed df , which allows for the effects of softening .each realization contained equal - mass particles .initial particle radii were selected randomly by solving as described above .initial particle speeds were selected randomly by rejection sampling from the distributions or , where the former assumes the self - consistent df , and the latter a smoothed df .position and velocity vectors for particle are obtained by multiplying and by independent unit vectors drawn from an isotropic distribution . in effect , this procedure treats the 6-d distribution function as a probability density , and selects each particle s coordinates independently .simulations were run using a hierarchical -body code .an opening angle of , together with quadrupole moments , provided forces with median errors .particles within have much larger force errors , although these seem to have limited effect in practice ( appendix b ) .trajectories were integrated using a time - centered leap - frog , with the same time - step for all particles ( see 4.3.1 ) .all simulations were run to , which is more than sufficient to test initial equilibrium .-body simulations of hernquist ( top row ) and jaffe ( bottom row ) models , each run with the value labeled .solid ( dotted ) curves show results for initial conditions generated from smoothed ( self - consistent ) dfs .light grey bands show expected variation in central potential for independent particles .[ phievol ] ] fig .[ phievol ] shows how the potential well depth of each simulation evolves as a function of time . here, well depth is estimated by computing the softened gravitational potential of each particle and taking the minimum ( most negative ) value .( this is more accurate than evaluating the potential at the origin since the center of the system may wander slightly during a dynamical simulation . )to better display the observed changes in , the horizontal axis is logarithmic in time .most of the ensembles set up without allowing for softening ( dotted curves in fig .[ phievol ] ) are clearly not in equilibrium . in all three of the jaffe models ( bottom row ) ,the potential wells become dramatically shallower on a time - scale comparable to the dynamical time at .the reason for this is evident .the self - consistent jaffe model has a central potential diverging like as ; this potential can confine particles with finite velocity dispersion at arbitrarily small radii .however , the relatively shallow potential well of a smoothed jaffe model can not confine these particles ; they travel outward in a coherent surge and phase - mix at radii of a few .their outward surge and subsequent fallback accounts for the rapid rise and partial rebound of the central potential .similar although less pronounced evolution occurs in the hernquist models ( top row ) with and possibly with as well .only the self - consistent hernquist models run with appear truly close to equilibrium .in contrast , all of the ensembles set up with smoothed dfs ( solid curves in fig . [ phievol ] ) are close to dynamical equilibrium . in equilibrium , gravitational potentials fluctuate as individual particles move along their orbits .if particles are uncorrelated , the amplitude of these fluctuations should be comparable to the amplitude seen in an ensemble of independent realizations . 
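The speed-sampling step can be written as a short rejection loop. The sketch below draws a speed from p(v) ∝ v² f(ψ - v²/2) at a given relative potential ψ, taking the escape speed √(2ψ) as the upper limit and bounding the envelope with a simple grid scan (both the grid size and the 10 per cent safety margin are implementation details not given above), and pairs the result with isotropic unit vectors for positions and velocities.

import numpy as np

def sample_speed(f, psi, rng):
    """One speed drawn from p(v) ~ v^2 f(psi - v^2 / 2) by rejection sampling."""
    vmax = np.sqrt(2.0 * psi)                                 # local escape speed
    vg = np.linspace(0.0, vmax, 257)[1:]
    bound = 1.1 * max(v * v * f(psi - 0.5 * v * v) for v in vg)   # crude envelope from a grid scan
    while True:
        v = rng.uniform(0.0, vmax)
        if rng.uniform(0.0, bound) < v * v * f(psi - 0.5 * v * v):
            return v

def isotropic_unit_vectors(n, rng):
    u = rng.normal(size=(n, 3))
    return u / np.linalg.norm(u, axis=1, keepdims=True)

# particle i:  position = r_i * u1_i,  velocity = sample_speed(f, psi(r_i), rng) * u2_i

Whether ensembles built this way really sit at the expected fluctuation level is what the comparison below checks.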
to check this , realizations of each model were generated ; central gravitational potentials were evaluated using the same tree algorithm and parameters used for the self - consistent simulations .the grey horizontal bands show a range of around the average central potential for each model and choice of . some of the simulations set up using smoothed dfs wander slightly beyond the range .however , none of them exhibit the dramatic evolution seen in the cases set up using self - consistent dfs . for simulations run with , , and , respectively ; each is displaced downward by one additional unit in for clarity .[ rhoprofj ] ] the central potential is relatively insensitive to changes in the mass distribution on scales . to examine small - scale changes directly , density profiles measured from the initial conditions were compared to profiles measured at time units .these profiles were derived as follows .first , sph - style interpolation with an adaptive kernel containing particles was used to estimate the density around each particle .next , the centroid position of the highest - density particles was determined .finally , a set of nested spherical shells , centered on , were constructed ; each shell was required to contain at least particles and have an outer radius at least times its inner radius .[ rhoprofj ] summarizes results for jaffe models , which display the most obvious changes .the density of each shell is plotted against the average distance from of the particles it contains . in each panel ,the top set of curves compare initial ( ) numerical results with the underlying tapered jaffe model , always represented by a light grey line .profiles from three independent -body realizations of each model are overplotted . while some scatter from realization to realization is seen , the measured densities track the underlying profile throughout the entire range plotted .the outermost point is at times the radius of the innermost one ; there are not enough particles to obtain measurements at smaller or larger radii . 
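A rough reimplementation of that measurement pipeline is sketched below: a k-nearest-neighbour estimate of the local density, a density-weighted centroid of the densest particles as the centre, and nested shells that grow until they hold at least nmin particles and span at least a factor rfac in radius. The values of k, nmin, and rfac are not given above, so the defaults here are illustrative, and equal particle masses are assumed.

import numpy as np
from scipy.spatial import cKDTree

def shell_density_profile(pos, mass, k=32, nmin=1000, rfac=1.25):
    """Shell-averaged density profile centred on the densest region (equal masses assumed)."""
    d, _ = cKDTree(pos).query(pos, k=k)
    local = 3.0 * k * mass.mean() / (4.0 * np.pi * d[:, -1]**3)     # crude kNN density estimate
    dense = np.argsort(local)[-k:]
    center = np.average(pos[dense], axis=0, weights=local[dense])

    r = np.sort(np.linalg.norm(pos - center, axis=1))
    prof, lo, r_in = [], 0, r[0]
    while r.size - lo >= nmin:
        hi = lo + nmin
        while hi < r.size and r[hi - 1] < rfac * r_in:              # enforce minimum radial span
            hi += 1
        r_out = r[hi - 1]
        vol = 4.0 / 3.0 * np.pi * (r_out**3 - r_in**3)
        prof.append((r[lo:hi].mean(), mass.mean() * (hi - lo) / vol))
        lo, r_in = hi, r_out
    return np.array(prof)                                           # columns: mean radius, density

The lower sets of curves in fig. [rhoprofj] are measured in the same way at later times.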
ranged below the top curves in fig .[ rhoprofj ] are numerical results at for softening lengths , , and , each shifted downward by one more unit in .again , profiles from three independent simulations are overplotted to illustrate run - to - run variations .simulations set up using the self - consistent df ( left panel ) show significant density evolution ; their initial power - law profiles are rapidly replaced by cores of roughly constant density inside .in contrast , simulations set up using smoothed dfs ( right panel ) follow the initial profile down to ( although density evolution occurs on smaller scales ) .this shows that a careful set - up procedure can maintain the initial density profile even on scales much smaller than the softening length .a similar plot for the hernquist models confirms that most of these simulations start close to equilibrium .hernquist models set up using smoothed dfs do nt appear to evolve at all , although this statement should be qualified since the profiles of these models ca nt be measured reliably on scales much smaller than .models set up using the self - consistent df and run with undergo some density evolution ; their profiles fall below the underlying hernquist model at , although they continue rising to the innermost point measured .simulations run with or less display no obvious changes down to scales of .it appears that the jaffe models set up using smoothed dfs are not _ completely _ free of long - term evolution .the right - hand panel of fig .[ rhoprofj ] shows that the peak density as measured using a fixed number of particles falls by roughly an order of magnitude by .moreover , a close examination of fig . [ phievol ] turns some cases with a gradual decrease in potential well depth ; in the smoothed jaffe models with , for example , exhibits an upward trend of percent per unit time. this evolution may not be due to any flaw in the initial conditions ; the central cusps of such models , which are confined by _ very _ shallow harmonic potentials , are fragile and easily disrupted. there may be more than one mechanism at work here ; the rate of potential evolution appears to be inversely proportional to particle number , while the rate of density evolution is independent of .a full examination of this matter is beyond the scope of this paper .selecting the time - step for an -body simulation is a non - trivial problem .while the choice can usually be justified post - hoc by testing for convergence in a series of experiments with different time - steps , it s clearly convenient to be able to estimate an appropriate a priori .a general rule governing such estimates is that the time - step should be smaller than the shortest dynamical time - scale present in the simulation .the central density of a smoothed density profile defines one such time - scale . within the nearly constant - density core of a smoothed profile , the local orbital period is ; this is the shortest orbital period anywhere in the system .numerical tests show the leap - frog integrator is well - behaved if ( conversely , it becomes unstable if ) . among the models simulated here ,the jaffe model with has the highest smoothed central density ; for this model , . given this density , and .the time required for a fast - moving particle to cross the core region defines another time - scale .if is depth of the central potential well , the maximum speed of a bound particle is , and the core crossing time is .the smoothed jaffe model with has the deepest potential well . 
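These two time-scales can be packaged as below. The sketch assumes the orbital period of a uniform sphere, T = √(3π / (G ρ_0)), for the core, and a crossing time 2ε / v_max with v_max = √(2|φ(0)|); the exact prefactors and the safety factor eta are assumptions rather than the paper's values. The suggestion is rounded down to a power of two, as is done below when fixing the production time-step.

import numpy as np

def timestep_estimates(rho0, phi0, eps, G=1.0, eta=1.0 / 16):
    """Core orbital period, core crossing time, and a power-of-two time-step suggestion."""
    T_core  = np.sqrt(3.0 * np.pi / (G * rho0))          # uniform-sphere orbital period (assumed form)
    t_cross = 2.0 * eps / np.sqrt(2.0 * abs(phi0))       # fast particle crossing the core (assumed form)
    dt = eta * min(T_core, t_cross)
    return T_core, t_cross, 2.0 ** np.floor(np.log2(dt))

The smoothed Jaffe model with the deepest well provides the strictest test of these estimates.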
for this model ,tests of the leap - frog with fast - moving particles on radial orbits show that yields good results , but time - steps a few times longer result in poor energy conservation as particles traverse the core region .( the relationship between and the maximum acceptable time - step may be somewhat model - dependent . ) thus , for the jaffe model with , both the local criterion based on and the global criterion based on yield similar constraints , one can show that for any smoothed jaffe model ; both criteria yield similar constraints almost independent of . for smoothed hernquist models , on the other hand , ;the constraint based on is generally stricter . ] on .it s convenient to round down to the next power of two , implying .this corresponds to , which is somewhat conservative but helps insure that non - equilibrium changes will be accurately followed . to seeif this time - step is reasonable , realizations of this model were simulated with various values of between to . at the lower end of this range , the effects of an over - large time - step manifest quickly ;global energy conservation is violated , and the measured central potential drifts upward over time ( even though the initial conditions , generated from , are near equilibrium ) . with a time - step , for example ; the potential well becomes percent shallower during the first two time - steps , and by its depth has decreased by percent .these simulations also violate global energy conservation , becoming percent less bound by .integration errors are reduced but not entirely eliminated with a time - step ; by , the potential well becomes percent shallower , while total energy changes by percent . with a time - step of or less ,global energy conservation is essentially perfect , and variations in appear to be driven largely by particle discreteness as opposed to time - step effects .plots analogous to fig .[ rhoprofj ] show that the simulations with time - steps as large as reproduce the inner cusps of jaffe models just as well as those with . with this time - step, individual particles may not be followed accurately , but their aggregate distribution is not obviously incorrect . on the other hand ,a time - step yields density profiles which fall below the initial curves for .softening and smoothing are mathematically equivalent . while the particular form of softening adopted in ( [ eq : nbody ] ) corresponds to smoothing with a plummer kernel ( [ eq : plummer - kernel ] ) , other softening prescriptions can also be described in terms of smoothing operations ( e.g. * ? ? ?there are two conceptual advantages to thinking about softening as a smoothing process transforming the underlying density field to the smoothed density field .first , since the gravitational potential is related to by poisson s equation , the powerful mathematical machinery of classical potential theory becomes available to analyze potentials in -body simulations .second , focusing attention on smoothing makes the source term for the gravitational field explicit . 
from this perspective, smoothing is not a property of the particles , but a separate step introduced to ameliorate singularities in the potential .particle themselves are points rather than extended objects ; this insures that their trajectories are characteristics of ( [ eq : vlasov ] ) .plummer smoothing converts power - laws to cores .at radii the density profile is essentially unchanged , while at the density approaches a constant value equal to the density of the underlying model at times a factor which depends only on .for the case , this factor is unity and everywhere .the effects of plummer smoothing on astrophysically - motivated models with power - law cusps , such as the jaffe , hernquist , and nfw profiles , follow for the most part from the results for pure power - laws .in particular , for , where is the profile s scale radius , the power - law results are essentially ` grafted ' onto the underlying profile . on the other hand , for ,the inner power - law is erased by smoothing .smoothing provides a way to predict the potentials obtained in -body calculations to an accuracy limited only by fluctuations .these predictions offer new and powerful tests of -body methodology , exposing subtle systematic effects which may be difficult to diagnose by other means . given an underlying density profile , it s straightforward to construct an isotropic distribution function such that is in equilibrium with the potential generated by its smoothed counterpart .such distribution functions can be used to generate high - quality equilibrium initial conditions for -body simulations ; they should be particularly effective when realized with ` quiet start ' procedures .systems with shallow central cusps , such as hernquist and nfw models , may be set up fairly close to equilibrium without taking softening into account as long as is not too large .however , it appears impossible to set up a good -body realization of a jaffe model without allowing for softening .it s true that realizations so constructed do nt reproduce the actual dynamics of the underlying models at small radii ( * ? ? ?* footnote 8) ; to obtain an equilibrium , the velocity dispersion is reduced on scales . but realizations set up _ without _ softening preserve neither the dispersion nor the density at small radii , and the initial relaxation of such a system ca nt be calculated a priori but must be simulated numerically . on the whole , it seems better to get the central density profile right on scales , and know how the central velocity dispersion profile has been modified . even if the dynamics are not believable within , the ability to localize mass on such scales may be advantageous in modeling dynamics on larger scales .mathematica code to tabulate smoothed models is available at + ` http://www.ifa.hawaii.edu/faculty/barnes/research/smoothing/ ` .i thank jun makino and lars hernquist for useful and encouraging comments , and an anonymous referee for a positive and constructive report .mathematica rocks .barnes , j.e .1998 , ` dynamics of galaxy interactions ' , in _ galaxies : interactions and induced star formation _ ,d. friedli , l. martinet , & d. pfenniger .berlin , springer , p. 275394 debattista , v.p . & sellwood , j.a .2000 , ` constraints from dynamical friction on the dark matter content of barred galaxies ' , apj , * 543 * , 704721the power - law density and cumulative mass profiles are plummer smoothing converts power laws with to finite - density cores . 
at smoothed density is nearly constant and close to the smoothed central density . within this constant - density region ,the smoothed mass profile is approximately at , on the other hand , smoothing has little effect on the mass profile , so .interpolating between these functions yields an approximate expression for the smoothed mass profile : where the shape parameter determines how abruptly the transition from one function to the other takes place .this expression can be rearranged to give the smoothed density profile is obtained by differentiating the mass profile : profile , computed for using ( [ eq : approx - smooth - mass ] ) and ( [ eq : approx - smooth - rho ] ) .dark curves show results for ; light grey solid curves show errors in density only for ( above ) and ( below ) .[ errorcusp2 ] ] to 0.10 profile , computed for using ( [ eq : approx - smooth - mass ] ) and ( [ eq : approx - smooth - rho ] ) .dark curves show results for ; light grey solid curves show errors in density only for ( above ) and ( below ) .[ errorcusp2 ] ] figs .[ errorcusp1 ] and [ errorcusp2 ] present tests of these approximations for and power - laws , respectively . as in figs .[ cusp1 ] and [ cusp2 ] , the smoothed density profile was computed with ; for other values of , the entire pattern simply shifts left or right without changing shape or amplitude .dashed curves show relative errors in smoothed mass , , while solid curves are relative errors in smoothed density .the value used for each dark curve is the value which minimizes evaluated at points distributed uniformly in between and . in light grey ,plots of for two other illustrate the sensitivity to this parameter . comparing these plots, it appears that the approximation works better for the power - law than it does for , but even in the latter case the maximum error is only % .because ( [ eq : approx - smooth - mass ] ) modifies the underlying mass profile with a multiplicative factor , it can also be used to approximate effects of softening on non - power - law profiles ( e.g. * ? ? ?* ) ; for this purpose , both and can be treated as free parameters and adjusted to provide a good fit .the resulting errors in density , which amount to a few percent near the softening scale , are undesirable but do nt appear to seriously compromise -body simulations with . for ,smoothing primarily modifies the part of these density profiles .this , together with the exact solution for the case given in 2.1 , suggests simple approximations for smoothed hernquist and nfw models : fig .[ errorrhohnfw ] plots the relative error in density , for both models , adopting . for other values of , these errors scale roughly as . the general behavior of these approximations is readily understood .overall , is more accurate than since the nfw profile is closer to at all radii . 
both curves are approximately flat for , then reach minima for between and the profile scale radius .these minima arise because the smoothed density approaches or even slightly exceeds the underlying density ( see figs .[ rhoh ] and [ rhonfw ] ) , while ( [ eq : approx - rho - cusp1 ] ) always yields values below the underlying density .it s sometimes useful to have the cumulative mass for a smoothed profile .the approximate profiles in ( [ eq : approx - rho - cusp1 ] ) can be integrated analytically , although the resulting expressions are a bit awkward : and note that because the approximate profiles ( [ eq : approx - rho - cusp1 ] ) systematically underestimate the true smoothed densities , these expressions will likewise systematically underestimate the total smoothed mass . , plotted vs. radius , for the approximations given in ( [ eq : approx - rho - cusp1 ] ) .solid and dashed curves show results for hernquist and nfw profiles , respectively , computed for .[ errorrhohnfw ] ]tree codes reduce the computational cost of gravitational force calculation by making explicit approximations .the long - range potential due to a localized mass distribution with total mass and center of mass position is approximated as where the higher order terms include quadrupole and possibly higher - order moments ( dipole terms vanish because coincides with the center of mass ) . to implement softening ,this approximation is typically replaced with this works at large distances , but becomes inaccurate if . moreover , because the error is introduced at the _ monopole _level , higher - order corrections do nt repair the damage . to appreciate the problem , consider a sphere centered on with radius large enough to enclose . for ,the inward acceleration averaged over the surface of is easily computed using gauss s theorem : in other words , the monopole term is sufficient to calculate the inward acceleration averaged over the surface of _exactly_. suppose we want to compute for .again using gauss s theorem , we have is the smoothed mass within the sphere .as before , this is an exact equality .the tree code approximation ( [ eq : approx - pot - soft ] ) implies that the enclosed mass is this is correct if is simply a point mass located at .but if has finite extent , then the enclosed mass is _ always _ less than . as a result , ( [ eq : approx - pot - soft ] ) will systematically _ overestimate _ the inward acceleration and depth of the potential well due to . demonstrate a similar result by computing the softened potential of a homogeneous sphere ; they find is only the first term in a series . the inequality is easily verified for plummer softening .an analogous inequality is likely to hold for other smoothing kernels which monotonically decrease with .smoothing kernels with compact support may be better behaved in this regard . underwhat conditions are these errors significant ? for ` reasonable ' values of , most dynamically relevant interactions are on ranges where softening has little effect ; these interactions are not compromised since ( [ eq : approx - pot - soft ] ) is nearly correct at long range . only if a significant fraction of a system s mass lies within a region of size can these errors become important .this situation was not investigated in early tree code tests ( e.g. , * ? ? ?* ; * ? ? 
?* ) , which generally used mass models with cores instead of central cusps , and even heavily softened hernquist models do nt have much mass within one softening radius .on the other hand , jaffe models pack more mass into small radii ; a jaffe model with has almost percent of its mass within .jaffe models should be good test configurations for examining treecode softening errors .tests were run using tapered jaffe and hernquist models , realized using the same parameters ( , , , and ) used in the dynamical experiments ( 4.3 ) . in each model , the gravitational field was sampled at points drawn from the same distribution as the mass . at each test point , results from the tree code with opening angle , including quadrupole terms , were compared with the results of an direct - sum code . as expected , the tests with hernquist models showed relatively little trend of force calculation error with ,although the errors are somewhat larger for than for smaller values . in contrast , the tests with jaffe models reveal a clear relationship between softening length and force calculation accuracy . for simulations run with and .light grey , dark grey , and black show potential differences for , , and , respectively ; three independent realizations are plotted in each case .[ phidiffj ] ] for simulations run with and .light grey , dark grey , and black show potential differences for , , and , respectively ; three independent realizations are plotted in each case .[ phidiffj ] ] . here and are accelerations computed using a tree code and direct summation , respectively .the grey dots represent measurements of for individual test points , computed using .the pattern of errors suggests two regimes . at radii ( ) ,the points fall in a ` sawtooth ' pattern which reflects the hierarchical cell structure used in the force calculation . at smaller radii, on the other hand , the relative error grows more or less monotonically as .it appears that errors in the large- regime are due to neglect of moments beyond quadrupole order in computing the potentials of individual cells ; conversely , the errors in the small- regime are due to the tree code s inaccurate treatment of softening .the direction of the error vectors supports this interpretation ; in the large- regime they are isotropically distributed , while in the small- regime they point toward the center of the system . the jagged line threading through the dots in fig. [ accelerrorj ] , constructed by averaging test points in groups of , shows the overall relationship between acceleration error and radius for .similar curves are also plotted for ( above ) and ( below ) . at large radii ,all three curves coincide precisely , implying that force errors are independent of . going to smaller ,the curve for is the first to diverge , rising above the other two , next the curve for begins to rise , tracking the mean distribution of the plotted dots , and finally the curve for parallels the other two .each curve begins rising monotonically at a radius ; this is evidently where softening errors begin to dominate other errors in the force calculation . 
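The softening-related bias diagnosed above can be measured directly with Gauss's theorem: the inward radial acceleration averaged over a sphere of radius R equals G times the enclosed smoothed mass over R², and this can be compared with the monopole value G M R / (R² + ε²)^(3/2) that the approximation in eq. ([eq:approx-pot-soft]) effectively assumes. The Monte-Carlo sketch below does this for an arbitrary particle set; sample counts, softening, and the choice of centre are placeholders.

import numpy as np

def mean_radial_acceleration(pos, mass, R, eps, nsamp=4096, G=1.0, seed=0):
    """Inward radial acceleration averaged over a sphere of radius R about the centre
    of mass, from exact Plummer-softened direct summation.  By Gauss's theorem this
    equals G * (enclosed smoothed mass) / R**2."""
    rng = np.random.default_rng(seed)
    com = np.average(pos, axis=0, weights=mass)
    n = rng.normal(size=(nsamp, 3)); n /= np.linalg.norm(n, axis=1, keepdims=True)
    a_r = np.empty(nsamp)
    for k in range(nsamp):
        d = pos - (com + R * n[k])
        s = (np.sum(d * d, axis=1) + eps**2) ** 1.5
        a_r[k] = -np.dot(G * np.sum((mass / s)[:, None] * d, axis=0), n[k])   # inward component
    return a_r.mean()

# monopole value the tree code effectively assumes for the same group of particles:
# a_mono = G * mass.sum() * R / (R**2 + eps**2)**1.5

For an extended group of particles the measured average falls below the monopole value, which is the sense of the bias described above.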
at the softening radius ,all three curves show mean acceleration errors .are errors of this magnitude dynamically important ?in particular , could they explain some of the potential evolution seen in the runs set up with softening ( solid curves ) in fig .[ phievol ] ?one possible test is to re - run the simulations using smaller values ; decreasing from to or reduces the tree code acceleration error associated with softening by factors of or , respectively ( albeit at significant computational costs ) .the new runs were started using exactly the same initial conditions as their counterparts .this initially allows central potentials to be compared between runs to high accuracy , temporarily circumventing the effects of fluctuations .[ phidiffj ] compares jaffe model results for and .initially , the potential difference arises because the treecode systematically overestimates the potential well depth by an amount which is greater for larger values of .as the simulations run , the excess radial acceleration causes systems run with to contract relative to those run with , causing a further increase in potential well depth .this contraction takes place on a dynamical time - scale at , occurring first for the simulations ( light grey curves ) . at later times ,as trajectories in otherwise - identical simulations with different values diverge , short - term fluctuations in de - correlate and the dispersion in potential differences increases markedly .these results show that force - calculation errors can have measurable , albeit modest , effects on dynamical evolution . extrapolating results for a range of values to , it appears that for the _ dynamical _ response of these jaffe models due to excess radial acceleration deepens central potential wells by about percent . to test this extrapolation, it would be instructive to repeat these experiments with a direct - summation code .however , compared to the overall range of variations seen in fig .[ phievol ] , the perturbations due to force calculation errors seem relatively insignificant . in addition , density profiles measured from runs with or appear similar to those shown in fig .[ rhoprofj ] , again implying that treecode force calculation errors have little effect on the key results of this study . the best way to correct ( [ eq : approx - pot - soft ] ) for short - range interactions is not obvious . a simple , ad - hoc option is to reduce the effective opening angle on small scales .for example , accepting cells which satisfy where is the distance to the cell s center of mass , is the cell s size , measured along any edge , and is the distance between the cell s center of mass and its geometric center , reduces softening - relating treecode errors by a factor of at a modest cost in computing time .further experiments with similar expressions may produce better compromises between speed and accuracy .
In self-consistent N-body simulations of collisionless systems, gravitational interactions are modified on small scales to remove singularities and simplify the task of numerically integrating the equations of motion. This 'gravitational softening' is sometimes presented as an ad hoc departure from Newtonian gravity. However, softening can also be described as a smoothing operation applied to the mass distribution; the gravitational potential and the smoothed density obey Poisson's equation precisely. While 'softening' and 'smoothing' are mathematically equivalent descriptions, the latter has some advantages. For example, the smoothing description suggests a way to set up N-body initial conditions in almost perfect dynamical equilibrium. Keywords: methods: numerical -- galaxies: kinematics & dynamics
obesity is determined through the body mass index ( bmi ) which compares the weight and height of an individual via the formula weight(kg)/height(m ) .a bmi value of 30 is considered the obesity threshold .overweight but not obese is , and underweight is bmi.5 .our main measure in this work is the adult obesity prevalence of a county , , for a given year defined as the number of obese adults ( bmi ) in a county over the total number of adults in this county , .we use the data from the usa center for disease control ( cdc ) downloaded from .cdc provides an estimate of the obesity country - wide since 1970 , at the state level from 1984 to 2009 , and at the county level from 2004 to 2008 .the study of the correlation function requires high resolution data .therefore , we use data defined at the county level and restrict our study of obesity and diabetes to the available period 2004 - 2008 .other indicators are provided by different agencies at the county level for longer periods .the datasets analyzed in this paper were obtained from the websites as indicated below .they can be downloaded as a single tar datafile from jamlab.org .the datasets consist of a list of populations and other indicators at specific counties in the usa at a given year .a graphical representation of the obesity data can be seen in fig .[ transition]b for usa from 2004 to 2008 , where each point in the maps represents a data point of obesity prevalence directly extracted from the dataset .the datasets that we use in our study have been collected from the following sources : * ( a ) population. * us census bureau .we download a number of datasets at the county level from http://www.census.gov/support/usacdatadownloads.html - for the population estimates we use the table pin030 . for the years 1969 - 2000 we use data supplied by bea ( bureau of economic analysis ) and for years 2000 - 2009 we use the file co-est2009-alldata.csv from http://www.census.gov/popest/counties/files/co-est2009-alldata.csv * ( b ) health indicators. * data downloaded from the centers for disease control and prevention ( cdc ) .the center provides county estimates between the years 2004 - 2008 for : - diagnosed diabetes in adults . - obesity prevalence in adults. - physical inactivity in adults .the estimates for obesity and diabetes prevalence and leisure - time physical inactivity were derived by the cdc using data from the census and the behavioral risk factor surveillance system ( brfss ) for 2004 , 2005 , 2006 , 2007 and 2008 .brfss is an ongoing , state - based , random - digit - dialed telephone survey of the u.s .civilian , non - institutionalized population aged 18 years and older .the analysis provided by the brfss is based on self - reported data , and estimates are age - adjusted on the basis of the 2000 us standard population .full information about the methodology can be obtained at http://www.cdc.gov/diabetes/statistics . *( c ) economic indicators. * we download data for economic activity through http://www.census.gov / econ/. the economic activity of each sector is measured as the total number of employees in this sector per county in a given year normalized by the population of the county . the north american industry classification system ( naics ) ( http://www.census.gov/epcd/naics02/naicod02.htm )assigns hierarchically a number based on the particular economy sector .the naics is the standard used by us statistical agencies in classifying business establishments across the us business economy . 
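For concreteness, the obesity indicator defined at the start of this section can be computed from individual-level records in a few lines. The sketch below assumes a hypothetical survey-style table with columns county, weight_kg, and height_m (the CDC/BRFSS estimates used here are already aggregated and age-adjusted, so this is purely illustrative) and returns the adult prevalence per county as the fraction with BMI ≥ 30.

import pandas as pd

def bmi(weight_kg, height_m):
    return weight_kg / height_m**2

def county_obesity_prevalence(survey: pd.DataFrame) -> pd.Series:
    """Adult obesity prevalence per county: fraction of respondents with BMI >= 30.
    'survey' is a hypothetical table with columns county, weight_kg, height_m."""
    obese = bmi(survey["weight_kg"], survey["height_m"]) >= 30
    return obese.groupby(survey["county"]).mean()

The economic indicators introduced next are handled analogously, as sector employee counts normalized by county population.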
in this studywe have used the following economic sectors with their corresponding naics : * whole economy .entire output of all economic sectors combined including all naics codes .* \31 . manufacturing .broad economic sector from textiles , to construction materials , iron , machines , etc .* \42 . wholesale trade .very broad sector including merchants wholesalers , motors , furniture , durable goods , etc . * \56 . administrative jobs and support services . *\445 . food and beverage stores . including all the food sectors , from supermarkets ,fish , vegetables meat markets , to restaurants and bars and other services to the food industry .supermarkets and other grocery ( except convenience ) stores .this is a subsection of naics 445 .food services and drinking places . a sub - sector of naics 72 which includes restaurants , cafeterias , snacks and nonalcoholic beverage bars , caterers , bars and drinking places ( alcoholic beverages ) .* ( d ) mortality rates. * we use data from the national cancer institute seer , surveillance epidemiology and end results downloaded from http://seer.cancer.gov/data/the institute provides mortality data from 1970 to 2003 , aggregated every three years .we analyze the mortality of a specific form of cancer per county normalized by the population of the county . here , we use mortality data for the following causes of death : - all cancer , independently of type . - lung cancer . we take advantage of the available data of population distribution around the globe defined in a square grid of 2.5 arc - seconds obtained from .this data allows to study the correlation functions of the population distribution for many countries . by using this data we are able to test the system size dependence of our resultswe find that the correlation length is proportional to the linear size of the country , .the linear size is calculated as total area = .we find that the correlation scales with the system size as discussed in the text .for instance , for the usa population distribution we find km , while a smaller country like uk has km . table [ xi ] shows a list of countries used in figs .[ correl]b and c to determine the correlation length of the correlation function of population density .the shape of the main obesity clusters and location of the red bonds and obesity epicenter are depicted in fig .[ percolation]c overlayed with a us map showing the boundaries of states and counties .figure [ percolation]c shows the obesity clusters obtained at , , , and , depicting the process of percolation . at , we plot the largest red cluster which is seen in the lower mississippi basin .the highest obesity prevalence is in greene county , al , which acts as the epicenter of the epidemic . 
at , we plot in yellow the second largest cluster in the atlantic region south of the appalachian mountains , and at plot the third largest cluster ( violet ) , which appears north of the appalachian mountains .we mark with black the three red bonds that make the mississippi cluster to grow abruptly by absorbing the clusters in the appalachian range .the red bonds are dekalb county , tn , mclean county , ky , and colquitt county , ga .this transition is reflected in the jump in the size of the largest cluster in fig .[ percolation]a .the same process is observed in the second percolation transition at , when the red bond , rich county , ut , joins the eastern and western clusters for a whole - country percolation .the scaling properties characterizing the geometry and distribution of clusters at percolation are : _ ( i ) _ the scaling of the number of boxes to cover the infinite spanning cluster versus the size of the boxes : defining the fractal dimension of the spanning cluster , . _( ii ) _ the number of boxes , , of size covering the perimeter of the infinite cluster : defining the hull fractal dimension , . _( iii ) _ the probability distribution of the area of clusters at percolation : characterized by the critical exponent .additionally , there is a scaling relation between the fractal dimension and the cluster distribution exponent : the exponents for percolation with long - range correlations have been calculated numerically in as a function of the correlation exponent using standard percolation analysis .there exist also a theoretical prediction based on renormalization group in for the correlation length exponent .the values of for the obesity clusters at the first percolation transition , , are reported in the main text .a direct computer simulations of long - range percolation for finds the values of the three geometric exponents to be , consistent with those reported here .we notice that the exponent is expected to be larger than 2 .this is due to mass conversation , assuming that the power - law eq .( [ tau ] ) extends to infinity at percolation in a infinite system size .the fact that we find a value slightly smaller than 2 for the obesity clusters , might be due to a finite size effect .we also notice that the values of the exponents obtained from correlated percolation at are not too far from those of uncorrelated percolation .therefore , the values of the exponents may not be enough to precisely compare the obesity clusters with long - range percolation clusters .however , they serve as an indication that the obesity clusters have the geometrical properties of clusters at a critical point , such as scaling behavior . furthermore , it could be possible that long - range correlated percolation may capture only part of the dynamics of the clustering epidemic .it could be , for instance , that higher order correlations , beyond the two - point correlation captured by , are also relevant in determining the value of the exponents . in this case ,our analysis should be supplemented by studies of correlation functions , beyond ..list of countries used to calculate the correlation length , , from the correlation function of the population density .data is obtained from . is calculated as the square root of the total area of the country . [cols="^,>,>",options="header " , ]our approach supplements covariance analysis . instead, we use physics concepts to shed a different view on the spreading of epidemics . 
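The geometric exponents listed above are measured directly from the thresholded maps; the mass fractal dimension, for instance, comes from counting the boxes of size b needed to cover the spanning cluster and fitting N(b) ∝ b^(-D). A minimal box-counting sketch for a cluster rasterized onto a 2-D boolean grid is given below; the box sizes and the gridding of the county map onto pixels are choices left to the user.

import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Box-counting fractal dimension of a cluster given as a 2-D boolean mask."""
    counts = []
    for b in sizes:
        ny, nx = (mask.shape[0] // b) * b, (mask.shape[1] // b) * b
        blocks = mask[:ny, :nx].reshape(ny // b, b, nx // b, b)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))   # occupied boxes N(b)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)        # N(b) ~ b^(-D)
    return -slope

The hull dimension and the cluster-area distribution are extracted analogously, from the perimeter pixels and from the areas of all labeled clusters.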
our approach can be extended to the study of the geographical spreading of any epidemic : from diabetes and lung cancer , to the spreading of viruses or real states bubbles , where the spatial spreading plays an important role .population correlations are naturally inherited by all demographic observables .even variables whose incidence varies randomly from county to county would exhibit spatial correlations in their absolute values , simply because its number increases in more populated counties and population locations are correlated .indeed , the absolute number of obese adults per county is directly proportional to the population of the county [ bettencourt , l. m. a. , _ et al . _growth , innovation , scaling , and the pace of life in cities .usa _ * 104 * , 7301 - 7306 ( 2007 ) ] .our aim is to measure spatial fluctuations on the frequency of incidence , independent of population agglomeration .thus , spatial correlations of all indicators ought to be calculated on the density defined , in the case of obesity , as rather than on the absolute number of obese people , , itself . the spatial correlations of the fluctuations of from the global average captures the collective behavior expressed in the power - law described in eq .( [ gamma ] ) .
non - communicable diseases like diabetes , obesity and certain forms of cancer have been increasing in many countries at alarming levels . a difficulty in the conception of policies to reverse these trends is the identification of the drivers behind the global epidemics . here , we implement a spatial spreading analysis to investigate whether non - communicable diseases like diabetes , obesity and cancer show spatial correlations revealing the effect of collective and global factors acting above individual choices . specifically , we adapt a theoretical framework for critical physical systems displaying collective behavior to decipher the laws of spatial spreading of diseases . we find a regularity in the spatial fluctuations of their prevalence revealed by a pattern of scale - free long - range correlations . the fluctuations are anomalous , deviating in a fundamental way from the weaker correlations found in the underlying population distribution . the resulting scaling exponents allow us to broadly classify the indicators into two universality classes , weakly or strongly correlated . this collective behavior indicates that the spreading dynamics of obesity , diabetes and some forms of cancer like lung cancer are analogous to a critical point of fluctuations , just as a physical system in a second - order phase transition . according to this notion , individual interactions and habits may have negligible influence in shaping the global patterns of spreading . thus , obesity turns out to be a global problem where local details are of little importance . interestingly , we find the same critical fluctuations in obesity and diabetes , and in the activities of economic sectors associated with food production such as supermarkets , food and beverage stores which cluster in a different universality class than other generic sectors of the economy . these results motivate future interventions to investigate the causality of this relation providing guidance for the implementation of preventive health policies . the world health organization has recognized obesity as a global epidemic . obesity heads the list of non - communicable diseases ( ncd ) like diabetes and cancer , for which no prevention strategy has managed to control their spreading . here , since the gain of excessive body weight is related to an increase in calories intake and physical inactivity a principal aspect of prevention has been directed to individual habits . however , the prevalence of ncds shows strong spatial clustering . furthermore , obesity spreading has shown high susceptibility to social pressure and global economic drivers . this suggests that the spread and growth of obesity and other ncds may be governed by collective behavior acting over and above individual factors such as genetics and personal choices . to study the emergence of collective dynamics in the spatial spreading of obesity and other ncds , we implement a statistical clustering analysis based on critical phenomenon physics . we start by investigating regularities in obesity spreading derived from correlation patterns of demographic variables . obesity is determined through the body mass index ( bmi ) obtained via the formula weight(kg)/[height ( m)] . the obesity prevalence is defined as the percentage of adults aged 18 years with a bmi 30 . 
we investigate the spatial correlations of obesity prevalence in the usa during a specific year using micro - data defined at the county - level provided by the us centers for disease control ( cdc ) through the behavioral risk factor surveillance system ( brfss ) from 2004 to 2008 ( see methods section [ data ] ) . the average percentage of obesity in usa was historically around 10% . in the early 80s , an obesity transition in the hitherto robust percentage , steeply increased the obesity prevalence ( fig . [ transition]a ) . the spatial map of obesity prevalence in the usa shows that neighboring areas tend to present similar percentages of obese population forming spatial ` obesity clusters ' . the evolution of the spatial map of obesity from 2004 to 2008 at the county level ( fig . [ transition]b ) highlights the mechanism of cluster growth . characterizing such geographical spreading presents a challenge to current theoretical physics frameworks of cluster dynamics . the equal - time two - point correlation function , , determines the properties of such spatial arrangement by measuring the influence of an observable in county ( e.g. , in this study : adult population density , prevalence of obesity and diabetes , cancer mortality rates and economic activity ) on another county at distance : here , is the average over all counties in the usa , is the standard deviation , is the euclidean distance between the geometrical center of counties and . large positive values of reveal strong correlations , while negative values imply anti - correlations , i.e. , two areas with opposed tendencies relative to the mean in obesity prevalence ( analogous to two domains with opposite spins in a ferromagnet ) . spatial correlations in any indicator ought to be referred to the natural correlations of population fluctuations ( fig . [ transition]c ) . to this aim , we first calculate for the population ( adults 18 years ) in usa counties , , by using the density : in eq . ( [ cr ] ) , where is the county area . population density correlations show a slow fall - off with distance ( fig . [ correl]a ) described by a power - law up to a correlation length : where is the correlation exponent . correlations become short - ranged when ( is the dimension of the map ) , and stronger as decreases . an ordinary least squares ( ols ) regression analysis on the population reveals the exponent ( averaged over 1969 - 2009 , fig . [ correl]a , error bars denote 95% confidence interval [ ci ] ) . the inset of fig . [ correl]a reveals a distance where correlations vanish , with km , representing the average size of the correlated domains . as we increase larger than , we consider correlations between areas in the east and west which are anti - correlated since for . to determine whether population correlations are scale - free , we calculate for geographical systems of different sizes using a high resolution grid of 2.5 arc - seconds , available for several countries from ( methods section [ gridded ] ) . the resulting correlations ( fig . [ correl]b ) reveal the same picture as for the usa at the county - level ( fig . [ correl]a ) , i.e. , a power - law up to a correlation length . we then measure for every country , and investigate whether , as expected with the laws of critical phenomena , it increases with the country size , . indeed , we obtain ( fig . [ correl]c and table [ xi ] ) , where is the correlation length exponent . this result implies that the fluctuations in human agglomerations are scale - free , i.e. 
, the only length - scale in the system is set by its size and the correlation length become infinite when . we interpret any departure from as a proxy of anomalous dynamics beyond the simple dynamics related to the population growth . when we calculate the spatial correlations of obesity prevalence ( , is the number of obese adults in county ) in usa from 2004 to 2008 we also find long - range correlations ( fig . [ correl]a ) . the crux of the matter is that the correlation exponent for obesity ( , averaged over 2004 - 2008 ) is smaller than that of the population , signaling anomalous growth . since smaller exponents mean stronger correlations , the increase in obesity prevalence in a given place can eventually spread significantly further than expected from the population dynamics . we also calculate fluctuations in variables which are known to be strongly related to obesity : diabetes and physical inactivity prevalence ( fraction of adults per county who report no physical activity or exercise , see methods section [ data ] ) . the obtained exponents are anomalous with similar values as in obesity ( fig . [ time]a ) . the system size dependence of for obesity and diabetes can not be measured directly , since there is no available micro - data for other countries . however , we find that of obesity and diabetes in usa is very close to of the population distribution ( inset of fig . [ correl]a ) . assuming that the equality of the correlation lengths holds also for other countries , then obesity and diabetes should satisfy eq . ( [ sf ] ) as well . the correlations in obesity are reminiscent of those in physical systems at a critical point of second - order phase transitions . physical systems away from criticality are uncorrelated and fluctuations in observables , e.g. , magnetization in a ferromagnet or density in a fluid , decay faster than a power - law , e.g. , exponentially . instead , long - range correlations appear at critical points of phase transitions where fluctuations are not independent and , as a consequence , fall - off more slowly . the existence of long - range correlations with rather than the noncritical exponential decay signals the emergence of strong critical fluctuations in obesity and diabetes spreading . the notion of criticality , initially developed for equilibrium systems , has been successfully extended to explain a wide variety of dynamics away from equilibrium ranging from collective behaviour of bird cohorts , biological and social systems to city growth , just to name a few . its most important consequence is that it characterizes a system for which local details of interactions have a negligible influence in the global dynamics . following this framework , the clustering patterns of obesity are interpreted as the result of collective behaviors which are not merely the consequence of fluctuations of individual habits . as a tentative way of addressing which elements of the economy may be related to the obesity spread , we calculate in economic indicators which are thought to be involved in the rise in obesity . except for transient phenomena , all studied indicators yield exponents that fall around or , representing two universality classes of weak and strong correlations , respectively ( figs . [ time]a and b ) . we begin by studying the correlations in the whole economy ( measured through the number of employees of all economic sectors per county population , see methods section [ data ] ) . we find close to ( over the period 1998 - 2009 , fig . 
[ time]b and c ) suggesting that the whole economy inherits the correlations in the population . generic sectors of the economy which are not believed to be drivers of obesity , e.g. , wholesalers , administration , and manufacturing , also display consistent with the population trend ( fig . [ time]b and c ) . interestingly , analysis of the spatial fluctuations in the economic activity of sectors associated to food production and sales ( supermarkets , food and beverages stores and food services such as restaurants and bars ) gives rise to the same anomalous value as obesity and diabetes ( , 1998 - 2009 , fig . [ time]b and c ) . although these results can not inform about the causality of these relations , they show that the scaling properties of the obesity patterns display a spatial coupling which is also expressed by the fluctuations of sectors of the economy related to food production . it is of interest to study other health indicators for which active health policies have been devoted to control the rate of growth . we apply the correlation analysis to lung cancer mortality defined at the county level and compare with cancer mortality due to all types ( methods section [ data ] ) . the spatial correlations of cancer mortality per county show an interesting transition in the late 70 s from anomalous strong correlations , , to weak correlations , , ( fig . [ time]a and d ) . this transition is visualized in the different correlated patterns of cancer mortality in 1970 and 2003 in fig . [ transition]d , i.e. , the clustering of the data is more profound in 1970 , while in 2003 it spreads more uniformly . the current status of all - cancer mortality fluctuations is close to the natural one , inflicted by population correlation . conversely , fluctuations in the mortality rate due to lung cancer from 1970 to 2003 have remained highly correlated and close to the obesity value , ( fig . [ time]a and [ transition]e ) , while the other types of cancer have become less correlated . this is an interesting finding since , similarly to obesity , lung cancer prevalence is affected by a global factor ( smoking ) and has been growing rapidly during the studied period . the most visible characteristic of correlations is the formation of spatial clusters of obesity prevalence . to quantitatively determine the geographical formation of obesity clusters , we implement a percolation analysis . the control parameter of the analysis is the obesity threshold , . an obesity cluster is a maximally connected set of counties for which exceeds a given threshold : . by decreasing , we monitor the progressive formation , growth and merging of obesity clusters . in random uncorrelated percolation , small clusters would be formed in a spatially uniform way until a critical value , , is reached , and an incipient cluster spans the entire system . instead , when we analyze the obesity clusters we observe a more complex pattern exemplified in fig . [ percolation]a and [ percolation]b for year 2008 . at large , the first cluster appears in the lower mississippi basin ( red in fig . [ percolation]a ) with epicenter in greene county , al . upon decreasing to 0.32 , new clusters are born including two spanning the south and north of the appalachian mountains , which acts as a geographical barrier separating the second and third largest clusters ( yellow and violet in fig . [ percolation]a , respectively ) . 
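as a concrete illustration of the cluster analysis just described , a minimal sketch is given below ( python ; it is not the code behind the reported results ) . the array prevalence ( county - level obesity fractions ) and the list neighbours ( index pairs of counties sharing a border ) are hypothetical inputs , and clusters are identified with a simple union - find .

import numpy as np

def find(parent, i):
    # path-halving find for the union-find structure
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def cluster_sizes(prevalence, neighbours, threshold):
    """sizes of maximally connected sets of counties with prevalence >= threshold."""
    prevalence = np.asarray(prevalence)
    occupied = prevalence >= threshold
    parent = np.arange(len(prevalence))
    for i, j in neighbours:                  # merge occupied neighbouring counties
        if occupied[i] and occupied[j]:
            ri, rj = find(parent, i), find(parent, j)
            parent[ri] = rj
    sizes = {}
    for i in np.nonzero(occupied)[0]:
        root = find(parent, i)
        sizes[root] = sizes.get(root, 0) + 1
    return sorted(sizes.values(), reverse=True)

def percolation_sweep(prevalence, neighbours, thresholds):
    """largest and second-largest cluster size while the threshold is lowered."""
    largest, second = [], []
    for t in thresholds:                     # thresholds given in decreasing order
        s = cluster_sizes(prevalence, neighbours, t) + [0, 0]
        largest.append(s[0])
        second.append(s[1])
    return np.array(largest), np.array(second)

tracking the two components while the threshold is lowered reproduces the signatures used in the text : a jump in the largest cluster and a peak in the second - largest cluster at the percolation point , while shuffling the prevalence values between counties gives the uncorrelated reference case .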
further lowering , we observe a percolation transition in which the appalachian clusters merge with the mississippi cluster . this point is revealed by a jump in the size of the largest component and a peak in the second largest component at ( fig . [ percolation]a ) as features of a percolation transition . at , three `` red bonds '' ( mclean county , ky , dekalb county , tn , and colquitt county , ga ) appear to connect the incipient largest cluster spanning the east of usa ( see fig . [ percolation]c and methods section [ red ] ) . as a comparison , when we randomize the obesity data by shuffling the values between counties , a single appears as a signature of a uncorrelated percolation process ( blue symbols in fig . [ percolation]a ) . obesity clusters in the west persist segregated from the main eastern cluster avoiding a full - country percolation due to low - prevalence areas around colorado state . finally , the east and west clusters merge at by a red bond ( rich county , utah ) producing a second percolation transition ; this time spanning the whole country ( see fig . [ percolation]a and c , where the whole spanning cluster is green , and methods section [ red ] ) . this cluster - merging process is a hierarchical percolation progression represented in the tree model in fig . [ percolation]b . to further inquire whether the spreading of obesity has the features of a physical system at the critical point , we examine the geometry and distribution of obesity clusters . for long - range correlated critical systems percolating through nearest neighbors in two dimensional maps , the geometrical structure gives rise to three critical exponents : the fractal dimension of the spanning cluster , , the fractal dimension of the hull , , and the cluster size distribution exponent , , analogous to zipf s law ( methods section [ som - percolation ] ) . for the percolating obesity cluster at displayed in the inset of fig . [ percolation]d , we confirm critical scaling with exponents : ( fig . [ percolation]d , e ) . taken together , these results show that obesity spreading behaves as a self - similar strongly - correlated critical system . in particular , a note of caution has to be raised since , even if the highest prevalence of obesity is localized to the south and appalachia , the scaling analysis indicates that the obesity problem is the same ( self - similar ) across all usa , including the lower prevalence areas . our results can not establish a causal relation between obesity prevalence and economic indicators : whether fluctuations in the food economy may impact obesity or , instead , whether the food industry reacts to obesity demands . however , the comparative similarities of statistical properties of demographic and economical variables serves to identify possible candidates which shape the epidemic . specifically , the observation of a common universality class in the fluctuations of obesity prevalence and economic activity of supermarkets , food stores and food services which cluster in a different universality class than simple population dynamics is in line with studies proposing that an important component of the rise of obesity is linked to the obesogenic environment regulated by food market economies . this result is consistent with recent research that relates obesity with residential proximity to fast - food stores and restaurants . 
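the exponent comparisons above all rest on the same correlation measurement ; a minimal sketch of how the correlation function and the ols estimate of its decay exponent can be computed from county - level data follows . the arrays values , x and y ( the observable and projected centroid coordinates ) , the distance binning and the fitting cutoff r_max are assumptions of this illustration rather than the authors ' exact choices .

import numpy as np

def correlation_function(values, x, y, r_bins):
    """equal-time two-point correlation of the normalised fluctuations,
    averaged over all pairs of counties in each distance bin."""
    v = np.asarray(values, float)
    v = (v - v.mean()) / v.std()                      # normalised fluctuations
    dx = np.subtract.outer(x, x)
    dy = np.subtract.outer(y, y)
    r = np.sqrt(dx**2 + dy**2)                         # pairwise centre-to-centre distances
    prod = np.outer(v, v)                              # fluctuation products
    centres, corr = [], []
    for lo, hi in zip(r_bins[:-1], r_bins[1:]):
        mask = (r >= lo) & (r < hi) & (r > 0)          # exclude self-pairs
        if mask.any():
            centres.append(0.5 * (lo + hi))
            corr.append(prod[mask].mean())
    return np.array(centres), np.array(corr)

def correlation_exponent(r_centres, corr, r_max):
    """ols fit of log corr = const - alpha * log r below an assumed cutoff r_max."""
    ok = (r_centres < r_max) & (corr > 0)
    slope, _ = np.polyfit(np.log(r_centres[ok]), np.log(corr[ok]), 1)
    return -slope

the same two functions can be applied to population density , obesity , diabetes or the economic indicators , so that the estimated exponents can be compared directly as in the discussion above .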
the present analysis based on clustering and critical fluctuations is a supplement to studies of association between people s bmi and food s environment based on covariance ( methods section [ covariance ] ) . we have detected potential candidates in the economy which relate to the spreading of obesity by showing the same universal fluctuation properties . these tentative relations ought to be corroborated by future intervention studies . 99 world health organization . obesity : preventing and managing the global epidemic . _ who obesity technical report series _ * 894 * ( world health organization geneva , switzerland , 2000 ) . butland , b. _ et al . _ , foresight tackling obesities : future choices - project report , 2nd edn . london : government office for science ( 2007 ) . hill , j. o. & peters j. c. environmental contributions to the obesity epidemic . _ science _ * 280 * , 1371 - 1374 ( 1998 ) . swinburn , b. a. , sacks , g. , hall , k. d. , mcpherson , k. , finegood , d. t. , moodie , m. l. & gortmaker , s. l. the global obesity pandemic : shaped by global drivers and local environments . _ lancet _ * 378 * , 804 - 814 ( 2011 ) . nestle , m. _ food politics : how the food industry influences nutrition and health . _ ( university of california press , revised edition , 2007 ) . christakis , n. a. & fowler , j. h. the spread of obesity in a large social network over 32 years . _ n. engl . j. med . _ * 357 * , 370 - 379 ( 2007 ) . block , j. , christakis , n.a . , omalley , a.j . & subramanian , s. proximity to food establishments and body mass index in the framingham heart study offspring cohort over 30 years . _ american journal of epidemiology _ * 174 * , 1108 - 1114 ( 2011 ) . caballero , b. & popkin , b. m. , editors . _ the nutrition transition : diet and disease in the developing world _ ( academic press , 2002 ) . cutler , d. m. , glaeser , e. l. & shapiro j. m. why have americans become more obese ? _ j. econ . perspect . _ * 17 * , 93 - 118 ( 2003 ) . haslam d. w. & james , w. p. obesity . _ lancet _ * 366 * , 1197 - 1209 ( 2005 ) . department of health and human services ( usa ) . healthy people 2010 : understanding and improving health . conference edition . ( washington , government printing office , 2000 ) . schuurman , n. , peters , p. a. & oliver , l. n. are obesity and physical activity clustered ? a spatial analysis linked to residential density . _ obesity _ * 17 * , 2202 - 2209 ( 2009 ) . michimi , a. & wimberly , m. c. spatial patterns of obesity and associated risk factors in the conterminous u.s . _ am . j. prev . med . _ * 39 * , e1-e12 ( 2010 ) . behavioral risk factor surveillance system survey data . atlanta , georgia : u.s . department of health and human services , centers for disease control and prevention . http://apps.nccd.cdc.gov/ddt_strs2/nationaldiabetesprevalenceestimates.aspx stanley , h. e. _ introduction to phase transitions and critical phenomena . _ ( oxford university press , oxford , 1971 ) . bunde , a. & havlin , s. ( editors ) . _ fractals and disordered systems _ ( springer - verlag , heidelberg , 2nd edition , 1996 ) . coniglio , a. , nappi , c. , russo , l. & peruggi , f. _ j. phys . a _ * 10 * , 205 - 209 ( 1977 ) weinrib , a. long - range correlated percolation . _ phys . rev . b _ * 29 * , 387 - 395 ( 1984 ) . prakash , s. , havlin , s. , schwartz , m. & stanley , h. e. structural and dynamical properties of long - range correlated percolation . _ phys . rev . a _ * 46 * , r1724 - 1727 ( 1992 ) . makse , h. a. , havlin , s. 
& stanley , h. e. modelling urban growth patterns . _ nature _ * 377 * , 608 - 612 ( 1995 ) . mora , t. & bialek , w. are biological systems poised at criticality ? _ j. stat . phys _ * 144 * , 268 - 302 ( 2011 ) . montogomery , d. c. , peck , e. a. introduction to linear regression analysis ( wiley , new york , 1992 ) . cavagna , a. , cimarelli , a. , giardina , i. , parisi , g. , santagati , r. stefanini , f. & viale , m. scale - free correlations in starling flocks . _ proc . natl . acad . sci . usa _ * 107 * , 11865 - 11870 ( 2010 ) . center for international earth science information network ( ciesin ) , columbia university ; and centro internacional de agricultura tropical ( ciat ) . 2005 . gridded population of the world , version 3 ( gpwv3 ) . palisades , ny : socioeconomic data and applications center ( sedac ) , columbia university . available at http://sedac.ciesin.columbia.edu/gpw . mokdad , a. h. _ et al . _ , prevalence of obesity , diabetes , and obesity - related health risk factors , 2001 . _ jama _ * 289 * , 76 - 79 ( 2003 ) . rozenfeld , h. d. , rybski , d. , gabaix , x. & makse , h. a. the area and population of cities : new insights from a different perspective on cities . _ american economic review _ * 101 * , 560 - 580 ( 2011 ) . stanley , h. e. , _ et al . _ scale - invariant correlations in the biological and social sciences . _ phil . mag . part b _ * 77 * , 1373 - 1388 ( 1998 ) . weinrib , a. & halperin , b. i. critical phenomena in systems with long - range - correlated quenched disorder . _ phys . rev . b _ * 27 * , 413 - 427 ( 1983 ) . swinburn , b. & figger , g. preventive strategies against weight gain and obesity . _ obesity reviews _ * 3 * , 289 - 301 ( 2002 ) . chang , v. w. & christakis , n.a . income inequality and weight status in u.s . metropolitan areas . _ social science and medicine _ * 61 * , 83 - 96 ( 2005 ) . * acknowledgements : * lkg and ham are supported by nsf-0827508 emerging frontiers , and pb and ms by human frontiers science program . we are grateful to h. nickerson for bringing our attention to this problem , and s. alarcn - daz and d. rybski for valuable discussions . we thank epiwork , arl and the israel science foundation for support . * methods *
to what extent ideal gas laws can be obtained from simple newtonian mechanics ? in this note , we present a simple one - dimensional simulation that suggest the answer is `` yes . '' .similar prior work includes the papers .they had a two dimensional model , in which two gases were seperated by a diabatic barrier .the way energy was communicated between the gases was via the barrier , in which after collisions , the particles which collided with the barrier in that time were reassigned equal energies , and momenta derived from these energies and their original momenta . in ,this work was used to simulate the carnot cycle .this paper gives a slightly different model . in this paper ,the barrier is a massless wall , which is pushed by the particles colliding into it , but the barrier keeps its vertical orientation so that only the horizontal positions of the gas particles matter in the dynamics of the barrier .an advantage to the approach given in this paper is that the motion is completely newtonian .for example , the motion is reversible , thus demonstrating that the second law of thermodynamics is not a hard and fast law , but rather is a trend generated by the pseudo - statistical nature of the deterministic dynamics .a thermally isolated container has a freely moving piston that splits the container into two parts , the left part and the right part . the piston is thermally conducting .the left part is filled with a monatomic gas a , and the right part of filled with a monatomic gas b. gas a has an atomic weight , and gas b has an atomic weight .initially , all the molecules have the same average velocity , picked independently from a symmetric random distribution .the piston is initially placed half way along the container .we place molecules of gas a randomly in the left half of the container , uniformly distributed , and we similarly place molecules of gas b in the right half of the container .the molecules obey newton s laws of motion , with elastic collisions .the objective of this note is to describe a numerical experiment which suggests that newton s laws of motion are all that are needed to predict the ideal gas laws , and that the collisions with the piston should be enough to transfer heat from the hotter gas to the colder gas .thus this note is in effect attempting to show that newtonian mechanics are completely sufficient to explain the second law of thermodynamics .molecules from the same gas are assumed sufficiently small so that collisions between them never occur .thus the molecules collide either with the walls of the container , or with the piston . if a molecule collides with a wall , it s velocity is simply reflected .however the piston has zero mass .thus if it is hit by a molecule , it simply travels with that molecule until it hits a molecule of the other gas .then the piston acts as a conduit for the two molecules to collide , conserving energy and momentum , as if they had the same -component in space .( -4,2)(4,2 ) ; ( 4,2)(4,-2 ) ; ( 4,-2)(-4,-2 ) ; ( -4,-2)(-4,2 ) ; ( -0.5,1.9)(-0.5,-1.9 ) ; ( -0.7,1.9)(-0.3,1.9 ) ; ( -0.7,-1.9)(-0.3,-1.9 ) ; ( -2.25,0 ) nodegas a ; ( 1.75,0 ) nodegas b ; from the ideal gas law , we have that each container , at least when in a state of quasi - equilibrium , should obey , where is the pressure , is the volume , is boltzmann s constant , is the number of molecules , and is the temperature. 
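the piston - mediated collision rule described above reduces to a standard one - dimensional elastic collision between a particle of gas a and a particle of gas b , conserving momentum and kinetic energy as if the two particles occupied the same horizontal position . a minimal sketch of the velocity update follows ( python rather than the c++ used for the actual experiment ; the names are illustrative ) .

def piston_collision(m_a, v_a, m_b, v_b):
    """elastic 1d collision of a gas-a and a gas-b particle mediated by the
    massless piston: total momentum and kinetic energy are conserved."""
    v_a_new = ((m_a - m_b) * v_a + 2.0 * m_b * v_b) / (m_a + m_b)
    v_b_new = ((m_b - m_a) * v_b + 2.0 * m_a * v_a) / (m_a + m_b)
    return v_a_new, v_b_new

def wall_collision(v):
    """reflection off a container wall simply reverses the velocity."""
    return -v

because the piston is massless , its own state does not enter the update ; between collisions it simply travels with whichever particle last pushed it , as described above .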
initially the temperature in the left hand part should be times cooler than the temperature in the right hand part .this is because the particles originally have the same average velocities , and hence the right hand side has total kinetic energy 100 times the kinetic energy of the left hand side , and kinetic energy is proportional to temperature .initially the pressure in the right hand side will be 100 times the left hand side , and so the piston should move quickly towards the left until the pressures are equalized . because this motion is quite fast , one might not expect conditions of quasi - equilibrium to be met , and so there will be some overshoot of this piston due to inertia of the gas molecules . next , because of the collisions between the molecules of the different gases , one might expect the kinetic energy to begin to spread evenly between the molecules in the left hand side and the right hand side .thus the piston should begin to move slowly towards the middle .to compute this numerically , we will completely neglect the vertical motions , and simply compute in one dimension .we let , and assign to each particle a one dimensional position and momentum .the only collisions that take place are between molecules and the left or right wall of the container , or between two molecules from the two different gases. a molecule from gas a will always be to the left of a molecule of gas b. molecules from the same gas will simply pass through each other .the computer program is as follows : 1 .assign to the particles of gas a positions uniformly picked from , and to the particles of gas b positions uniformly picked from ] .( the convention is that velocity is positive if the motion is to the right . )[ loop ] compute the following collision times : 1 .the time for each particle to collide with the walls or ; 2 .the time for a particle of gas a to collide with a particle of gas b. 4 .let be the smallest of all these collision times .advance the positions of all the particles to time using their given velocities .if represents a collision of a particle with a wall , multiply the velocity of that particle by .if represents a collision of a particle of gas a with velocity , and a particle of gas b with velocity , then replace these velocities by and , where 8 .go back to step .the program was carefully written so that floating point errors would not cause great error in the calculations .for example , when two particles collide , they might pass through each other by machine precision .but then when the collisions are recalculated , they will collide again in a very small amount of time .another place where care must be taken is when two collision events take place at exactly the same time .the collisions were stored in a database , implemented using c++ s standard library map . in this way , after a collision between two particles , or a particle and a wall , the only other collisions that need to be recalculated are those involving these two particles or particle and wall .the results , shown in figures [ plot1 ] , [ plot2 ] and [ plot3 ] , seem to bear out the prediction that newtonian mechanics is sufficient to obtain the ideal gas laws and the second law of thermodynamics . in figure [ plotxxx ], we plot the moving averages over a range of 20 time units , showing better than expected agreement with charles law .the moving averages were calculated using the following formula . 
for an ideal gas , maxwell s distribution would predict that the -componant of the velocities of the particles of either gas a or gas b should converge to a normal distribution .we decided to use the anderson - darling test described in .we assumed that both the mean and variance of the distributions were unknown .for each time , we calculated the modified statistic as described in table 1.3 in , and as case 4 in .this test assumes that the null - hypothesis is that the distribution is gaussian , and for a single application of the test , the hypothesis fails with 15% significance if , and fails with 1% significance if . in figure[ plotlr ] , we plot this statistic as a function of time .this is technically not a proper use of the statistic , as the statistic calculated at one particular time is not going to be statistically independent from the statistic calculated at another particular time .nevertheless , the graphs do strongly suggest that both gas a and gas b achieve a maxwellian distribution .as might be expected , the gas with the heavier molecules , gas b , takes longer to reach this distribution . statistic described in the text.,title="fig : " ] statistic described in the text.,title="fig : " ] another experiment that was performed was to run the numerics for 1000 time units , and then reverse the velocities , and then run the numerics backwards .it was found that in most cases that the data after 2000 time units matched the initial data to within about .however in one case the final data was completely different from the initial data . to help explain why it might be different just one time, the same experiment was performed many times , with only 100 particles and 100 time units .( the 1000/1000 experiment takes about 3.5 hours . ) in these cases small random perturbations were made to the data at the same time as the velocities were reversed . by adjusting the random fluctuations to a size of about was found that the initial data was recovered to within with about the same frequency as the final data being in complete disagreement with the initial data . in these experiments ,a careful log of the order of the collisions was made , and the times when the data agreed closely , the order of collisions in the first 100 time units was almost exactly the reverse of the order of collisions in the second 100 time units ( the order differing with at most a few pairs of adjacent collisions reversing their order ) . when the final data greatly disagreed with the initial data , at some point in time the collisions became very different .these all suggest that the apparent statistical nature of equilibrium is not due to numerical floating point errors .it also suggests that if the final data has sensitive dependence on initial conditions , that this only happens when the time elapsed , or the number of particles , is very large .the effect of `` chaos '' is not primarily responsible for the statistical nature of the equilibrium .in this section , we derive the well known relationship between temperature and volume of an ideal gas when subjected to an adiabatic expansion . pressure is a more difficult quantity to understand in our context , so we will leave it out of our discussion .we have gas particles in a one - dimensional container , whose right hand side is able to move .adiabatic means that when the particles collide with the sides , we assume that the collisions are elastic , and the mass of the side walls are effectively infinite . 
the volume , , is proportional to the distance , , between the left hand side and the right hand side .the temperature is proportional to the average left - right kinetic energy of the particles , that is , if the particles have masses , and left - right - coordinate of velocity , for , then the particles themselves do not interact with each other . for this reason , without loss of generality , we only need to perform the calculation for one particle .the only thermodynamic assumption we shall make is that the energy in each particle is spread between the left - right kinetic energy and the other energies equally .let us denote by the ratio of the total energy to the left - right energy of each particle .thus if the particle is a three dimensional monotomic gas , then , and if the particle is a three dimensional diatomic gas , then ( because of the extra rotational kinetic energy ) . and if the particle is a three dimensional diatomic gas with a bond that is elastic , then ( two extra degrees of freedom for the rotational kinetic energy , one extra degree of freedom for the kinetic energy of the bond oscillating , and one extra degree of freedom for the potential energy of the bond oscillating ) . the purpose of this section is to derive the well known adiabatic thermodynamic relation the right hand side wall will move uniformly at a velocity , and the particle will bounce back and forth between the left and right wall .we assume that the are much larger than , and in the end we shall take the limit as .we also assume that the time for a back and forth bounce is much smaller than the time scale that might vary .if is allowed to vary too quickly , then we can create a kind of maxwell s daemon " in which we can impart any energy we like to the particle .the mass of the wall is taken to be , which we will take to be effectively infinite .we denote the velocity of the particle before and after it collides with the right wall are and respectively . the velocity of the wall after the collision is , and we will soon see that .( 4,1)(4,-1 ) ; ( -4,-1)(-4,1 ) ; ( -3.1,0.5)node[above](4,0 ) ; ( 3.8,-0.05)node[below](-2.9,-0.5 ) ; ( 3.5,-1.5) node[below](4.5,-1.5 ) ; ( -4,1.5)node[above](4,1.5 ) ; let us consider one of the particles .conservation of momentum implies conservation of energy implies solving these equations we obtain now letting , we obtain the time required for the particle to bounce from the right hand wall , to the left hand wall , and back to the right hand wall is . in that time, the length of the wall changes , and the change in absolute value of the velocity of the particle is .then we obtain now let . then we obtain averaging over we obtain authors feel that there are many extensions to these kinds of numerical simulations that could provide more illustrations of certain properties of ideal gases . for example , in the first section we could perform the experiment assuming that a constant force is applied to the piston , or we could assume the piston has non - zero mass . 
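the single - particle argument above can also be checked numerically . for one particle bouncing elastically between a fixed wall and a wall receding at a speed much smaller than the particle speed , each bounce off the moving wall reduces the speed by twice the wall speed , so the product of particle speed and container length is an adiabatic invariant ; this is the purely one - dimensional content of the relation derived above , before the energy - sharing factor is included . a minimal event - driven check follows ( all parameter values are illustrative ) .

def adiabatic_check(v0=50.0, L0=1.0, u=0.01, n_cycles=2000):
    """one particle between a fixed wall at x = 0 and a right wall receding at
    speed u; with u much smaller than v the product v * L stays nearly constant."""
    v, L = v0, L0                      # particle speed and current wall separation
    for _ in range(n_cycles):
        t_right = L / (v - u)          # particle moving right at v catches the wall
        L += u * t_right               # wall position at the moment of impact
        v -= 2.0 * u                   # elastic bounce off a wall moving at speed u
        t_left = L / v                 # travel back to the fixed wall at x = 0
        L += u * t_left                # the wall keeps receding meanwhile
        # reflection at the fixed wall reverses the direction; the speed is unchanged
    return v * L                       # compare with v0 * L0

with the default numbers the container length grows severalfold while the returned product stays essentially equal to its initial value , in line with the averaged derivation above .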
potentially there is a large number of variations that could be performed . we also feel that the scenario described in the first section could be a place to look for plausible theorems that could be rigorously proved . in particular , it seems to the authors that equilibrium is achieved rather quickly , and probably much more quickly than could be proved under ergodicity assumptions on the phase space of solutions . perhaps equilibrium theorems could be proved without ergodicity assumptions . and of course any equilibrium result can only show that equilibrium is achieved most of the time , because , as our backwards - in - time calculations show , in this discrete setting the second law of thermodynamics is more of a trend than a law . adam galant , ryszard kutner , andrzej majerowski , heat transfer , newton s law of cooling and the law of entropy increase simulated by the real - time computer experiment in java , computational science iccs 2003 , lecture notes in computer science volume 2657 , 2003 , pp 45 - 53 .
the purpose of this note is to see to what extent ideal gas laws can be obtained from simple newtonian mechanics , specifically elastic collisions . we present simple one - dimensional situations that seem to validate the laws . the first section describes a numerical simulation that demonstrates the second law of thermodynamics . the second section mathematically demonstrates the adiabatic law of expansion of ideal gases .
numerical schemes used for solving the systems of equations of many - particle dynamics can have restrictions on the step and the interval of integration : when these grow , the schemes become unstable and no longer conserve the integrals of motion . as a result , when we simulate many - particle system behaviour on a sufficiently large time interval we have to decrease the integration step , which leads to a considerable increase in the amount of computation . the motion of a many - particle system in a field with a potential can be described by the system of hamiltonian equations with initial conditions , the unknowns being the particle coordinates and particle momenta in a coordinate space of given dimension ; the hamiltonian of the system is separable , with a symmetric and positive definite mass matrix and a field potential . the approach to the design of conservative difference schemes for solving the hamiltonian equations ( [ heq ] ) is based on the following stages . the first is the choice of an appropriate type of generating function , which fixes a definite family of symplectic difference schemes . a symplectic difference scheme corresponds to a canonical transformation of the canonical variables , and the solution of the hamiltonian system ( [ heq ] ) at a given time moment can be represented as a canonical transformation , which is why it conserves the value of the hamiltonian . the second is the use of `` forward '' and `` backward '' taylor expansions of the coordinate and momentum over a time step to obtain the corresponding generating function and the remaining values of the coordinate and momentum . finally , the explicit and implicit schemes obtained at the previous stage are combined into a symmetric symplectic scheme for solving the hamiltonian equations ( [ heq ] ) , written in terms of the coordinate increments and the hessian matrix of the field potential . the third - order difference scheme obtained by the approach described above [ 5 ] was tested by solving the system of equations ( [ heq ] ) with the hamiltonian of the so - called kepler two - body problem . results of the numerical calculations are compared for the symmetric symplectic scheme of third order and the well - known second - order velocity verlet scheme , for a given time step and a time interval up to t = 5000 . an approximate solution by the verlet scheme with a time step h=0.02 is used as the reference `` exact '' solution [ 5 ] . a new approach for constructing symmetric symplectic numerical schemes for solving hamiltonian systems of equations is proposed . the numerical schemes constructed by this approach produce a more stable and accurate solution of the hamiltonian system ( 1 ) and conserve the energy of the system better on a large interval of numerical integration , for a relatively large integration step , in comparison with the verlet method , which is commonly used for solving the equations of motion in molecular dynamics . * acknowledgments . * e.g.n . acknowledges dr . b. batgerel for computations and numerical results . the work was supported in part by the russian foundation for basic research , grant no . 15 - 01 - 06055 .
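as a point of reference for the comparison reported above , the second - order velocity verlet scheme can be sketched for the kepler two - body problem as follows . this is an illustration only and not the code behind the reported results : the units ( gm = 1 ) , the initial condition and the step h are assumptions , and the third - order symmetric symplectic scheme itself is not reproduced here .

import numpy as np

def kepler_acceleration(q, gm=1.0):
    """acceleration of the kepler problem, a = -gm * q / |q|**3."""
    return -gm * q / np.linalg.norm(q)**3

def velocity_verlet(q0, p0, h, n_steps, gm=1.0):
    """second-order velocity verlet integration (unit mass, so p is the velocity);
    the recorded energy should stay close to its initial value."""
    q, p = np.array(q0, float), np.array(p0, float)
    a = kepler_acceleration(q, gm)
    energies = []
    for _ in range(n_steps):
        p_half = p + 0.5 * h * a
        q = q + h * p_half
        a = kepler_acceleration(q, gm)
        p = p_half + 0.5 * h * a
        energies.append(0.5 * p @ p - gm / np.linalg.norm(q))
    return q, p, np.array(energies)

# an eccentric bound orbit followed up to t = 5000, the interval quoted in the text
q_end, p_end, energy = velocity_verlet(q0=[1.0, 0.0], p0=[0.0, 0.8], h=0.05, n_steps=100_000)

monitoring the drift of the recorded energy over the whole interval gives the kind of long - time energy - conservation comparison described in the text .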
simulation of many - particle system evolution by molecular dynamics requires decreasing the integration step to keep the numerical scheme stable on a sufficiently large time interval , which leads to a significant increase in the volume of calculations . an approach for constructing symmetric symplectic numerical schemes with a given approximation accuracy with respect to the integration step , for solving the hamiltonian equations of molecular dynamics , is proposed in this paper . numerical experiments show that the symmetric symplectic third - order scheme obtained under this approach is more stable with respect to the integration step , is time - reversible , and conserves the hamiltonian of the system more accurately over a large integration interval than the second - order velocity verlet scheme . * key words : * hamiltonian systems of equations , symplectic difference schemes , generating functions , molecular dynamics * pacs : * 02.60.cb 02.60.jh 02.70.bf 02.70.ns
cold atoms have found numerous applications in quantum simulation , quantum chemistry and precision metrology . for most such experiments it is helpful to trap as many atoms in a magneto - optical trap ( mot ) as possible . in order to achieve this ,a collimated slowed atomic beam is often used to load the mot . a common approach to atomic beam slowingmakes use of the scattering force from a counter - propagating near - resonant laser beam . in a zeeman slower ,the unavoidable doppler shifts are compensated with zeeman shifts from an inhomogeneous applied magnetic field .this field is commonly produced by a carefully - designed pattern of current - carrying wires around the axis of atomic beam propagation .recently , alternative designs based on permanent magnets have attracted interest .these use permanent magnets positioned at different distances away from the atomic beam to create the desired field profile .although the resulting field can not be switched on and off , the use of permanent magnets offers numerous practical advantages including robustness , ease of maintenance , the elimination of electrical power and water cooling requirements , and low cost .permanent magnet zeeman slowers can be categorized by the direction of the field relative to the beam axis ( longitudinal or transverse ) and by the mechanism for positioning the individual magnets .for example , in the work of hill _ , the positions of the magnets are fixed with screws , which can be manually adjusted for fine optimization of the profile of the magnetic field . realized an automated version of this design by adding servo motors to control and optimize the positions of the magnets .another approach is to position the magnets in a halbach arrangement .all the above - mentioned designs rely on rigid attachment of magnets to a mechanical construction whose position can be fine - tuned if required . in this paperwe present a conceptually novel approach , which relies on `` self - assembly '' of spherical neodymium iron boride ( ndfeb ) magnets into stable structures .the resultant slower is of the longitudinal - field type , meaning that a magnetic field along the atomic beam axis is used to compensate for the doppler shift of the atoms . unlike transverse - field slowers , which use linearly polarized ( ) cooling light , longitudinal slowers use only or only polarized light . as a resultwe expect our slower to require roughly half the laser power of transverse - field permanent - magnet slowers .the design does not require any supporting mechanical holders , and is stable , quick to assemble , and inexpensive .in section [ design ] we present the general approach and details of the main design considerations . in section [ realization ]we describe the construction of the slower . in section [ results ]we discuss the measured field profile and calculated performance , and we conclude in section [ discussion ] .the basic idea behind our design is to create a longitudinal - field permanent - magnet slower by arranging a large number of individual magnetized spheres in a magnetically stable configuration .design goals include simplicity , low cost , ease of assembly , and tunability ( which can be achieved by adding or subtracting individual magnetic elements ) .previous work has shown that an array of magnetic dipoles can be rigidly positioned so as to create a spatially varying longitudinal field which is suitable for a zeeman slower . 
in this paperwe concentrate on finding an arrangement of magnetic dipoles which both locally minimizes the magnetic interaction energy among the dipoles and creates a suitable slowing field .we refer informally to such an arrangement as a `` self - assembled zeeman - slower . '' in a zeeman slower , atoms are slowed by an approximately constant deceleration force due to photon scattering from a counterpropagating beam . in the limit of large laser intensity, the maximum deceleration is , where is the wavevector of the slowing light , is the natural linewidth of the relevant optical transition , and is the atomic mass .robust slowing requires operation at some smaller acceleration , where . for constant acceleration ,the velocity as a function of position is then where is the initial velocity , is the length of the slower , and is the position along the axis of the slower . to maintain resonance , the doppler shift arising from this variation in velocity must be compensated by the zeeman shift due to an applied magnetic field . for a transition with a magnetic moment of one bohr magneton ,the condition for such compensation is , leading to an ideal magnetic field profile of the form figure [ b_ideal ] illustrates a magnetic field profile calculated from equation ( 2 ) with cm and m/s . and ., width=377 ] in our zeeman slower design , a field of the appropriate form is created by a superposition of fields due to individual elements .the components of the field due to a single magnetic dipole placed at the origin and oriented along the -axis are ,\\ & b_{y } = \frac{\mu _ { 0}m}{4\pi } \left [ \frac{3yz}{r^5 } \right],\\ & b_{z } = \frac{\mu _ { 0}m}{4\pi } \left [ \frac{2z^2-x^2-y^2}{r^5 } \right ] , \end{aligned}\ ] ] where is the magnetic moment of the dipole and .the field of our slower is a sum of such terms due to dipoles at different positions . 
for an array of magnets which has discrete or continuous rotational symmetry about the axis , and are zero at all points on the axis of symmetry , and the total magnetic field on axis is the sum of the components due to each individual magnet : fig .[ lines2 ] illustrates the fields from a combination of 1d arrays of different lengths , with each array starting at the origin .the total field from all three arrays ( solid line ) has the required asymmetric profile .it is easy to see that adding the contributions of more such arrays ( each with incrementally increasing length ) will result in a gradually decreasing total field profile qualitatively resembling that required for a zeeman slower .note that there are two ways to tune the slope of the field profile : by varying the length of each array and also by varying the distance of each array from the -axis .the configuration considered in fig .[ lines2 ] has all magnets positioned at the same distance away from the -axis .-axis and aligned along it .the spacing between neighboring dipoles is 6 mm , and each dipole moment is 0.075 a .the dashed line corresponds to an array consisting of six such magnets , the dash - dotted to an array of 9 magnets , and the dotted to an array of 12 magnets .the solid line is the sum of the fields of all three arrays.,width=377 ] for simplicity of construction it is important to ensure that the arrangement of dipoles not only generates the correct field profile but also is magnetically stable .although it is possible to fix the position of each dipole with glue or mechanical mounting , here we aim at realizing a stable magnetic configuration which does not require any mechanical support structure . for optimum beam slowing , and should be zero and the longitudinal field should vary as little as possible in the transverse direction , which requires the magnet arrangement to be symmetric around the -axis .this means that our structure must be stable in three dimensions .here we briefly consider some basic examples of structures which are stable in one and two dimensions .a stable 1d structure is simply a line of spherical magnets with the south pole of one magnet touching the north pole of the other . in 2d, stability can be achieved for two different patterns ( see fig .[ fig : stable_b_0 ] ) .note that only one of the solutions corresponds to a non - zero total magnetic field at long distances .the 3d case is more complex than 1d and 2d , and supports an infinite number of stable structures . as a practical note, stable 3d arrangements can also be very sensitive to the chronological order in which the dipoles are assembled .one aim of this work was to find a 3d pattern that is stable and results in a zeeman - slower - like profile of the magnetic field . the line of magnets from fig .[ fig : stable_b_0]a can be closed onto itself to make a ring which is very stable and generates no net field through its axis .if one stacks a number of such rings so that their axes overlap the resulting structure will be a cylinder .depending on the relative orientation of magnetization in neighboring rings one can end up with two possible types of regular cylinder .[ fig : cylinder_square_pattern]a illustrates a square lattice which results from each ring being magnetized in the opposite direction to the neighboring ring ( similar to the pattern described in fig .[ fig : stable_b_0]b ) . 
fig .[ fig : cylinder_square_pattern]b illustrates a triangular lattice , for which each ring is magnetized in the same direction as the neighboring ones ( similar to the pattern described in fig .[ fig : stable_b_0]c ) .both of these cylinders generate no net magnetic field along their axis .although neither of the two cylinders generate any net magnetic field , they play a key role in the design of our zeeman slower .we use the square - patterned cylinder as a rigid bottom layer for a stable arrangement described in the next section .we refer to the inner cylinder as an _ adhesive layer _ , because it keeps the magnets in the outer layers fixed in the correct way .the square - patterned cylinder is especially appropriate for supporting line - shaped outer layers aligned along the length of the cylinder .as discussed in the previous section , we create stable magnetic structures by adding lines of spherical magnets on top of an adhesive cylindrical inner layer . in principle, there is a large number of different arrangements of individual magnets that can generate a zeeman - slower - like field . in choosing the most appropriate structure for a specific applicationthe main constraints arise from the need to keep the whole structure magnetically stable and symmetric around the z - axis , and the need to minimize the total number of individual magnets in order to keep the cost and weight low .for these reasons we chose the shape of the whole structure to have three fin - like elements as shown in fig .[ fig : slower ] .all three fins are identical and magnetized in the same direction and are held in a fixed position by the adhesive layer , which in turn is wound around a non - magnetic tube .the structure and ordering of magnets within each fin is shown in fig .[ fig : fins ] .each individual magnet in the fins is magnetized the same way along the -axis .adjacent lines of magnets have incrementally different numbers of elements , and each shorter line is incrementally further away from the z - axis .such a structure results in a slowly decreasing field profile , and three such fins produce a -directed axial field three times larger than than that from any one fin . as seen in fig .[ fig : slower ] , all fins are positioned symmetrically around the z - axis at 120 degrees to each other .each individual magnet used in this assembly is a 6 mm diameter n45 grade spherical neodymium iron boride magnet , with a dipole moment of about 0.075 a .[ fig : slowerfield ] presents the measured longitudinal magnetic field along the -axis of the slower .the slowing region is from around 15 to 30 cm , resulting in a 15 cm slowing length .this is sufficient for zeeman slowing of strontium , due to the broad ( mhz ) linewidth and consequent high scattering rate of this atom s 461 nm transition .this design could also be straightforwardly adapted to slowing of other atomic species .the field is reasonably smooth and can be tweaked and improved further by moving the individual magnets around the outer layers of the fins .variation of the axial field as the position is scanned in and is plotted in the inset of fig .[ fig : slowerfield ] .these data were taken at the point of maximum field , near the slower entrance .the variation of the zeeman shift across a typical atomic beam diameter is small compared to the linewidth of the relevant transition in strontium , indicating that transverse variations in the axial field should not significantly limit slower performance . 
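the measured on - axis profile can be compared with the prediction obtained by summing the z - components of the individual dipole fields given earlier . the sketch below does this for three fin - like arrays of spheres : the 6 mm spacing and the dipole moment of 0.075 ( in si units , a m^2 ) follow the text , whereas the number of magnets per line , the distance of the innermost line from the axis and the radial step between lines are illustrative assumptions rather than the actual geometry .

import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability

def bz_on_axis(z, positions, m=0.075):
    """z-component of the field at (0, 0, z) from point dipoles aligned with z;
    positions is an (n, 3) array with the centre of each sphere."""
    dx = -positions[:, 0]
    dy = -positions[:, 1]
    dz = z - positions[:, 2]
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    return MU0 * m / (4.0 * np.pi) * np.sum((2 * dz**2 - dx**2 - dy**2) / r**5)

def fin(lengths, spacing=0.006, rho0=0.02, d_rho=0.006):
    """hypothetical fin: stacked lines of magnets along z, each shorter line one
    sphere diameter further from the axis, all lines starting at z = 0."""
    pos = []
    for layer, n in enumerate(lengths):
        rho = rho0 + layer * d_rho
        pos.extend((rho, 0.0, k * spacing) for k in range(n))
    return np.array(pos)

def rotated(points, angle):
    """rotate the magnet positions about the z-axis by the given angle."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot

one_fin = fin(lengths=range(40, 4, -4))
all_fins = np.vstack([rotated(one_fin, a) for a in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)])
z_axis = np.linspace(-0.05, 0.35, 200)
profile = np.array([bz_on_axis(z, all_fins) for z in z_axis])

moving individual spheres between the outer lines of the fins and recomputing the profile corresponds to the fine tweaking of the field mentioned above .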
.inset plots measured variations in the axial field at the maximum field point as the position is scanned transverse to the axis ., width=453 ] using the measured field profile one can integrate the equations of motion to determine the fate of atoms with different incoming velocities .atoms traveling at velocity through a magnetic field experience both a doppler shift and a zeeman shift , where is the wavelength of the slowing laser beam and is the magnetic moment of the relevant atomic transition .together with the chosen detuning of the laser from the atomic transition , the sum of these terms sets the local effective detuning at any point along the axis : the average spontaneous light pressure force is then given by where is the local saturation parameter , which we choose to be constant for simplicity .the resulting equation of motion can be integrated to find the velocity as a function of time or distance for different starting velocities .[ fate ] presents atomic trajectories for different initial velocities in parameter space for the measured field profile of our zeeman slower .the highest captured velocity is limited to around m / s , which could be increased by simply adding magnets to increase the maximum magnetic field .note that the atoms are significantly decelerated in the region between 0.3 and 0.4 meters despite the opposite slope of the magnetic field .this can be explained by the large natural linewidth ( mhz ) of strontium s 461 nm line , which leads to substantial photon scattering rates even at relatively large effective detunings .taking this into account , by varying the saturation parameter and laser detuning it is possible to produce a final velocity of around 25 m / s , which is below the capture velocity of a typical mot .for similar reasons , the atoms with initial velocities below m / s are decelerated too early and reach zero velocity before reaching the end of the slower .these atoms are therefore lost since they will never reach the mot region and can not be trapped . however , for a typical oven temperature of 600 , the most probable longitudinal velocity of the atomic beam is close to 500 m / s .this means that there are significantly more atoms with initial velocities between 170 and 370 m / s than with velocities between 0 and 170 m / s , even before taking into account scattering processes which can preferentially deplete the population of low - velocity atoms .therefore this slower design is capable of trapping more than 75% of the atoms with starting velocities below 370 m / s . of 1.5 and a detuning of 340 mhz.,width=453 ]we have presented a new type of longitudinal zeeman slower based on spherical permanent magnets . in particularwe demonstrated how one can assemble such magnets into a stable structure which provides an axial magnetic field profile suitable for zeeman slowing .like other permanent magnet slowers , this design does not require electrical current or water cooling .the proposed design is flexible , easy to assemble , and inexpensive .the authors thank ruwan senaratne , shankari rajagopal , and zach geiger for useful discussions and physical insights . we acknowledge support from the air force office of scientific research ( award fa9550 - 12 - 1 - 0305 ) and the alfred p. sloan foundation ( grant br2013 - 110 ) .99 w. d. phillips , `` laser cooling and trapping of neutral atoms '' , reviews of modern physics * 70 , * 721741 ( 1998 ) c. e. wieman , d. e. pritchard and d. j. 
wineland , `` atom cooling , trapping , and quantum manipulation '' , reviews of modern physics * 71 , * s253s262 ( 1999 ) i. bloch , j. dalibard and s. nascimbene , `` quantum simulations with ultracold quantum gases '' , nature physics * 8 , * 267 - 276 ( 2012 ) d. s. jin and j. ye , `` introduction to ultracold molecules : new frontiers in quantum and chemical physics '' , chemical reviews * 112 , * 48014802 ( 2012 ) a. derevianko and h. katori , `` _ colloquium _ : physics of optical lattice clocks '' , reviews of modern physics * 83 , * 331347 ( 1998 ) b. j. bloom et al , `` an optical lattice clock with accuracy and stability at the level '' , nature * 506 * , 7175 ( 2014 ) w. d. phillips and h. metcalf , `` laser deceleration of an atomic beam '' , physical review letters * 48 , * 596599 ( 1982 ) y. b. ovchinnikov , `` a zeeman slower based on magnetic dipoles '' , optics communications * 276 , * 261267 ( 2007 ) y. b. ovchinnikov , `` longitudinal zeeman slowers based on permanent magnetic dipoles '' , optics communications * 285 , * 11751180 ( 2012 ) i. r. hill et al , `` a simple , configurable , permanent magnet zeeman slower for strontium '' , eftf 2012 - 2012 european frequency and time forum , proceedings , art .6502439 , pp .545 - 549 ( 2012 ) p. cheiney et al , `` a zeeman slower design with permanent magnets in a halbach configuration '' , reviews of scientific instruments * 82 , * 063115 ( 2011 ) g. reinaudi , c. b. osborn , k. bega , and t. zelevinsky , `` dynamically configurable and optimizable zeeman slower using permanent magnets and servomotors '' , j. opt .b * 29 , * 729 ( 2012 ) i. r. hill et al , `` zeeman slowers for strontium based on permanent magnets '' , j. phys .b : at . mol .. phys . * 47 * , 075006 ( 2014 )
we present a novel type of longitudinal zeeman slower . the magnetic field profile is generated by a 3d array of permanent spherical magnets , which are self - assembled into a stable structure . the simplicity and stability of the design make it quick to assemble and inexpensive . in addition , as with other permanent magnet slowers , no electrical current or water cooling is required . we describe the theory , assembly , and testing of this new design .
the emergence of drug resistance represents one of the major clinical challenges of the current century .microbial pathogens quickly acquire resistance to new antibiotics , while solid tumors often regrow after treatment because of resistance mutations that arise during tumor growth .in addition to genomic studies examining the molecular causes of resistance , the dynamics of drug resistance evolution has recently attracted wide interest , with the dual goal of understanding the emergence of resistance and developing novel strategies to prevent or control its spread .next - generation sequencing and high - throughput experimental techniques enable the quantitative study of resistance evolution but require the development of new theories to appropriately interpret experimental results . in many realistic systems ,an evolving population interacts with its surroundings and exhibits a well - defined spatial structure ( for instance , in tumors and biofilms ) .it has recently been shown that this spatial structure can strongly influence the subclonal structure and the adaptation of spatially expanding populations , both from _ de novo _ and pre - existing mutations .likewise , spatial or temporal gradients in antibiotic concentration can enable populations to reach a higher degree of resistance than in homogeneous drug concentrations , at least in part because they enable the slow accumulation of multiple mutations , each conferring a small amount of resistance .the presence of spatial drug gradients is well documented both in the outside environment as well as within biofilms and the human body , and it has been hypothesized that the presence of spatial heterogeneities may facilitate the emergence of drug - resistant phenotypes . in a microfluidic experiment ,a spatial gradient indeed gave rise to a higher rate of adaptation of bacterial populations .similarly , microbes growing on soft agar plates with gradually increasing antibiotic concentrations were able to rapidly evolve resistance to high levels of antibiotics , while sudden jumps to unsustainably high concentrations dramatically slowed down adaptation .these findings raise the theoretical question of how to predict the rate of emergence of drug resistance in the presence of spatial gradients .a number of recent theoretical studies have investigated how gradients speed up the evolution of drug resistance .each study made critical assumptions about the nature of the gradients : greulich et al . considered a population adapting to a smooth gradient , which gradually lowers the growth rate of susceptible individuals .hermsen et al . studied resistance evolution in a series of sharp step - like increases in concentration , where a novel resistance mutations was necessary for survival in the next step ( the `` staircase '' model ) ; hermsen later proposed a generalization of the staircase model to continuous gradients .these previous studies focused on the speed of adaptation , i.e. , how quickly the population evolves to tolerate high concentrations of antibiotics . 
in the context of the emergence of drug resistance ,this observable alone ignores a crucial reality of antibiotic treatment : efficient drug treatment first and foremost aims to kill as many bacteria as possible , while limiting the rise of resistance mutation .how this apparent trade - off can be optimized to prevent the evolution of drug resistance has so far been unexplored .moreover , many realistic growth scenarios of bacterial populations may exhibit a directed flow that drives them up or down the gradient .examples include the gut , arteries , and urethra in the human body , but also flows in aquatic environments , like ocean and river currents , or flow in pipes and catheters .the effect of convection on the evolution of drug resistance remains unknown . here , we present simulations , rationalized by a comprehensive analytical framework , of populations evolving resistance in a variety of spatial antibiotic concentration gradients and under the influence of convection .we measure the establishment probability of resistant mutants arising in a region occupied by susceptible wild type , and the drug - induced death rate of the wild type , and show how bacterial diffusion , antibiotic gradient steepness , and convection interact to affect the treatment efficiency .we find that shallow gradients and convection into the antibiotic promote wild - type death , at the cost of increasing the establishment probability of resistance mutations .conversely , populations in steep gradients and subject to convection away from the antibiotics are less susceptible to drug - induced wild - type death but also produce fewer resistance mutants . the treatment efficiency , which quantifies the inherent trade - off between adaptation and death ,is strongly modulated by gradient steepness and convection .treatment efficiency is found to strongly depend on convection away from the antibiotic , which increases it in shallow gradients and decreases it in steep gradients .we perform individual - based , stepping stone simulations where both wild - type and mutants are modeled explicitly .the population is divided into demes with carrying capacity on a one - dimensional lattice .wild - type and mutants replicate at a rate and migrate at rate independently on their position .wild - type in deme die at a rate ] , as in a standard gillespie algorithm .the total elapsed time is the sum of the sampled time intervals . * * convection .* convection ( with convection speed ) is implemented by shifting the simulation box by one deme towards or away from the wild type population , depending on whether convection is negative or positive , respectively .the shift is performed when the time since the last shift is greater than . 
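a minimal sketch of one update of the stepping - stone dynamics described above is given below ( wild type only , for brevity ) . it is not the authors ' simulation code : the logistic form of the birth rate is just one simple way of implementing the carrying capacity , out - of - range hops are discarded at the boundaries , and the drug - induced death - rate profile death is an assumed input .

import numpy as np

rng = np.random.default_rng(1)

def gillespie_step(n, death, t, birth=1.0, migration=0.1, capacity=100):
    """one event of a birth/death/migration process on a 1d array of demes.

    n     : integer numpy array of wild-type counts per deme (modified in place)
    death : drug-induced death rate per deme (set by the antibiotic profile)
    t     : current time; the updated time is returned"""
    L = len(n)
    birth_rates = np.clip(birth * n * (1.0 - n / capacity), 0.0, None)
    death_rates = death * n
    mig_rates = migration * n
    total = birth_rates.sum() + death_rates.sum() + mig_rates.sum()
    if total == 0.0:
        return np.inf                                  # population extinct
    t += rng.exponential(1.0 / total)                  # waiting time to next event
    rates = np.concatenate([birth_rates, death_rates, mig_rates])
    k = rng.choice(3 * L, p=rates / total)             # which event fires
    deme, kind = k % L, k // L
    if kind == 0:
        n[deme] += 1                                   # birth
    elif kind == 1:
        n[deme] -= 1                                   # drug-induced death
    else:
        target = deme + rng.choice([-1, 1])            # hop to a random neighbour
        if 0 <= target < L:
            n[deme] -= 1
            n[target] += 1
    return t

convection can be layered on top of such a loop by shifting the whole simulation box by one deme whenever the elapsed time since the last shift exceeds the corresponding threshold , exactly as described above .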
for each simulation, we first allow the wild type to reach the steady-state profile. we then introduce one mutant individual at a chosen position and run the simulation until either all mutants go extinct, or mutants reach the last deme in the simulation box. no further mutations are allowed in the course of the simulations. the probability of fixation is then computed as the proportion of the simulations in which a mutant introduced at that position reached fixation. all numerical results were obtained by evaluating the differential equations using mathematica's built-in _ndsolve_ routine with the backwards differentiation (bdf) method, with a maximum step size of 1 and a domain size of 1000. initial conditions were chosen according to the analytical approximations to the steady-state profiles given in the text. this is only done to speed up computation and increase numerical stability; starting with different initial conditions leads to the same final solution. to obtain steady-state profiles, we solve the full time-dependent problems until the solution no longer changes for longer evaluation times. to compute the establishment probability in realistic wild-type population profiles, we first computed the steady-state population density and then used the final profile to compute the (constant in time) local death rate for the mutants. the resulting numerical solutions were integrated numerically using mathematica's built-in _nintegrate_ routine to obtain the total death rate and the establishment score.

[figure (fig:sketch): sketch of the model; convection can be directed into the antibiotic region or away from it. our goal is to analyze the fixation probability of resistance mutations for a given rate of wild-type killing.]

[figure (fig:roverbandrandb): (a) treatment efficiency, defined as the number of drug-induced deaths per generation divided by the establishment score (see eq. [eq:q]), for different antibiotic concentration profiles (shallow gradient; intermediate gradient; step-like concentration profile). convection away from the antibiotic (positive convection) can increase treatment efficiency by an order of magnitude in shallow gradients. (b) both the total death rate and the establishment score are increased (in contrast to their ratio) in shallow gradients and for convection into the antibiotic.]

[figure (fig:cofxanduofx): wild-type population density (magenta) and establishment probability (green) for three different concentration profiles (gray background; (a) shallow gradient; (b) intermediate gradient; (c) step) and different convection speeds (light to dark colors), from stepping stone simulations. dotted and dashed lines are the local wild-type death rate and the net growth rate. on the right in each panel is the product of density and establishment probability, which identifies the region where successful mutants arise.]

we simulate a population of wild-type individuals on a lattice of demes, each of fixed size, in a one-dimensional antibiotic gradient (see the sketch in fig. [fig:sketch] and _methods_). each individual can migrate into a neighboring deme, replicate, and die. the antibiotic gradient sets the death rate of the wild type, giving rise to an effective growth rate that decreases sigmoidally across the gradient (eq. [eq:tanh]). in the absence of antibiotics, population growth is only limited by a carrying capacity.
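the fixation-probability measurement described above can be sketched as a simple monte carlo estimate: introduce a single mutant on top of a frozen wild-type background and record the fraction of replicate runs in which its lineage survives. in this hedged sketch the mutant feels only competition with the wild type, mutant-mutant competition is neglected, and a clone-size threshold is used as a cheap proxy for reaching the far end of the box; all parameters are placeholders rather than the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)

# placeholder parameters and a frozen wild-type background (assumptions, not the paper's values)
L, K, b, m = 50, 100, 1.0, 0.1
x = np.arange(L) - L // 2
c = K * 0.5 * (1.0 - np.tanh(x / 10.0))       # stand-in for the equilibrated wild-type profile

def establishment_probability(x0, n_reps=500, t_max=200.0, clone_threshold=200):
    """Fraction of runs in which a single mutant introduced in deme x0 'establishes'.

    Establishment is declared when the clone reaches the last deme or exceeds a size
    threshold (a cheap proxy; the paper's criterion is reaching the last deme)."""
    hits = 0
    for _ in range(n_reps):
        pos = [x0]                             # deme indices of living mutant individuals
        t = 0.0
        while pos and t < t_max:
            k_tot = len(pos)
            death = b * c[np.array(pos)] / K   # competition with the local wild type
            rates = np.concatenate([np.full(k_tot, b), death, np.full(k_tot, m)])
            total = rates.sum()
            t += rng.exponential(1.0 / total)
            i = rng.choice(rates.size, p=rates / total)
            k = i % k_tot
            if i < k_tot:                      # birth
                pos.append(pos[k])
            elif i < 2 * k_tot:                # death
                pos.pop(k)
            else:                              # migration to a neighbouring deme
                pos[k] = min(max(pos[k] + rng.choice([-1, 1]), 0), L - 1)
            if len(pos) >= clone_threshold or (pos and max(pos) == L - 1):
                hits += 1
                break
    return hits / n_reps

print("establishment probability at the gradient midpoint:",
      establishment_probability(L // 2))
```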
following the equilibration to the steady-state profile of the wild type, a resistant mutant is inserted into the population at a given position. the resistant mutant has the same birth rate as the wild type, but it does not suffer from an increased death rate due to the antibiotics. we follow the mutant clone until it either goes extinct or reaches the far end of the simulation box, in which case we consider the mutant _established_. in treating a bacterial infection, the optimal antibiotic strategy would eliminate as many bacteria as possible, while limiting the emergence of resistance. to quantify the treatment efficiency, we first obtain an average wild-type population profile from independent simulation runs and use it to compute the number of drug-induced wild-type deaths per generation (eq. [eq:b]), i.e., the per-capita death rate due to antibiotics in each deme weighted by the average number of wild-type individuals there and summed over all demes. for simplicity, we call this quantity the total death rate. to quantify the emergence of resistance, we measure the local mutant establishment probability, i.e., the probability that a mutation that arose at a given position establishes. since the probability that a mutation occurs in the first place is proportional to the wild-type population density, it follows that successful mutants can only arise where both the wild-type population density and the establishment probability are high (see fig. [fig:cofxanduofx]). a measure for how readily new resistant mutants establish is thus given by the product of wild-type population density and the establishment probability, summed over all demes. we call this measure the establishment score (eq. [eq:r]). the establishment score is proportional to the rate of establishment, the rate at which new resistance mutations arise (at a low mutation rate) and establish in the population. alternatively, the establishment score can be understood as a measure proportional to the mean establishment probability, i.e., the probability that a mutation arising anywhere in the population establishes. finally, we define the efficiency of drug treatment via the amount of drug-induced wild-type death before an adaptive mutation establishes, i.e.
, by the ratio of the number of drug-induced deaths per generation to the establishment score. we assume mutation rates to be low such that no clonal interference occurs. then, we can treat each mutation independently and thus define the treatment efficiency as this ratio (eq. [eq:q]). fig. [fig:roverbandrandb]a shows how the treatment efficiency changes across gradient steepness and convection into and away from the gradient. we find that without convection, and for convection toward the gradient, the treatment efficiency is slightly higher for shallow gradients but always bounded within a narrow range. however, the trend changes dramatically for positive convection away from the gradient, which boosts treatment efficiency by a factor of 10 for shallow gradients and decreases it for the step-like antibiotic profile. considering the establishment score and the total death rate separately (fig. [fig:roverbandrandb]b) illustrates the reason behind the qualitative behavior of the treatment efficiency. both quantities show similar tendencies for different gradients and convection speeds: shallow gradients are characterized by larger values of both the establishment score and the total death rate than the step-like concentration profile. negative convection (towards the gradient) increases both by up to a factor of 5, while positive convection leads to their rapid decrease by two orders of magnitude. remarkably, in shallow gradients the establishment score is more sensitive than the total death rate to positive convection. the rapid decrease of the establishment score explains the increase in the treatment efficiency for shallow gradients. a detailed look at the population density and establishment probability profiles is necessary to understand the values taken by these two quantities. these profiles are shown in fig. [fig:cofxanduofx] for three different gradients (shaded areas). the population density approximately tracks the net growth rate, while the establishment probability is roughly a mirror image of the population density. in the region where the drug concentration is low and the wild-type population is dense, the mutants have no advantage over the wild type, since they compete for the same resources, and are likely to go extinct due to genetic drift. conversely, in high drug concentration regions, the wild type cannot survive and the mutants can grow freely and establish with high probability. convection has a rich variety of effects on both profiles. convection into the antibiotic (co-flow) leads to broader profiles of both the wild-type population density and the establishment probability (fig. [fig:cofxanduofx]), while convection away from the antibiotic (counter-flow) reduces both the population density and the establishment probability in the antibiotic region. for shallow gradients, we even observe an apparent cut-off in the population density (and correspondingly in the establishment probability), whose location depends on the convection speed. consequently, co-flow increases the total death rate and the establishment score, while counter-flow leads to their rapid decrease. in summary, our simulations show that both the population density profile and the local establishment probability of resistance mutants are strongly influenced by environmental parameters, in particular by the steepness of the antibiotic gradient and the strength of convection. as a general rule of thumb, shallow gradients and convection towards increasing drug concentrations increase the establishment score, while steep gradients and convection away from the drug source decrease it. the total death rate follows the same trend.
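the three summary quantities discussed above can be computed from a density profile and an establishment-probability profile with a few lines of code. the orientation of the ratio below follows the wording of the figure caption (deaths per generation divided by the establishment score); its exact normalization in eq. [eq:q] is not shown, so this is a sketch rather than the paper's definition.

```python
import numpy as np

def total_death_rate(c, d):
    """Drug-induced wild-type deaths per generation: sum over demes of d(x) * c(x)."""
    return float(np.sum(d * c))

def establishment_score(c, u):
    """Sum over demes of wild-type density times local establishment probability."""
    return float(np.sum(c * u))

def treatment_efficiency(c, d, u):
    """Deaths per generation divided by the establishment score (normalization assumed)."""
    return total_death_rate(c, d) / establishment_score(c, u)

# toy profiles purely for illustration
x = np.linspace(-50, 50, 201)
c = 0.5 * (1 - np.tanh(x / 10))     # wild-type density (in units of the carrying capacity)
d = 0.5 * (1 + np.tanh(x / 10))     # drug-induced death rate per generation
u = 1 - c                           # crude quasi-static stand-in for the establishment probability

print("treatment efficiency of the toy profiles:", treatment_efficiency(c, d, u))
```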
however , subtle differences between and give rise to a qualitative change in behavior of the treatment efficiency , which is increased by positive convection ( counter - flow ) in shallow gradients and decreased in steep ones . to rationalize our results and identify the critical length scales ,we now develop a mathematical model that accommodates these processes .consider a population at some density in an antibiotic concentration field , which gives rise to a death rate per generation , i.e. , the drug - induced death rate is measured relative to the maximum growth rate .we define the total ( drug - induced ) death rate per generation through a continuum version of eq .[ eq : b ] as resistance mutations arise at a rate , which we assume to be constant and sufficiently low such that adaptation occurs via the sequential acquisition of resistance mutations , i.e. , we neglect local clonal interference .analogously to eq .[ eq : r ] above , we define the establishment score where represents the probability that a beneficial mutation born at position eventually establishes .the population density can be computed using standard techniques ( like deterministic reaction - diffusion equations ) , and we return to this question later on .the establishment probability , i.e. , the probability for a mutation to survive until time , obeys a nonlinear reaction - diffusion equation , where is the local birth rate of the mutants and is the net growth rate ( see si section 1 ) . throughout this paper, we will make the assumption that the birth rate of wild - type and mutant is identical and constant , , and that the drug - induced death rate of the resistant mutant is zero , while the wild - type drug - induced death rate ranges from 0 to , i.e. , we assume that the antibiotic only affects the population by increasing the death rate of the wild type .we include diffusion and convection terms to account for the random dispersal and the effects of external flow on the bacteria .it is important at this point to note that in general the selective gradient will intimately depend on various environmental factors , including competition with the wild - type population . in later sections , especially whenincorporating convection ( i.e. , ) , we will take this effect into account .for now , we will assume that this function is known for given external conditions in order to gain a better intuition for the system .we will first analyze eq .[ eq : survival - probability0 ] in the case without drift , i.e. , in order to extract the characteristic length scales of the system .we solve the extreme cases of a step - like concentration increase and a very smooth gradient analytically , and then interpolate between the two regimes using numerical solutions to eq .[ eq : survival - probability0 ] .we begin by considering the simplest functional form for a selective gradient a step : where is the heaviside step function .such a sharp gradient could emerge , for instance , at the boundary of different tissues or organs with different affinities to store antibiotics .the equation for the fixation probability in this case is given by : where we have rescaled by the diffusion length scale : in the appendix , we solve this equation using an analogy from classical mechanics .we have that for implying for large . 
for , we obtain - 1\right\ } , \label{eq : positive - u}\ ] ] where .[ eq : step-1 ] shows that the fixation probability decays over a characteristic length scale .this length scale can be roughly understood as the typical distance that a mutant individual travels through random dispersal before replicating .having identified the characteristic length scale over which the establishment probability decays in a step - like concentration profile , we now turn towards more realistic gradients . for simplicity , for the remainder of our calculations, we will model a shallow antibiotic gradient again as the sigmoidal function given in eq .[ eq : tanh ] , $ ] , such that the gradient changes over a characteristic distance . intuitively, if , individuals feel sharp , step - like antibiotic transitions within their typical migration distance , and thus , we expect the results of the previous section to remain good approximations . if , conversely , individuals migrate only short distances compared to , i.e. , , they will sample only a small region of the gradient and will not feel the differences in antibiotics concentration. to make this heuristic idea quantitative , we rescale in eq .[ eq : survival - probability0 ] by the new length scale to obtain when , i.e. , , we can neglect the partial derivative in eq .[ eq : alpha1 ] , such that this so - called quasi - static approximation is a straight - forward extrapolation of the well - mixed result and has been used to model establishment in ref . . for ,the step - like solution holds , whereas , for , no analytical solution is available ; in such cases , we solve the equations numerically ( see methods ) .to determine which scenario is more relevant in microbial communities , we estimate a typical .a typical ( non - motile ) bacterial cell may have a diameter of 1 , swimming in a medium of viscosity comparable to that of water ( with blood typically only a factor 3 larger than that ) .then , its diffusion constant is of order 0.1 - 1 .the length scale for a typical birth rate of is then between 50 and a few hundred microns . in a microfluidic experiment by zhang et al . , demonstrating facilitation of adaptation through antibiotic gradients , the length scale on which the drug gradient varied was m so that .this estimate is , however , very crude : firstly , the diffusion constant of a bacterium may not be due to thermal fluctuations but due to directed motion that only becomes approximately diffusive on long time scales . in that case, the diffusion constant can be much larger , up to tens or even 100 . for a bacterium with a division time of 60 minutes ,this leads to a expected typical length . indeed , for motile bacteria such as those used in a recent experimental study by baym et al . with a reported spreading velocity of 40mm / hr , we find mm .surely , we can not expect homogeneous antibiotic concentration over such long length scales and indeed step - like transitions in antibiotic concentration were assumed in the study .therefore , we expect that concentration gradients of significant sharpness should play a key role in both experimental set - ups and practically relevant systems , and hence the quasi - static approximation in eq .[ eq : quasi - static ] should be employed with caution . 
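as a numerical cross-check of the step-profile analysis, one can also solve the steady-state establishment-probability equation directly as a boundary-value problem. the form used below, D u'' + s(x) u - b u^2 = 0, is the standard branching-random-walk form assumed here; the prefactors in the paper's eq. [eq:survival-probability0] may differ, and all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_bvp

# illustrative parameters (not the paper's): diffusion constant, birth rate, selective advantage
D, b, s0 = 1.0, 1.0, 0.5
ell = np.sqrt(D / b)
print("characteristic diffusion length:", ell)

def s(x):
    # nearly step-like net growth rate, slightly smoothed for numerical robustness
    return s0 * 0.5 * (1.0 + np.tanh(x / 0.1))

def rhs(x, y):                      # y[0] = u, y[1] = u'
    return np.vstack([y[1], (b * y[0] ** 2 - s(x) * y[0]) / D])

def bc(ya, yb):                     # u -> 0 deep in the wild type, u -> s0/b deep in the drug
    return np.array([ya[0], yb[0] - s0 / b])

xg = np.linspace(-60.0, 60.0, 1201)
guess = np.vstack([s0 / b * 0.5 * (1.0 + np.tanh(xg)), np.zeros_like(xg)])
sol = solve_bvp(rhs, bc, xg, guess, max_nodes=200000, tol=1e-6)
u = sol.sol(xg)[0]
print("u(+5), inside the drug region:", u[np.searchsorted(xg, 5.0)])
print("u(-5), on the drug-free side:", u[np.searchsorted(xg, -5.0)])
```

with these choices, the numerical solution approaches the plateau exponentially on the drug side and retains the broad algebraic tail on the drug-free side, consistent with the two limiting solutions derived above.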
as mentioned above ,the mutants net growth rate generally depends on several environmental factors , including competition with the wild - type population .in addition , both the establishment score and the total death rate determined in our simulations depend on the wild - type density profile . to provide analytical expressions for these quantities and the corresponding treatment efficiency , we therefore need to accommodate the coupling between wild - type and mutants . in the following, we will employ a simple logistic growth model for the wild - type population and its interaction with the mutant dynamics .we assume that the wild - type population density is described by the following reaction - diffusion equation : where is the local wild type birth rate , is the local antibiotic - induced death rate of the wild type . in our model ,wild - type growth is limited by a logistic term , which models a finite amounts of nutrients limiting the population size per deme to the carrying capacity , while the local death rate acts on each individual independently and is therefore unaffected by the local population density .this distinguishes our model from the fisher equation used to describe population spreading , where also the death rate multiplies the logistic term .our model instead ensures that the steady - state local population density depends explicitly on the local death rate when the death and birth rate profiles change sufficiently slowly in space , our model , like the original fisher equation , ignores the discrete nature of individuals , which has been shown to significantly alter the tip of the population front ( see si section 7 for a detailed discussion ) .nevertheless , we expect good agreement between our model and simulations in terms of the treatment efficiency , whose value is not significantly affected by the details of the population profile at the tip of the wave .for the remainder of this text , we will assume , as before , that the total growth rate is a constant , , and that the presence of the antibiotic modulates selection by increasing the wild type death rate .the logistic growth term makes eq .[ eq : cofxandt ] for the wild type density mathematically equivalent to eq .[ eq : survival - probability0 ] for the establishment probability and hence , all results carry over with minor modifications ( eqs .[ eq : step-1 ] , [ eq : positive - u ] , and [ eq : quasi - static ] ) . in particular , the predicted steady - state population density has a broad tail in the case of a step - like antibiotic profile , and closely follows the mirror image of the antibiotic profile if this is shallow .note , however , that we are unable to observe the broad tail in simulations because of the discreteness effects associated with the small population size at the front ( see si section 7 ) .given an explicit wild - type population profile , it is plausible to assume that the selective pressure felt by the mutants is purely due to competition with the wild type , i.e. , , such that the local establishment probability of the mutants is coupled to the wild type profile .this should indeed be the case if the antibiotic does not directly alter the birth and death rate of the mutants .hence , if changes sharply , i.e. 
, on length scales shorter than the characteristic length identified above, we expect to see the signatures of a step-like antibiotic profile also in the behavior of the establishment probability. if, on the other hand, the wild-type profile changes slowly in space, then the quasi-static approximation may become applicable. the case of a step-like antibiotic concentration profile illustrates both scenarios (see fig. [fig:driftcu]a): on the drug-free side, the wild-type density approaches its carrying capacity exponentially (eq. [eq:positive-u]), such that the establishment probability is well described by the broad tail in eq. [eq:step-1]; in the drug region, the density instead decays slowly and the quasi-static approximation can be used. we compare the simulations with numerical and approximate analytical solutions for the density and the establishment probability in detail in the si, section 7. the solutions for both profiles can then be used to compute the rate of adaptation (eq. [eq:establishment-rate]) and the total death rate. asymptotically, we find simple scaling forms for the establishment score and the total death rate (eqs. [eq:r-analytical] and [eq:b-analytical]), which agree well with the numerical result in the two limiting cases (fig. [fig:driftbrq]a). the inverse scaling of the establishment score with the gradient steepness is also in agreement with greulich et al. in relatively steep gradients, where the rate of adaptation (which is proportional to the establishment score defined here) is dominated by the time until a mutation arises and establishes. hermsen also finds that the rate of adaptation increases for shallower (but still relatively steep) gradients in what is identified as the ``mutation-limited'' regime, where mutations are rare. however, both hermsen and greulich consider the establishment of many mutations (potentially simultaneously), giving rise to a second, ``dispersion-limited'', regime in very shallow gradients, in which the mutational supply is large and the rate of adaptation is dominated by the speed with which established mutations invade previously uninhabitable territory. since we follow the fate of individual mutations, our model operates in the mutation-limited regime exclusively, and we therefore do not expect agreement between our and their models.

[figure (fig:driftcu): wild-type population density (magenta) and establishment probability (green) for a step-like antibiotic concentration (a) and a very broad concentration profile (c), for a range of convection speeds (dark to light colors). curves were obtained by numerically integrating eqs. [eq:drift-eq-c] and [eq:drift-eq-u] (_methods_). for high negative convection speeds (dark colors), an analytical approximation is valid (b, si section 5). (d) the cut-off of the population density profile in shallow gradients found numerically (dots) agrees well with the theoretical prediction, eq. [eq:cut-off] (solid lines).]

so far, we have neglected external flows and only considered random diffusion of individuals in space.
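before adding external flows, the zero-convection coupling described above can be reproduced numerically: relax the logistic reaction-diffusion equation for the wild type to its steady state and then solve the establishment-probability equation with the competition-set growth rate s(x) = b(1 - c(x)). this is a hedged re-implementation of the general scheme (the paper uses mathematica's ndsolve); the equation prefactors and all parameters are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# placeholder parameters (not the paper's values)
D, b, d_max, lam = 1.0, 1.0, 2.0, 10.0
Lbox, Npts = 150.0, 301
x = np.linspace(-Lbox / 2, Lbox / 2, Npts)
dx = x[1] - x[0]
d = 0.5 * d_max * (1.0 + np.tanh(x / lam))       # drug-induced wild-type death rate

def laplacian(f):
    lap = np.zeros_like(f)
    lap[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]            # crude no-flux boundaries
    return lap

def dcdt(t, c):                                  # diffusion + logistic growth + independent drug death
    return D * laplacian(c) + b * c * (1 - c) - d * c

c = solve_ivp(dcdt, (0, 300), 0.5 * (1 - np.tanh(x / lam)),
              method="BDF", rtol=1e-6).y[:, -1]

def dudt(t, u):                                  # establishment probability with s(x) = b * (1 - c(x))
    return D * laplacian(u) + b * (1 - c) * u - b * u**2

u = solve_ivp(dudt, (0, 300), np.clip(1 - c, 1e-6, 1),
              method="BDF", rtol=1e-6).y[:, -1]

B_tot = float(np.sum(d * c) * dx)                # total death rate (continuum sum)
R_est = float(np.sum(c * u) * dx)                # establishment score (continuum sum)
print("total death rate:", B_tot, " establishment score:", R_est)
```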
in the following, we explore the influence of convection on the population density, the establishment probability, and finally the treatment efficiency, which, in the absence of convection, is naturally constrained to a relatively small range, see eqs. [eq:b-analytical] and [eq:r-analytical]. when a population is subjected to convection, it will generally move in the direction of the flow, leading to either a depletion or an enrichment of both mutant and wild types in the antibiotic region (unless convection is too strong, see si section 4), depending on whether the flow is directed away from (counter-flow) or towards (co-flow) the gradient, respectively. we define the direction of the flow such that a negative convection speed points in the positive x-direction, towards the gradient (see fig. [fig:sketch]). the corresponding equations for the population density and the establishment probability are eqs. [eq:drift-eq-c] and [eq:drift-eq-u]; note the change in sign of the convection term between the two equations, which stems from the reverse time direction in the establishment probability. figure [fig:driftcu] shows the effect of convection on both the population density and the establishment probability, for a step-like antibiotic profile and a shallow gradient, obtained by first evaluating the density from eq. [eq:drift-eq-c] and then solving for the establishment probability from eq. [eq:drift-eq-u] using the steady-state density profile (see _methods_). consider first the step-like case. there is a strict distinction between positive and negative convection: negative convection (darker colors in fig. [fig:driftcu]a and c) tends to broaden the profiles away from the step and to increase both the population density and the establishment probability near the step. in fact, it can be shown that for our model both profiles decay only slowly far from the step in the limit of strong negative convection (fig. [fig:driftcu]b, si section 5). by contrast, positive convection away from the antibiotic reduces the wild-type population density in the antibiotic region as well as the establishment probability, giving rise to an exponential decay. the effects of a finite gradient steepness, characterized by the gradient length scale, can be judged by rescaling eq. [eq:drift-eq-c] by that length scale, as in eq. [eq:alpha1]. in shallow gradients the profiles are barely affected as long as convection is not too strong (see fig. [fig:driftcu]c). beyond that regime, on the other hand, convection can alter the profiles strongly: the solution to eq. [eq:alpha2] first becomes insensitive to the diffusion term and finally also to the convection term, such that the population density profile follows the gradient almost perfectly until it drops steeply to zero at the cut-off position given by eq. [eq:cut-off] (fig. [fig:driftcu]d, see si section 6). this cut-off position captures the behavior of the numerical solution very well (fig. [fig:driftcu]d). the establishment probability mirrors the population density as long as convection is not too strong; for strong positive convection, it increases very sharply from 0 to 1 at the cut-off position, leaving only a very small overlap with the density profile. hence, the establishment score in shallow gradients becomes very small for high positive convection speeds.

[figure (fig:driftbrq): the establishment score and the total death rate in our analytical model. (a) both quantities as a function of the gradient length scale; in the appropriate limit, the total death rate approaches the theoretical prediction, eq. [eq:b-analytical]; negative/positive convection shifts both quantities up/down, respectively. (b) plotting both quantities as a function of the convection speed recovers the same trend as in the stepping stone simulations: negative/positive convection increase/decrease both similarly. (c) the treatment efficiency for a step-like antibiotic profile (black) and gradients of different steepness, from shallow (purple) to steep (red).]
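a sketch of how the convection terms can be added to the previous finite-difference scheme, using a first-order upwind discretization and sweeping a few convection speeds to obtain the total death rate, the establishment score, and their ratio. the grid, parameters, and the mapping between the sign of v and flow into or away from the antibiotic are assumptions of this sketch, not the paper's choices (here v > 0 advects toward larger x, i.e., into the drug region as set up below).

```python
import numpy as np
from scipy.integrate import solve_ivp

# same illustrative grid and parameters as in the previous sketch
D, b, d_max, lam = 1.0, 1.0, 2.0, 10.0
Lbox, Npts = 150.0, 301
x = np.linspace(-Lbox / 2, Lbox / 2, Npts)
dx = x[1] - x[0]
d = 0.5 * d_max * (1.0 + np.tanh(x / lam))

def laplacian(f):
    lap = np.zeros_like(f)
    lap[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]
    return lap

def grad_upwind(f, vel):
    """One-sided difference chosen against the flow direction (first-order upwind)."""
    g = np.zeros_like(f)
    if vel >= 0:
        g[1:] = (f[1:] - f[:-1]) / dx
    else:
        g[:-1] = (f[1:] - f[:-1]) / dx
    return g

def steady(rhs, f0):
    return solve_ivp(rhs, (0, 300), f0, method="BDF", rtol=1e-6).y[:, -1]

def profiles(v):
    # wild type: diffusion, advection with velocity v, logistic growth, drug-induced death
    c = steady(lambda t, c: D * laplacian(c) - v * grad_upwind(c, v)
               + b * c * (1 - c) - d * c,
               0.5 * (1 - np.tanh(x / lam)))
    # establishment probability: the convection term enters with the opposite sign
    u = steady(lambda t, u: D * laplacian(u) + v * grad_upwind(u, -v)
               + b * (1 - c) * u - b * u**2,
               np.clip(1 - c, 1e-6, 1))
    return c, u

for v in (-0.5, 0.0, 0.5):
    c, u = profiles(v)
    B_tot = np.sum(d * c) * dx        # total death rate
    R_est = np.sum(c * u) * dx        # establishment score
    print(f"v = {v:+.1f}:  B = {B_tot:8.3f}  R = {R_est:8.3f}  B/R = {B_tot / R_est:6.3f}")
```

sweeping v and the gradient width lam in this sketch is one way to reproduce qualitatively the trends discussed in the text, namely that flow toward the drug raises both quantities while flow away from it suppresses them.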
given the scaling of the population density and establishment probability profiles, we can rationalize the behavior of the total death rate and the establishment score. as shown in fig. [fig:driftbrq]a and b, both quantities increase for negative convection because the density and the establishment probability maintain broader profiles. in shallow gradients, both quantities exhibit a plateau, since the underlying profiles are affected only weakly by negative convection (green). for positive convection, both the total death rate and the establishment score decrease rapidly, by up to two orders of magnitude for our range of parameters. in shallow gradients, the establishment score decreases faster than the total death rate with increasing convection speed because convection affects the density and the establishment probability equally, such that their overlap decreases more rapidly than the density alone (see si section 6). however, for the steepest gradients, both quantities decrease at the same pace until, at high convection speeds, the death rate decreases more quickly than the establishment score because of the hard cut-off in the density profile (fig. [fig:driftbrq]). the behavior of the treatment efficiency (fig. [fig:driftbrq]c) follows directly from these considerations. for convection into the antibiotic, the treatment efficiency changes only slightly with the convection speed, and only in relatively steep gradients. this is in contrast to the case of convection away from the antibiotic: for shallow gradients, the treatment efficiency is approximately equal to 1 for small positive and all negative convection speeds. for strong positive convection, the establishment score becomes very small, such that the treatment efficiency increases rapidly (see si fig. s2 and si section 6). as the step-like case is approached, the treatment efficiency is instead _reduced_ further for large positive convection speeds, where the total death rate decays more quickly than the establishment score. our numerical findings, supplemented by analytical calculations, rationalize all major observations from our simulations. in particular, we find that convection can strongly alter both the total death rate and the establishment score, but in a manner that depends on the steepness of the gradient. although there are subtle differences between the simulated and numerical profiles of the density and the establishment probability (see si section 7), our simple model reproduces even the complex phenomenology of the treatment efficiency, including the shift from an increase in shallow gradients to a decrease in steep ones. in order to fight an infection efficiently, an antibiotic must kill as many wild-type bacteria as possible before resistant mutants arise and survive against genetic drift. if a mutant arises in a low-concentration region, it has little advantage over the wild type and likely goes extinct. by contrast, a mutant arising in a high-concentration background will quickly establish, thus creating a resistant population that can populate regions of the antibiotic gradient inaccessible to the wild type. high population densities in high-concentration regions induce significant wild-type death, as the susceptible wild type cannot survive well in these regions.
on the other hand , high population densities in high - concentration regionsalso maximize the number of resistance mutations that occur in a favorable environment .we have studied this trade - off between drug - induced wild - type death and adaptation in an antibiotic gradient using simulations and analytical theory .our simulations reveal that drug - induced death is highest when the wild - type population density is enriched in high - concentration regions , i.e. , for convection towards the antibiotic , and shallow gradients .however , since each wild - type individual harbors the potential for a resistance mutation to occur , which would then have a big advantage over the wild type in the region where it occurs , the rate of establishment of resistance mutations has the same general behavior .similarly , for strong counter - flow , the drug - induced total death rate is decreased because the wild - type population density is low in high - drug regions , which in turn limits the rate of establishment of new resistance mutations .thus , making ad hoc predictions about the treatment efficiency is difficult .our detailed analysis of population density and establishment probability profiles over a wide range of gradient steepness and convection speeds shows that in shallow gradients , where both and are strongly affected by convection away from the antibiotic , treatment can become an order of magnitude more efficient than in no - flow scenarios .our simulations are a strongly simplified model of real populations .for instance , we have assumed that mutation establish independently , i.e. , we have neglected clonal interference .our predictions are thus strictly speaking only valid when mutations arise rarely enough that they do not interact .typically , this condition is quantified by demanding that , where is the effective population size . in a spatial scenario ,however , potentially interfering mutations can only arise in a spatial region where both the population density and the establishment probability are large .thus , our results remain valid as long as . 
for resistance mutations with small target sizes, mutation rates can be very small , typically less than , and thus our approximation may be accurate even for relatively large local populations .in addition , even when mutation rates are not small , we expect convection and spatial gradients to have the same qualitative effects on the establishment of resistance mutations , and thus our results should remain qualitatively correct in this case .individuals in our simulation merely occupy `` space '' in their specific deme ; in reality , bacteria have a finite size , and a population front can advance through mere growth , even against strong counter - flow .conversely , for strong co - flow , individuals may de - adhere and be carried away from the bulk population , thus founding extant colonies that enjoy large growth rates in the absence of competition for resources .such processes can be studied by generalizing the diffusion term in our model to a long - range dispersal term as used frequently to model epidemics .since long - range dispersal can allow individuals far from the population front to escape the bulk population , we expect it to increase the total establishment probability and thus the rate of adaptation relative to short - range dispersal as discussed here .we have only discussed one - dimensional populations here , but real surface - bound microbial populations typically grow as two - dimensional colonies , with complex spatial patterns . the establishment of beneficial mutations in microbial colonies has recently been discussed . due to the particular strength of genetic drift at the front of such populations , beneficial mutationsfirst have to reach a threshold size ( depending on the strength of the selective advantage ) neutrally before they become established .once the mutant clone reaches the threshold size , the selective advantage of the mutants can deterministically drive them to fixation in the population . during the initial phase , the mutant clone is contained between boundaries with characteristic stochastic properties that are not captured in our one - dimensional model .however , if the threshold size is small , the boundary fluctuations will not have a large impact on the growth of mutant clones . in such cases ,we expect our results to apply also to two - dimensional populations .the emergence of drug resistance remains a topic of significant interest , both from a scientific and a public health point of view .considerable effort is brought forward to create novel antibiotics and new therapy strategies are developed that attempt to limit the emergence of resistance , but more research is needed to understand how resistance evolves in complex spatio - temporal settings like the spatial gradients discussed in this paper .in particular , as we have shown here , convection constitutes an important factor in shaping the adaptation to antibiotics in spatial concentration gradients and should receive more attention from both theorists and experimentalists .research reported in this publication was supported by the national institute of general medical sciences of the national institutes of health under award number r01gm115851 , by a national science foundation career award and by a simons investigator award from the simons foundation ( o.h . 
) .the content is solely the responsibility of the authors and does not necessarily represent the official views of the national institutes of health .the simulations were run on the savio computational cluster resource provided by the berkeley research computing program .suppose particles branch at rate and disappear at rate .then the probability that a particle or its offspring survive until time satisfies (0,t)+\epsilon a \left[1-\left(1-u(0,t)\right)^2\right]\ ] ] the first term on the right - hand side describes the case of neither disappearing nor branching .the second one accounts for the fact that when the initial particle branches then there are two particles , and the probability of survival of at least one lineage is minus the square of both lineages disappearing . using time - translation invariance , we obtain in the limit to extend this equation for spatial degrees of freedom simply account in eq .[ eq : survival - prob1 ] for the random jumps to neighboring lattice sites in a small time interval . [[ solving - the - establishment - probability - for - step - like - concentration - profile ] ] solving the establishment probability for step - like concentration profile ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ the ( ultimate ) establishment probability of a mutation in step - like antibiotic concentration profile given by satisfies eq . 7 in the main text , repeated here for completeness : to solve this equation , we first treat both sides of the step independently , and then fix integration constants by invoking differentiability at . the fixation probability of mutants in the region ( where is determined by the following differential equation : with boundary conditions , and .we exploit a mechanical analogy to solve this equation .we write eq .[ eq : safe - haven - dynamics ] as with a ` potential energy ' .we can then determine the solution for as where the ` total energy ' was chosen to satisfy the boundary condition as .the solution thus follows as implying for large . in the region the new mutations have a net growth rate .the fixation probability is then given as the solution of exploiting the same mechanical analogy , we have the integral : where we have used the condition to set the total energy .we obtain - 1\right\ } , \label{eq : positive - usi}\ ] ] where . for a shallow gradient where , it follows that and hence . for a step, can be obtained directly by simple integration of , . to compute , the integral for is easily performed to yield , but for , the full solution for can not be easily integrated .however , is well approximated by .thus , we obtain the result , where is the exponential integral function . then evaluates to close to the numerical value of .a population growing logistically with an homogeneous rate across demes leads to a fisher wave with a front velocity given by . in the presence of a convection , far away from the gradientthe wild - type population would generate a front advancing at speed .hence , intuitively , if convection is too strong , i.e. , , the population will not be able to establish a steady - state profile . 
to show this formally, we first transform eq .17 by introducing a new variable , such that the convection term is replaced by a death term with effective death rate proportional to , around , the exponential factor is approximately equal to 1 and the nonlinearity can be neglected because .the resulting linear equation then has solutions that can be written as .each mode then satisfies a differential equation with .thus , the lowest mode decays to zero as long as , and all higher modes are suppressed by the coupling of modes through the nonlinearity .therefore , , and hence , undergo an extinction transition for when .( eq . [ eq : negconvection - c ] ) and ( eq . [ eq : negconvection - u ] ) matches the simulation results well .( b ) treatment efficiency for different gradients , with asymptotic result , eq .[ eq : asymptoticq ] ( black ) . ] for negative , we first solve for .for , we can neglect diffusion and find the equilibrium population density by solving the solution for is we thus uncover another characteristic length scale , which quantifies the strength of convection relative to diffusion : if , convection dominates over diffusion , and vice - versa . plugging eq .[ eq : negconvection - c ] into the equation for the establishment probability , eq . 18 in the main text, we find analytical solutions in two cases : for small but finite negative , the convection term and the diffusion term can both be neglected and we recover the quasi - static result . for strong negative convection , , we can neglect only the diffusion term and integrate directly .the result is since , it is easy to see that this approximation does not yield a finite establishment score for infinite domains since mutants arising far away from the antibiotics region still have a relatively high chance of establishing . in practice , the establishment score is always finite because for finite , the decay is slightly faster that . assuming a finite domain ( or a cut - off , e.g. , because of a finite carrying capacity )ranging from some to , we can estimate both the establishment score and the total death rate , and thus the treatment efficiency , as \\ b=&2\nu \log[1 + \xi_0/2\nu]\\ q=&\frac{\log[1 + \xi_0/\nu]}{2 \log[1 + \xi_0/2 \nu]}\gtrsim \frac{1}{2}\end{aligned}\ ] ] where we have assumed in the last step . in our numerical evaluations , and , such that we have . .both in the region of high antibiotic concentration and no antibiotic concentration , the population density approaches its asymptotic value exponentially ( ( a ) and ( b ) , analytical results as dotted lines ) . fitting the numerical curves with the analytical expressions , eqs .[ eq : posxi ] and [ eq : negxi ] , we obtain the apparent exponential prefactors , which match well with the expressions for given in the text . ]positive convection ( ) away from the antibiotic limits diffusion of the wildtype into the antibiotics , thus leading to a reduced wildtype population density in the region of antibiotics . for strong convection and a step - like drug profile, the population density decays exponentially over the characteristic distance .this follows by considering the steady - state equation for , for , is positive while is negative ; because these two terms can not add up to zero , we can not neglect the diffusive term .however , since we expect to be small , we can instead neglect to obtain the equation , such that where , which approximates the numerical solution well ( fig .[ fig : step - posv ] ) . 
for the other region, the corresponding steady-state equation does not have a simple analytical solution. however, we can find by inspection an approximate exponential form whose characteristic length is captured well by a simple expression (see fig. [fig:step-posv]). for finite gradients, we can transform eq. 18 by a suitable substitution; convection then has only a relatively small effect on the density and the establishment probability, and only in the region of strong antibiotics, as long as the convection speed remains below a threshold. for large enough convection speeds this condition is no longer satisfied, and the convection term leads to a cut-off. assuming the cut-off is faster than exponential, we can neglect the non-linearity near the cut-off and thus find a transition to a negative growth rate beyond a certain position. solving for this position, we obtain the approximate cut-off position (eq. [eq:cut-offsi]). in this regime, we can estimate the total death rate and the establishment score analytically by approximating the profiles by their cut-off forms; the result agrees well with the numerical solution for shallow gradients, as shown in fig. [fig:si_compare]. even in this regime, the density and establishment probability retain a small overlap, as shown in fig. 4b in the main text; hence, the treatment efficiency does not diverge.

[figure (fig:comparisonsimnum): comparison of the simulated and numerically computed population density (left column, magenta) and establishment probability (right column, green), for three different antibiotic gradients (step-like, (a) and (e); intermediate gradient, (b) and (f); shallow gradient, (c) and (g)). deviations between simulations and our numerical model can be traced back to a difference in effective convection speed: the population density feels a slightly higher effective convection (d), while the establishment probability profile reflects the actual convection speed in the simulation (h), until the deviations in the density become too large at high convection speeds.]

[figure (fig:comparisonsimnumbr): (a) comparison of the total death rate and the establishment score between simulations and the numerical model; deviations appear mainly at high positive convection speeds in shallow gradients, where the wave front is characterized by strong fluctuations (see fig. [fig:comparisonsimnum] and si section 7). (b) our theory reproduces the observed trends of the treatment efficiency except for the highest positive convection speeds.]

our model for the dynamics of the population density, like the original fisher equation, ignores the discrete nature of individuals. this has been shown to have important consequences, particularly at the tip of the population front, where the number of individuals per deme is small. a heuristic way to implement the discreteness of individuals consists in introducing a cut-off in the growth rate when the local population size becomes too small. such a cut-off alters the front profile, and the resulting correction to the expansion velocity decays only slowly with the carrying capacity. for the carrying capacity used in our simulations, this should lead to a correction to the velocity of almost 25%. quantitative comparison between numerics and simulations reveals such a correction.
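the quoted ``almost 25%'' velocity correction can be checked against a standard cutoff (brunet-derrida-type) estimate for pulled waves. both the functional form of the correction used below and the carrying capacity K = 100 are assumptions, since the paper's exact expression and value are not shown; they are chosen only to illustrate that a number of the quoted order comes out.

```python
import numpy as np

D, b = 1.0, 1.0                       # placeholder diffusion constant and growth rate
K = 100                               # assumed carrying capacity per deme (not stated in the text)

v_fisher = 2.0 * np.sqrt(D * b)       # front velocity of the unobstructed fisher wave
rel_corr = np.pi**2 / (2.0 * np.log(K) ** 2)   # assumed brunet-derrida-type cutoff correction
print(f"fisher velocity: {v_fisher:.3f}")
print(f"relative velocity correction for K = {K}: {rel_corr:.1%}")  # roughly a quarter
```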
in shallow gradients with positive convection , where the population density and establishment probability profiles are effectively cut off at a characteristic position ( see eq .[ eq : cut - offsi ] ) , we compute the effective convection speed by comparing the cut - offs in simulated profiles with the numerical curves .the results are shown in fig .[ fig : comparisonsimnum](d ) and ( h ) : the population density obtained from simulations at convection speed has cut - offs positions corresponding to a numerical convection speed that is shifted up by a constant .for instance , even at , we observe a cut - off consistent with a convection speed .the same shift is not visible in the apparent convection speeds obtained from the establishment profiles ; in this case , , except for very high velocities , where is entirely determined by the population density profile ( which deviates from the analytical prediction , see fig .[ fig : comparisonsimnum]c ) . despite these subtle deviations between our model and simulations ,we expect good agreement as long as the population density is not too small ; since the establishment probability is typically low where the wildtype density is high and vice - versa , and the establishment score depends only on the product of and , regions of small population density should not significantly impact the integral in most cases .[ fig : comparisonsimnumbr ] shows that there is indeed good agreement between the establishment score , the total death rate and the treatment efficiency obtained from simulations and from numerical evaluation of our model .ling s , hu z , yang z , yang f , li y , chen q , gong q , wu d , li w , tian x , hao c , hungate e a , daniel v t , hudson r r , li w h , lu x , ling s , hu z , yang z , yang f , li y , lin p , chen k , dong l and cao l 2016 _ proceedings of the national academy of sciences _ * 113 * e663e663
since penicillin was discovered about 90 years ago , we have become used to using drugs to eradicate unwanted pathogenic cells . however , using drugs to kill bacteria , viruses or cancer cells has the serious side effect of selecting for mutant types that survive the drug attack . a key question therefore is how one could kill as many cells as possible for a given acceptable risk of drug resistance evolution . we address this general question in a model of drug resistance evolution in spatial drug gradients , which recent experiments and theories have suggested as key drivers of drug resistance . importantly , our model also includes the influence of convection , resulting for instance from blood flow . using stochastic simulations , we quantify the trade - off between the killing of wild - type cells and the rise of resistance mutations : shallow gradients and convection into the antibiotic region promote wild - type death , at the cost of increasing the establishment probability of resistance mutations . we can explain these observed trends by modeling the adaptation process as a branching random walk . our analysis also reveals that the trade - off between death and adaptation depends on a few key control parameters , which compare the strengths of the spatial drug gradient with the convection and random dispersal terms . our results show in particular that convection can have a momentous effect on the rate of establishment of new mutations , and may heavily impact the efficiency of antibiotic treatment .
the recent paper by dunkel and hilbert titled `` consistent thermostatistics forbids negative absolute temperatures '' has triggered a vigorous debate on whether the boltzmann entropy ( alias the surface entropy , eq .[ eq : somega ] ) or the gibbs entropy ( alias the volume entropy , eq .[ eq : somega ] ) is the more appropriate expression for the thermodynamic entropy of thermally isolated mechanical systems .the thermodynamic consistency of the gibbs entropy has been a leitmotiv that sporadically recurred in the classical statistical mechanics literature .it started with helmholtz , boltzmann , and gibbs , it continued with p. hertz einstein , and others , until it has been reprised recently by various authors .this line of research culminated with the work of ref . , showing that the gibbs entropy complies with all known thermodynamic laws and unveiling the mistakes apparently incurred into the arguments of its opponents . while the work of ref . is characterised by a top - down approach ( namely , one postulates an entropy expression and then investigates compliance with the thermodynamic laws ) here we adopt instead a bottom - up approach : we begin from the thermodynamic laws and construct the expression of the microcanonical entropy on them .in particular we base our construction on the following two fundamental pillars of thermodynamics .1 ) the second law of thermodynamics as formulated by clausius for quasi - static processes , namely , , which says that is an integrating factor for , and identifies the entropy with the associated primitive function .2 ) the equation of state of an ideal gas . our construction , based on the mathematics of differential forms , leads uniquely to the gibbs entropy ; see sec .[ sec : construction ] . as a consequencethe adoption of any expression of entropy other than the gibbs entropy , e.g. , the boltzmann entropy , may lead to inconsistency with the fundamental pillars .this will be illustrated with a macroscopic collection of spins in a magnetic field . as we will see the boltzmann entropy severely fails to predict the correct value of the magnetization , andeven predicts a nonexistent phase transition in the thermodynamic limit , see sec .[ subsec : failure ] .this provides a compelling reason for discarding the boltzmann entropy at once .the present work thus complements the work of ref . by stating not only the compliance of the gibbs entropy with the thermodynamic laws , but also its _ necessity _ and _ uniqueness _ : thermodynamic entropy has to be expressed by means of gibbs formula and no other expression is admissible .together with ref . the present work appears to settle the debated issue .we recall the definitions of boltzmann and gibbs entropies within the microcanonical formalism : \label{eq : somega}\ , , \\s_g(e,\bm{\lambda } ) & = k_b \ln \omega(e,\bm{\lambda } ) \label{eq : somega } \ , , \end{aligned}\ ] ] where \label{eq : phi}\end{aligned}\ ] ] denotes the volume of the region of the phase space of the system with energy not above .the symbol stand for some arbitrary constant with units of energy . here , denotes the hamilton function of either a classical or a quantum system with degrees of freedom and denotes external parameters , e.g. the volume of a vessel containing the system or the value of an applied magnetic or electric field . is their number . 
in the case of continuous classical systems the symbol stands for an integral over the phase space normalized by the appropriate power of planck s constant and possible symmetry factors . for classical discrete systems, denotes a sum over the discrete state space . for quantum systems the trace over the hilbert space .the symbol stands for the heaviside step function .the symbol stands for the density of states , namely the derivative of with respect to : = \frac{\partial \omega(e,\bm{\lambda } ) } { \partial e}\ , .\label{eq : omega}\end{aligned}\ ] ] here it is assumed that the spectrum is so dense that the density of states can be considered a smooth function of .the main objective is to link thermodynamic observables , i.e. the forces and temperature , to the quantities which naturally pertain to both the mechanical hamiltonian description and the thermodynamic description , i.e. , the energy and the external parameters .as we will see the entropy , will follow automatically and uniquely once the s and are linked .we begin with the thermodynamic forces , whose expression is universally agreed upon : with denoting the ensemble average . within the microcanonical framework theseare expressed as : } { \omega(e,\bm{\lambda } ) } \right ) \ , .\label{eq : fmicro}\end{aligned}\ ] ] with the expression of we can construct the differential form representing heat : is a differential form in the dimensional space .it is easy to see that , in general is not an exact differential ; see , e.g. , ref . .before we proceed it is important to explain the meaning of within the microcanonical formalism .the idea behind the microcanonical ensemble is that and are controllable parameters .and are controllable parameters in the canonical formalism ] accordingly , if the system is on an energy surface identified by , the idea is that the experimentalist is able to steer it onto a nearby energy shell . in practicethis is can be a difficult task .it can be accomplished , in principle , in the following way : the experimentalist should first change the parameters by in a quasi - static way .this induces a well defined energy change , which is the work done on the system .this brings the system to the energy shell . to bring the system to the target shell the experimentalist must now provide the energy by other means while keeping the fixed .for example she can shine targeted amounts of light on the system , from a light source .after the energy is absorbed by the system ( or emitted , depending on its sign ) , no other interaction occurs and the system continues undisturbed to explore the target shell . 
in this frameworkthe light source acts as a reservoir of energy , and the quantity , identified as heat , represents the energy it exchanges .according to the second law of thermodynamics in the formulation given by clausius the inverse temperature is an integrating factor for .this fundamental statement is often called the _ heat theorem _ .we recall that an integrating factor is a function such that equals the total differential of some function called the associated primitive , or in brief , just the primitive .primitives are determined up to an unimportant constant , which we will disregard in the following .entropy is defined in thermodynamics as the primitive associated with clausius s integrating factor : in searching for thermodynamically consistent expressions of temperature within the microcanonical formalism , one should therefore look among the integrating factors of the microcanonically calculated heat differential in ( [ eq : deltaq ] ) . it must be remarked that it is not obvious that one integrating factor exists , because the existence of integrating factors is not guaranteed in spaces of dimensions higher than .so the existence of a mechanical expression for thermodynamic temperature ( hence of the entropy ) is likewise not obvious .it turns out however that an integrating factor for the differential in ( [ eq : deltaq ] ) always exists . finding it is straightforward if one re - writes the forces in the following equivalent form } { \omega(e,\bm{\lambda } ) } \right ) \nonumber\\ & = \frac{1}{\omega(e,\bm{\lambda } ) } \tr\ , \left ( \frac{\partial \theta[e - h(\bm{\xi};\bm{\lambda})]}{\partial \lambda_i}\right ) \nonumber \\ & = \frac{1}{\omega(e,\bm{\lambda } ) } \frac{\partial \omega(e,\bm{\lambda})}{\partial \lambda_i}\ , .\label{eq : fomega}\end{aligned}\ ] ] this follows from the fact that dirac s delta is the derivative of heaviside s step function . with this , eq . ( [ eq : deltaq ] ) reads it is now evident that is an integrating factor : being the associated primitive .this does not mean that should be identified with temperature and accordingly with entropy .in fact if an integrating factor exists , this identifies a whole family of infinitely many integrating factors . to find the family of integrating factors , consider any differentiable function with non null derivative .its total differential reads : \delta q\ , .\label{eq : dg}\end{aligned}\ ] ] this means that any function of the form is an integrating factor for the heat differential , and is the associated primitive . in factall integrating factors must be of the form in eq .( [ eq : beta ] ) , which is equivalent to saying that all associated primitives must be of the form to prove that all primitives must be of the form in eq .( [ eq : g(phi ) ] ) we consider the adiabatic manifolds , namely the dimensional manifolds in the space identified by the condition that , i.e. , .note that the density of states is a strictly positive function .this is because increasing the energy results in a strictly larger enclosed volume in the phase space .thus , the adiabatic manifolds are characterised by the condition , ( i.e. 
, any path occurring on them involves no heat exchanges ) , and each value of identifies one and only one adiabatic manifold .any primitive associated with an integrating factor stays constant on the adiabatic manifolds : unless diverges , which we exclude here .hence the only way by which any primitives and can both be constant on all adiabatic manifolds is that is a function of , as anticipated .note that this rules out automatically the surface entropy $ ] because , in general , the density of states can not be written as a function of the phase volume .this is clear for example in the case of an ideal monoatomic gas in a vessel of volume , for which and ; see below . our derivation above tells us that the second law requires that the entropy , which is one of the primitives , has to be a function of the phase volume , but does not tell us which function that is . forthat we need to identify which , among the infinitely many integrating factors , corresponds to clausius s notion of temperature .we remark that once the function is chosen , it has to be one and the same for all systems .this is because by adjusting the external parameters , whose number and physical meaning is completely unspecified , one can transform any hamiltonian into any other .this fact reflects the very essence of clausius s heat theorem , namely , that there exists a unique and universal scale of temperature which is one and the same for all systems .we proceed then to single out the function that is consistent with the notion of temperature of an ideal monoatomic gas in a vessel of volume , taking its equation of state as the definition .the hamilton function of an ideal monoatomic gas reads with representing the box potential confining the gas within the volume .the phase volume reads hence , using eq .( [ eq : fomega ] ) , we obtain for the pressure , : . confronting this with the ideal gas lawwe obtain consistently with what is known from thermodynamics . since in this case, we readily recognize that , namely , that is , which singles out the gibbs entropy , as the primitive associated with the integrating factor corresponding to the thermodynamic absolute temperature . in sum : if one accepts the microcanonical expression ( [ eq : f ] ) of the forces , the gibbs entropy represents the _ only _ expression of thermodynamic entropy which is consistent with the second law of thermodynamics , eq .( [ eq : deltaq ] ) , and the equation of state of the ideal gas .as mentioned above the density of states is definite positive , also , by definition , the volume is non - negative .hence their ratio is non - negative .this means that , within the _ microcanonical _ formalism , negative temperatures are inadmissible .often the present microcanonical scenario is confused with the more common canonical scenario , where the system stays in a canonical state at all times during a transformation , e.g. , ref .this is unfortunate because , as we see below , microcanonical and canonical descriptions are not equivalent for those finite spectrum systems usually discussed in this context .the same construction presented above can be repeated for systems obeying statistics other than microcanonical . 
if applied to the canonical ensemble , , ( with being the canonical partition function ) the canonical expression for the forces , along with the equation of state of the ideal gas , _uniquely _ identifies the canonical parameter as the integrating factor , and its associated primitive as the only thermodynamically consistent expressions of inverse temperature and entropy within the canonical formalism ., that is , the canonical entropy coincides with the gibbs - von neumann information of the canonical distribution . ] in the canonical formalism nothing formally constraints the sign of to be definite .a spin system in a canonical state at negative will have a positive internal energy .the same system in the microcanonical state of energy will , however , have a positive thermodynamic temperature .this evidences the inequivalence of canonical and microcanonical ensembles in systems with a finite spectrum . in an attempt to justify the correctness of the boltzmann entropy , frenkel and warren ,provided a construction which leads to the boltzmann entropy .it must be stressed that the construction presented by frenkel and warren is approximate , and valid only under the assumption that the saddle point approximation holds .this approximation holds only when the density of states increases exponentially with the energy . under this assumption , however , the density of states and phase volume coincide . so the construction of frenkel and warren can not shed light onto which entropy expression is appropriate in the case when they do not coincide , which is indeed the very case of practical interest .in contrast , the present construction is exact , i.e. , it holds regardless of the functional dependence of the density of states on energy .accordingly it says that in any case , independent of whether equivalence of the two entropies holds , the volume entropy is the consistent choice . for continuous classical hamiltonian systems ,thanks to the equipartition theorem , the thermodynamic temperature is identical with the equipartition temperature : where the average is the microcanonical average on the shell .this provides further evidence that the choice conforms to the common notion of temperature of any classical system , not just the ideal monoatomic gas .we further remark that the equipartition theorem also identifies the temperature in eq .( [ eq : partiallogphi ] ) as an intensive quantity , namely a property that is equally shared by all subsystems .we emphasize that at variance with previous approaches to the foundations of the gibbs entropy , which postulated that the thermodynamic temperature is the equipartition temperature , here we have instead postulated only that temperature is the integrating factor that is consistent with the ideal gas law and have obtained the coincidence with the equipartition temperature as an aftermath .the advantage of the present approach is evident : it applies to any microcanonical system , even those for which there is no equipartition theorem ( e.g. , quantum systems ) . 
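the equipartition statement used above can be made explicit ; for a confining classical hamiltonian the standard microcanonical identity ( of khinchin type ) reads , in the present notation ,

\[
\Big\langle \xi_n\,\frac{\partial H}{\partial \xi_n}\Big\rangle_E \;=\; \frac{\Omega(E)}{\omega(E)} \;=\; k_B T_G
\qquad\text{for every canonical coordinate or momentum } \xi_n ,
\]

so the gibbs temperature coincides with the equipartition temperature whenever the latter exists .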
at variance with other approaches we chose as starting point the expression for the microcanonical forces ( [ eq : fmicro ] ) which is universally agreed upon and built our construction on that firm ground .the salient point of our argument is the identity ( [ eq : fomega ] ) expressing the microcanonical forces in terms of the partial derivatives of .the identity ( [ eq : fomega ] ) alone has as a consequence that the entropy must be of the form with some with non null derivative .in fact , for any one finds the forces , to be identical to the microcanonical forces , eq . ( [ eq : fmicro ] ) here is a shorthand notation for .if one employs an entropy expression that is not of the form , e.g. , the boltzmann entropy , one can well end up in wrongly evaluating the forces .this happens , for example , in the case of a large collection of non interacting spins in a magnetic field , at energy , that is the prototypical example of the emergence of negative boltzmann temperature .the hamiltonian reads here plays the role of the external parameter , is depending on whether the spin points parallel ( up ) or antiparallel ( down ) to the field , and is the magnetic moment of each spin . at energy , the magnetization is given by ( [ eq : fmicro ] ) : the number of states with spins up is the number of states with no more than spins up is using the relation , and treating as a continuous variable under the assumption that is very large , according to standard procedures , we observe that denotes the number of states with energy between and .the density of states is therefore : and the number of states with energy below is and magnetization of a system of non - interacting spins , as predicted by the boltzmann entropy , and the gibbs entropy here .only gibbs magnetization conforms with the physical magnetization . ]figure [ fig : fig1 ] shows the gibbs and boltzmann temperatures and magnetizations as functions of calculated with for larger values of qualitatively similar plots are obtained .a very unphysical property of is that with the flip of a single spin it jumps discontinuously from to in the thermodynamic limit .the usual reply to such a criticism would be , following , to say that one should look instead at the quantity , which displays no divergence .no way out is however possible if one considers the magnetization . 
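before turning to the magnetization comparison , the temperature statement can be checked directly with a short numerical sketch . the code below is written for this note ( it is not the paper's code ) ; it sets k_B = 1 and \mu B = 1 , so the energy of a configuration with nUp spins parallel to the field is E = N - 2 nUp , and it evaluates the boltzmann and gibbs temperatures from the exact state counts .

/**
 * Gibbs vs. Boltzmann temperature for N non-interacting spins in a magnetic field.
 * Illustrative sketch only; units chosen so that k_B = 1 and mu*B = 1, hence the
 * energy with nUp spins up (parallel to the field) is E = N - 2*nUp and the level
 * spacing is 2. These conventions are assumptions made for the demo.
 */
public class SpinTemperatures {

    /** Natural log of the binomial coefficient C(n, k), computed term by term. */
    static double logBinomial(int n, int k) {
        double s = 0.0;
        for (int i = 1; i <= k; i++) {
            s += Math.log(n - k + i) - Math.log(i);
        }
        return s;
    }

    /** Numerically stable log(exp(a) + exp(b)). */
    static double logSumExp(double a, double b) {
        double m = Math.max(a, b);
        return m + Math.log(Math.exp(a - m) + Math.exp(b - m));
    }

    public static void main(String[] args) {
        int n = 1000; // number of spins (arbitrary demo value)

        // ln omega: log of the number of states at the level with 'up' spins up.
        double[] logSurface = new double[n + 1];
        for (int up = 0; up <= n; up++) logSurface[up] = logBinomial(n, up);

        // ln Omega: log of the number of states with energy <= E(up).
        // E decreases as 'up' grows, so accumulate from up = n downwards.
        double[] logVolume = new double[n + 1];
        logVolume[n] = logSurface[n];
        for (int up = n - 1; up >= 0; up--) {
            logVolume[up] = logSumExp(logVolume[up + 1], logSurface[up]);
        }

        System.out.println("  E/N        T_Boltzmann      T_Gibbs");
        for (int up = 1; up < n; up++) {
            double energyPerSpin = (n - 2.0 * up) / n;
            // 1/T_B = d(ln omega)/dE by central difference; E(up+1) - E(up-1) = -4.
            double betaBoltzmann = (logSurface[up + 1] - logSurface[up - 1]) / (-4.0);
            // 1/T_G = omega/Omega; the level spacing 2 turns the state count at the
            // level into a density of states per unit energy.
            double betaGibbs = Math.exp(logSurface[up] - logVolume[up]) / 2.0;
            System.out.printf("%7.3f  %15.4f  %12.4f%n",
                    energyPerSpin, 1.0 / betaBoltzmann, 1.0 / betaGibbs);
        }
        // Expected behaviour: T_Gibbs stays positive over the whole band, while
        // T_Boltzmann diverges near E = 0 and turns negative for E > 0.
    }
}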
as can be seen from the figure, only reproduces the exact result , eq .( [ eq : exact ] ) whereas the magnetization given by is drastically off , and even predicts a nonexistent and unphysical phase transition , in the thermodynamic limit , where the magnetization abruptly jumps from to as a single spin flips from to .the results in the figure are also corroborated by analytical calculations .using eqs .( [ eq : tmb ] ) and ( [ eq : tmg ] ) with eqs .( [ eq : omega ] ) and ( [ eq : omega ] ) we obtain thus the discrepancy between the boltzmann magnetization and the physical magnetization is given by the negative boltzmann thermal energy rescaled by the applied magnetic field : since diverges around the zero energy in the thermodynamic limit , so does the discrepancy .note that the discrepancy also diverges as the intensity of the applied magnetic field decreases .it is interesting to notice that , while in the thermodynamic limit approaches for , the same is not true for , which distinctly deviates from for both and .this unveils the fact , apparently previously unnoticed , that boltzmann and the gibbs entropy are not equivalent even in the lower part of the spectrum of large spin systems .lines do not coincide with the adiabats . ]equation ( [ eq : mb ] ) is a special case of a general relation linking the boltzmann forces ( ) and the gibbs forces ( i.e. , the thermodynamic forces ) , reading : this equation accompanies a similar relation linking boltzmann and gibbs temperatures with being the heat capacity. equations ( [ eq : mbvsmg ] ) and ( [ eq : tbvstg ] ) follow by taking the derivative with respect to of and respectively . the reason for the thermodynamic inconsistency of ( consistency of ) can also be understood in the following way .consider the heat differential .clearly is an integrating factor : .hence is a primitive .accordingly the adiabats are determined by the equation and the entropy must be some monotonic function of , that is of . 
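one consistent way of writing the two relations referred to above is the following ( the notation is mine : F^{(G)}_i is the force obtained from the gibbs entropy , F^{(B)}_i the one obtained from the boltzmann entropy , and C=\partial E/\partial T_G the heat capacity ; both lines follow from S_G=k_B\ln\Omega , S_B=k_B\ln\omega and \omega=\partial\Omega/\partial E ) :

\[
F^{(B)}_i \;=\; F^{(G)}_i + k_B T_B\,\frac{\partial F^{(G)}_i}{\partial E},
\qquad
T_B \;=\; \frac{T_G}{\,1-k_B/C\,},\qquad C\equiv\frac{\partial E}{\partial T_G}.
\]

for the spin system , where \Omega depends on E and B only through the ratio E/B , the gibbs magnetization is M_G=-E/B , and the first relation then gives M_B-M_G=-k_B T_B/B , i.e. the ( negative ) boltzmann thermal energy rescaled by the applied field , as stated above .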
by inspecting eqs .( [ eq : omega ] ) and ( [ eq : omega ] ) we see that the phase volume is a monotonic function of while the density of states is not a function of ; hence is thermodynamically inconsistent .the inequivalence of and is most clearly seen by plotting the iso- lines in the thermodynamic space ; see fig .[ fig : fig2 ] .note that the adiabats , eq .( [ eq : adiabats ] ) are straight lines passing through the origin .the iso- lines instead predict a completely different structure of the adiabats .note in particular that the iso- lines are closed .this evidences their thermodynamical inconsistency .summing up : the boltzmann entropy severely fails to accomplish one of its basic tasks , namely , reproducing the correct value of the thermodynamic forces and of heat .we have shown that , within the microcanonical formalism there is only one possible choice of entropy that is consistent with the second law and the equation of state of an ideal gas , namely , the gibbs entropy .discarding the gibbs entropy in favour of the boltzmann entropy , may accordingly result in inconsistency with either of those two pillars .for the great majority of large thermodynamic systems , gibbs and boltzmann entropies practically coincide ; hence there is no problem regarding which we choose .however , there are cases when the two do not coincide : examples are spin systems and point vortex gases , where boltzmann temperature , in disagreement with gibbs temperature , has no definite sign , and the boltzmann entropy can largely fail to predict correct values of thermodynamic forces . it must be stressed that the demonstrated failure of the boltzmann entropy to reproduce the thermodynamic forces is not restricted to small systems , where the failure was already known to occur , but survives , and even becomes more prominent , in the thermodynamic limit , where the boltzmann entropy predicts an unphysical and nonexistent phase transition in the magnetization of a system of non - interacting spins in a magnetic field . in the light of the present results , together with the established fact that the gibbs entropy conforms with all thermodynamic laws ,the issue of which entropy expression is correct is apparently now fully and ultimately settled .the author is indebted to jrn dunkel , stefan hilbert , peter talkner and especially peter hnggi , for the many discussions we had on this topic for years .this research was supported by a marie curie intra european fellowship within the 7th european community framework programme through the project nequflux grant no . 623085 and by the cost action no .mp1209 `` thermodynamics in the quantum regime . ''
a question that is currently highly debated is whether the microcanonical entropy should be expressed as the logarithm of the phase volume ( volume entropy , also known as the gibbs entropy ) or as the logarithm of the density of states ( surface entropy , also known as the boltzmann entropy ) . rather than postulating them and investigating the consequence of each definition , as is customary , here we adopt a bottom - up approach and construct the entropy expression within the microcanonical formalism upon two fundamental thermodynamic pillars : ( i ) the second law of thermodynamics as formulated for quasi - static processes : is an exact differential , and ( ii ) the law of ideal gases : . the first pillar implies that entropy must be some function of the phase volume . the second pillar singles out the logarithmic function among all possible functions . hence the construction leads uniquely to the expression , that is the volume entropy . as a consequence any entropy expression other than that of gibbs , e.g. , the boltzmann entropy , can lead to inconsistencies with the two thermodynamic pillars . we illustrate this with the prototypical example of a macroscopic collection of non - interacting spins in a magnetic field , and show that the boltzmann entropy severely fails to predict the magnetization , even in the thermodynamic limit . the uniqueness of the gibbs entropy , as well as the demonstrated potential harm of the boltzmann entropy , provide compelling reasons for discarding the latter at once .
determining the conditions for epidemic extinction is an important public health problem .global eradication of an infectious disease has rarely been achieved , but it continues to be a public health goal for polio and many other diseases , including childhood diseases .more commonly , disease extinction , or fade out , is local and may be followed by a reintroduction of the disease from other regions .extinction may also occur for individual strains of a multistrain disease , such as influenza or dengue fever .since extinction occurs in finite populations , it depends critically on local community size . moreover , it is important to know how parameters affect the chance of extinction for predicting the dynamics of outbreaks and for developing control strategies to promote epidemic extinction .the determination of extinction risk is also of interest in related fields , such as evolution and ecology .for example , in the neutral theory of ecology , bio - diversity arises from the interplay between the introduction and extinction of species . in general, extinction occurs in discrete , finite populations undergoing stochastic effects due to random transitions or perturbations .the origins of stochasticity may be internal to the system or may arise from the external environment .small population size , low contact frequency for frequency - dependent transmission , competition for resources , and evolutionary pressure , as well as heterogeneity in populations and transmission , may all be determining factors for extinction to occur .extinction risk is affected by the nature and strength of the noise , as well as other factors , including outbreak amplitude and seasonal phase occurrence . for large populations ,the intensity of internal population noise is generally small .however , a rare , large fluctuation can occur with non - zero probability and the system may be able to reach the extinct state . for respiratory diseases such as sars , super - spreading may account for these large stochastic fluctuations .since the extinct state is absorbing due to effective stochastic forces , eventual extinction is guaranteed when there is no source of reintroduction . however , because fade outs are usually rare events in large populations , typical time scales for extinction may be extremely long .stochastic population models of finite populations , which include extinction processes , are effectively described using the master equation formalism .stochastic master equations are commonly used in statistical physics when dealing with chemical reaction processes and predict probabilities of rare events occurring at certain times . 
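for reference , the generic master equation for a population whose state n changes by increments r at rates W_r(n) has the standard form

\[
\frac{\partial P(n,t)}{\partial t} \;=\; \sum_{r}\Big[\,W_r(n-r)\,P(n-r,t)\;-\;W_r(n)\,P(n,t)\,\Big],
\]

which is the starting point for the large - fluctuation analysis sketched in the following paragraphs .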
for many problems involving extinction in large populations ,if the probability distribution of the population is approximately stationary , the probability of extinction is a function that decreases exponentially with increasing population size .the exponent in this function scales as a deterministic quantity called the action .it can be shown that a trajectory that brings the system to extinction is very likely to lie along a most probable path , called the optimal extinction trajectory or optimal path .it is a remarkable property that a deterministic quantity such as the action can predict the probability of extinction , which is inherently a stochastic process .locating the optimal path is desirable because the quantity of interest , the extinction rate , depends on the probability to traverse this path , and the effect of a control strategy on extinction rate can be determined by its effect on the optimal path . by employing an optimal path formalism , we convert the stochastic problem to a mechanistic dynamical systems problem .in contrast to approaches based on diffusive processes that are valid only in the limit of large system sizes , this dynamical systems approach can give accurate estimates for the extinction time even for small populations if the action is sufficiently large .additionally , unlike other methods that are used to estimate lifetimes , this approach enables one both to estimate lifetimes and to draw conclusions about the path taken to extinction .this more detailed understanding of how extinction occurs may lead to new stochastic control strategies . in this article, we show that locating the optimal extinction trajectory can be achieved naturally by evolving a dynamical system that converges to the optimal path .the method is based on computing finite - time lyapunov exponents ( ftle ) , which have previously been used to find coherent structures in fluid flows .the ftle provides a measure of how sensitively the system s future behavior depends on its current state .we argue that the system displays maximum sensitivity near the optimal extinction trajectory , which enables us to dynamically evolve toward the optimal escape trajectory using ftle calculations . for several models of epidemics that contain internal or external noise sources ,we illustrate the power of our method to locate optimal extinction trajectories .although our examples are taken from infectious disease models , the approach is general and is applicable to any extinction process or escape process .to introduce our idea of dynamically constructing an optimal path to extinction in stochastic systems , we show its application to a stochastic susceptible - infectious - recovered ( sir ) epidemiological model .details of the sir model can be found in sec . 1 of the electronic online supplement .figure [ fig : sir ] shows the probability density of extinction prehistory in the si plane .the probability density was numerically computed using stochastic sir trajectories that ended in extinction .trajectories are aligned at their extinction point . from the extinction point ,the prehistory of each trajectory up to the last outbreak of infection is considered .small fluctuations in the infectious population are not considered in identifying the last outbreak . 
in this way, we restrict the analysis to the interval between the last large outbreak of infection and the extinction point .the resulting ( s , i ) pairs of susceptible and infectious individuals are then binned and plotted in the si plane .the resulting discrete density has been color coded so that the brighter regions correspond to higher density of trajectories .the figure shows that , among all the paths that the stochastic system can take to reach the extinct state , there is one path that has the highest probability of occurring .this is the optimal path to extinction .one can see that the optimal path to extinction lies on the peak of the probability density of the extinction prehistory .it should be noted that extinction for the stochastic sir model has been studied previously .the optimal path can be obtained using methods of statistical physics . in figure[ fig : sir ] , the numerical prediction of the entire optimal trajectory for the stochastic sir system has been overlaid on the probability density of extinction prehistory that was found using stochastic simulation .the trajectory spirals away from the endemic state , with larger and larger oscillations until it hits the extinct state .the agreement between the stochastically simulated optimal path to extinction and the predicted optimal path is excellent .the curve of figure [ fig : sir ] is obtained by recasting the stochastic problem in a deterministic form .the evolution of the probability of finding a stochastic system in a given state at a given time is described by the master equation .solving the master equation would provide a complete description of the time evolution of the stochastic system , but in general it is a difficult task to obtain explicit solutions for the master equation .thus , one generally resorts to approximations to the solution ; i.e. , one considers an ansatz for the probability density . in this case , since extinction of finite populations is a rare event , we will be interested in examining events that occur in the tail of the probability distribution .therefore , the distribution is assumed to take the form where is the probability density of finding the system in the state at time , is the size of the population , is the normalized state ( e.g. , in an epidemic model , the fraction of the population in the various compartments ) , and is a deterministic state function known in classical physics as the action .equation ( [ e : action ] ) describes the relationship between the action and the probability density and is based on an assumption for how the probability scales with the population size .the action is the negative of the natural log of the stationary probability distribution divided by the population size .therefore , the probability ( if we normalize the population ) is roughly given by the exponential of the action .intuitively , equation ( [ e : action ] ) expresses the assumption that the probability of occurrence of extreme events , such as extinction , lies in the tails of the probability distribution , which falls steeply away from the steady state .this approximation leads to a conserved quantity that is called the hamiltonian . from the hamiltonian, one can find a set of conservative ordinary differential equations ( odes ) that are known as hamilton s equations .these odes describe the time evolution of the system in terms of state variables , which are associated with the population in each compartment . 
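written out , the ansatz and the hamiltonian structure described above take the standard large - deviation ( wkb ) form ; for a markov jump process with increments r and scaled rates w_r(x) this reads ( given here as a generic reference , not as a quotation of the paper's equations )

\[
\rho(\mathbf{x},t)\;\asymp\;e^{-N\,\mathcal{S}(\mathbf{x},t)},\qquad
\mathbf{p}=\frac{\partial\mathcal{S}}{\partial\mathbf{x}},\qquad
\mathcal{H}(\mathbf{x},\mathbf{p})=\sum_{r} w_r(\mathbf{x})\left(e^{\,\mathbf{p}\cdot\mathbf{r}}-1\right),
\]
\[
\dot{\mathbf{x}}=\frac{\partial\mathcal{H}}{\partial\mathbf{p}},\qquad
\dot{\mathbf{p}}=-\frac{\partial\mathcal{H}}{\partial\mathbf{x}},\qquad
\mathcal{S}=\int \mathbf{p}\cdot d\mathbf{x}\ \ \text{along the zero - energy }(\mathcal{H}=0)\text{ optimal path}.
\]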
for the sir example , is the vector .in addition to the state variables , the equations contain conjugate momenta variables , .the conjugate momenta , or noise , account for the uncertainty associated with the system being in a given state at a given time due to the stochastic interactions among the individuals of the population .these odes can be constructed from information in the master equation about the possible transitions and transition rates in the system .details can be found in [ sec : app - tlf ] .integration of the odes with the appropriate boundary conditions will then give the optimal evolution of the system under the influence of the noise .boundary conditions are chosen to be fixed points of the system .a typical case is shown schematically in figure [ fig : hyperbolic]a .deterministically , the endemic state is attracting and the extinct state repelling .however , introducing stochasticity allows the system to leave the deterministic manifold along an unstable direction of the endemic state , corresponding to nonzero noise .stochasticity leads to an additional extinct state which arises due to the general non - gaussian nature of the noise . for the extinction process of figure [ fig : sir ] ,boundary conditions were the system leaving the endemic steady state and asymptotically approaching the stochastic extinct state .in general , the optimal extinction path is an unstable dynamical object , and this reflects extinction as a rare event .this has led many authors to consider how extinction rates scale with respect to a parameter close to a bifurcation point , where the dynamics are very slow . for an epidemic modelthis means that the reproductive rate should be greater than but very close to .however , most real diseases have larger than 1.5 , which translates into a faster growth rate from the extinct state . in general , in order to obtain analytic scaling results , one must obtain the odes for the optimal path either analytically ( using the classical theory of large fluctuations mentioned within this section and described in detail in [ sec : app - tlf ] ) or numerically ( using shooting methods for boundary value problems ) .this task may be impossible or extremely cumbersome , especially when the system is far from the bifurcation point . in the following sectionwe demonstrate how to evolve naturally to the optimal path to extinction using a dynamical systems approach .we consider a velocity field which is defined over a finite time interval and is given by hamilton s equations of motion .such a velocity field , when starting from an initial condition , has a unique solution .the continuous dynamical system has quantities , known as lyapunov exponents , which are associated with the trajectory of the system in an infinite time limit , and which measure the average growth rates of the linearized dynamics about the trajectory . to find the finite - time lyapunov exponents ( ftle ) , one computes the lyapunov exponents on a restricted finite time interval . for each initial condition, the exponents provide a measure of its sensitivity to small perturbations . therefore , the ftle is a measure of the local sensitivity to initial data .details regarding the ftle can be found in sec . 2 of the electronic online supplement .the ftle measurements can be shown to exhibit `` ridges '' of local maxima .the ridges of the field indicate the location of attracting ( backward time ftle field ) and repelling ( forward time ftle field ) structures . 
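the ftle used below is the standard finite - time stretching exponent of the flow map ; writing \phi_{t_0}^{t_0+T} for the map that advances an initial condition by time T under hamilton's equations , the definition is

\[
\sigma_{T}(\mathbf{x}_0,t_0)\;=\;\frac{1}{|T|}\,
\ln\sqrt{\lambda_{\max}\!\left(\left[\frac{d\phi_{t_0}^{t_0+T}}{d\mathbf{x}}(\mathbf{x}_0)\right]^{\!\top}
\frac{d\phi_{t_0}^{t_0+T}}{d\mathbf{x}}(\mathbf{x}_0)\right)} ,
\]

and the forward - and backward - time fields are obtained by taking T positive or negative , respectively .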
in two - dimensional ( 2d ) space ,the ridge is a curve which locally maximizes the ftle field so that transverse to the ridge one finds the ftle to be a local maximum .what is remarkable is that the ftle ridges correspond to the optimal path trajectories , which we heuristically argue in sec . 3 of the electronic online supplement .the basic idea is that since the optimal path is inherently unstable , the ftle shows that , locally , the path is also the most sensitive to initial data .figure [ fig : hyperbolic]b shows a schematic that demonstrates why the optimal path has a local maximum to sensitivity . if one chooses an initial point on either side of the path near the endemic state , the two trajectories will separate exponentially in time .this is due to the fact that both extinct and endemic states are unstable , and the connecting trajectory defining the path is unstable as well .any initial points starting near the optimal path will leave the neighborhood in short time .we now apply our theory of dynamical sensitivity to the problem of locating optimal paths to extinction for several examples .we consider the case of internal fluctuations , where the noise is not known a priori , as well as the case of external noise . in each case, the interaction of the noise and state of the system begins by finding the equations of motion that describe the unstable flow .these equations of motion are then used to compute the ridges corresponding to maximum ftle , which in turn correspond to the optimal extinction paths .for an example of a system with internal fluctuations which has an analytical solution , consider extinction in the stochastic branching - annihilation process where and are constant reaction rates .equation ( [ e : bd ] ) is a single species birth - death process and can be thought of as a simplified form of the verhulst logistic model for population growth .the mean field equation for the average number of individuals in the infinite population limit is given by .the stochastic process given by equation ( [ e : bd ] ) contains intrinsic noise which arises from the randomness of the reactions and the fact that the population consists of discrete individuals .this intrinsic noise can generate a rare sequence of events that causes the system to evolve to the extinct state .the probability to observe , at time , individuals is governed by the master equation +\lambda\left[(n-1)p_{n-1}-np_{n}\right].\label{e : bd_master}\ ] ] the hamiltonian associated with this system is where is a conjugate coordinate related to through a transformation , and plays the role of the momentum .the equations of motion are given by = & = q[(1 + 2p)-(1+p)q],[e : bd_qdot ] + = & -=p[(2+p)q-(1+p)].[e : bd_pdot ] the mean field is retrieved in equation ( [ e : bd_qdot ] ) when ( no fluctuations or noise ) .the hamiltonian has three zero - energy curves .the first is the mean - field zero - energy line ( no fluctuations ) , which contains two unstable points and . 
the second is the extinction line ( trivial solution ) , which contains another unstable point .the third zero - energy curve is non - trivial and is the segment of the curve given by equation ( [ e : bd_op ] ) which lies between corresponds to a ( heteroclinic ) trajectory which exits , at , the point along its unstable manifold and enters , at , the point along its stable manifold .this trajectory is the optimal path to extinction and describes the most probable sequence of events which evolves the system from a quasi - stationary state to extinction . to show that the ftle evolves to the optimal path, we calculate the ftle field using the system of hamilton s equations given by equations ( [ e : bd_qdot])-([e : bd_pdot ] ) .figure [ fig : ftle_bd_sis]a shows both the forward and backward ftle plot computed for the finite time , with and . in this example , as well as the following two examples , was chosen to be sufficiently large so that one obtains a measurable exponential separation of trajectories . in figure[ fig : ftle_bd_sis]a , one can see that the optimal path to extinction is given by the ridge associated with the maximum ftle .in fact , by overlaying the forward and backward ftle fields , one can see all three zero - energy curves including the optimal path to extinction . also shown in figure [ fig : ftle_bd_sis]a are the analytical solutions to the three zero - energy curves given by , , and equation ( [ e : bd_op ] ) .there is excellent agreement between the analytical solutions of all three curves and the ridges which are found through numerical computation of the ftle flow fields .it is possible to compute analytically the action along the optimal path for a range of values . using equation ( [ e : bd_op ] ) ,it is easy to show that the action is it is clear from equation ( [ e : bd_action ] ) that the action scales linearly with .we now consider the well - known problem of extinction in a susceptible - infectious - susceptible ( sis ) epidemiological model , which is a core model for the basis of many recurrent epidemics .the sis model is described by the following system of equations : = & -s + i - is,[e : siex_a ] + = & -(+)i + is,[e : siex_b ] where denotes a constant birth and death rate , represents the contact rate , and denotes the recovery rate . assuming the total population size is constant and can be normalized to , then equations ( [ e : siex_a])-([e : siex_b ] ) can be rewritten as the the following one - dimensional ( 1d ) process for the fraction of infectious individuals in the population : the stochastic version of equation ( [ e:1dsis_det ] ) is given as where is uncorrelated gaussian noise with zero mean and is the standard deviation of the noise intensity .the noise models random migration to and from another population .equation ( [ e:1dsis_stoch ] ) has two equilibrium points given by ( disease - free state ) and ( endemic state ) . 
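a minimal euler - maruyama simulation of this noisy one - dimensional sis equation can be used to generate extinction events numerically . the sketch below is illustrative only , and every parameter value in it ( contact rate , loss rate , noise strength , time step , number of realisations ) is an assumption rather than a value taken from the paper .

import java.util.Random;

/**
 * Euler-Maruyama simulation of the 1D SIS model with additive external noise:
 *   dI/dt = -(mu + kappa)*I + beta*I*(1 - I) + eta(t),
 * with eta zero-mean Gaussian white noise of strength sigma. All numerical
 * values below are illustrative assumptions, not values from the paper.
 */
public class NoisySisExtinction {

    public static void main(String[] args) {
        double beta = 2.0;        // contact rate (assumed)
        double lossRate = 1.0;    // birth/death plus recovery rate (assumed)
        double sigma = 0.15;      // noise standard deviation (assumed)
        double dt = 1.0e-3;
        double tMax = 1.0e4;      // give up if no extinction by this time
        int realisations = 200;
        Random rng = new Random(42);

        double endemic = 1.0 - lossRate / beta;  // deterministic endemic state
        double sumExtinctionTimes = 0.0;
        int extinct = 0;

        for (int run = 0; run < realisations; run++) {
            double i = endemic;   // start each realisation at the endemic state
            double t = 0.0;
            while (t < tMax) {
                double drift = -lossRate * i + beta * i * (1.0 - i);
                i += drift * dt + sigma * Math.sqrt(dt) * rng.nextGaussian();
                t += dt;
                if (i <= 0.0) {   // first passage to the disease-free state
                    sumExtinctionTimes += t;
                    extinct++;
                    break;
                }
            }
        }
        System.out.printf("extinct runs: %d / %d, mean extinction time: %.1f%n",
                extinct, realisations,
                extinct > 0 ? sumExtinctionTimes / extinct : Double.NaN);
    }
}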
using the euler - lagrange equation of motion from the lagrangian determined by equation ( [ e:1dsis_stoch ] ) , $2\sigma^{2}\mathcal{L}=[\dot{I}-f(I)]^{2}$ , an asymptotic expression for the action valid near the bifurcation can be obtained ( see sec . 5 of the electronic online supplement ) .
[ figure : ( a ) numerically computed action ( integrated along the optimal path found numerically from the ftle flow field ) versus reproductive number for the sis epidemic model with internal fluctuations ; the inset shows a portion of the graph near the bifurcation point . the numerically computed action is given by the black points , while the dashed , red curve shows an asymptotic scaling result , valid near the bifurcation point , whose expression is given in sec . 5 of the electronic online supplement . ( b ) numerically simulated ( solid curve with black points ) mean extinction time versus reproductive number for the sis epidemic model with internal noise , together with the analytical prediction ( dashed , red curve ) found using the asymptotic scaling law that is valid near the bifurcation point . ]
extinction , optimal path , finite - time lyapunov exponents
extinction appears ubiquitously in many fields , including chemical reactions , population biology , evolution , and epidemiology . even though extinction as a random process is a rare event , its occurrence is observed in large finite populations . extinction occurs when fluctuations due to random transitions act as an effective force which drives one or more components or species to vanish . although there are many random paths to an extinct state , there is an optimal path that maximizes the probability of extinction . in this article , we show that the optimal path is associated with the dynamical systems idea of maximally sensitive dependence on initial conditions . using the equivalence between this sensitive dependence and the path to extinction , we show that the dynamical systems picture of extinction evolves naturally toward the optimal path in several stochastic models of epidemics .
advanced - generation interferometric gravitational wave detectors , such as advanced ligo , advanced virgo and kagra are currently being commissioned .their sensitivity is expected to surpass that achieved by first generation instruments by almost an order of magnitude in the high frequency region . to achieve this , very high circulating power levels ( 0.5 - 1 mw )will be stored within the fabry - perot arm cavities . at these power levels , even low levels of optical absorptioncan lead to significant thermoelastic distortion of optical surfaces and unacceptable levels of wavefront distortion , resulting in reduced circulating power and a reduction in the efficiency of the detector signal readout .thermally actuated compensation systems will be thus used to ameliorate the wavefront distortion .however , the thermal time constants for the absorption - induced distortion and the compensation are long , typically 12 hours , and thus incorporating predictive modeling in the control systems may prove essential .the response of a linear elastic system to heating is described by the theory of thermo - elasticity and its applications to highly symmetric , idealized systems are described in many books ( see for example ) .it has also been used to develop analytic expressions for less idealized optical systems .the expressions developed by hello and vinet are relevant to the work described here , but apply only to cylindrical isotropic mirrors heated by coaxial laser beams .more complicated systems , which incorporate asymmetric heating or anisotropic elasticity , can be investigated using finite - element numerical models that apply the equations of thermo - elasticity on a three - dimensional spatial mesh . for dynamic systems ,the thermoelastic equations must be solved at each epoch , requiring computational times that can run to many days .this approach would be untenable for use in predictive feed - forward actuation to control systems .in such cases , the solution of the scalar problem to determine the temperature profile throughout the optic can be solved rapidly ; the time consuming part is solving the tensor - based elasticity problem to convert the thermal profile into an elastic distortion .the betti - maxwell theorem of elastodynamic reciprocity provides an alternative approach to using finite - element methods ( fem ) to solve the tensor part of the thermoelastic distortion .it has previously been used to investigate the excitation of rayleigh - lamb elastic waves in a metal plate due to heating produced by a line - focused pulsed laser beam assuming that the heating is confined to the surface of the plate and it has infinite lateral extent . in the context of gravitational wave detection, it has been used to compute the interferometer s response to creep events in the fibers that suspend the optics .we extend its use to predict thermoelastic distortion of an optic of finite size with asymmetric heating .we describe here how elastodynamic reciprocity and fem can be combined to provide accurate predictions of thermoelastic surface distortion more quickly than using fem alone . 
in summary, fem is used to determine the response of the optic to a set of orthonormal tractions , or pressures a computationally expensive calculation that is performed once for an optic .then , using reciprocity , the distortion due to the instantaneous temperature profile in the optic is calculated using a sum of scalar volume integrals that incorporate these responses .the computational cost of this step is much less than that of a full elastostatic fem evaluation .additionally , it is amenable to parallelization , which would further reduce the computational time .the layout of the rest of the paper is as follows : in section ii we introduce the betti - maxwell theorem of elastodynamics and show how it can be used to determine the surface distortion by careful choice of a suitable ` auxiliary ' elastic system .we demonstrate its application by calculating the distortion of the end face of a cylindrical optic that is heated by a gaussian heat flux that is ( a ) coaxial with and ( b ) laterally displaced from the axis .the approach and model are described in sections iii and iv .finally , the resulting surface distortions are presented in section v and compared with the results of elastostatic fem calculations .computation times for these two approaches are compared in section vithe betti - maxwell reciprocity theorem for elastodynamics specifies the relationship between the displacement that results from an applied surface traction and internal body force for two elastic states of a linear elastic body : \label{equation1}\end{gathered}\ ] ] where is the density , is acceleration , the superscripts and represent the two states , and the einstein summation convention is us ed . if and then , and thus eq .( [ equation1 ] ) becomes we shall use this theorem to determine the surface displacement ( distortion ) due to heating of an optic by , for example , partial absorption of an incident laser beam . for the first state , which we shall refer to as the `` thermal state '' and label , we assume that the optic is free and thus , and there is a non - zero body force due to the heating .since we are interested in the distortion of the end face of the optic , we choose the second state , which is often referred to as the `` auxiliary state '' and we shall label , to have a traction applied to the end face of the optic and assume .thus , eq . ( [ equation2 ] ) becomes where is the internal strain produced by the traction , and is the internal stress associated with the body force : .consider now applying time - harmonic tractions with amplitude .it is convenient to choose to be orthonormal , so that . then , expressing the surface displacement amplitude as : transforms the left - hand term of eq .( [ equation3 ] ) to therefore that is , if the amplitude of the elastic response of the optic , , to each of the tractions is known then the amplitude of the distortion of the end face of the optic , , due to any thermal stress distribution can be calculated using eqs .( [ equation4 ] ) and ( [ equation6 ] ) .we shall use this approach to calculate the surface distortion due to non - uniform heating of a homogeneous isotropic body for which where is young s modulus , is the coefficient of thermal expansion , is poisson s ratio , , and is the ambient temperature .. [ equation6 ] thus becomes determine the distortion of the end - face using reciprocity , one must first characterize the response of the elastic system , , to a set of orthonormal basis tractions }:n = 1 , .... 
, n$ ] using an elastostatic fem [ 11 ] .zernike functions would be a tempting choice given our cylindrical geometry , particularly as they are orthogonal to a uniform traction and thus applying the auxiliary tractions should not apply net forces to the system .however , as shown in section iv , they are not well suited to describing the surface distortion .the orthonormal basis tractions we shall use apply a non - zero ( instantaneous ) force to the optic , leading to ill - conditioning of the fem at very low frequencies . we thus used a traction frequency of hz as the response is independent of frequency for frequencies well below the first resonance - see [ 12 ] for example . in all of our numerical tests , we assume a cylindrical fused silica optic with height 200 mm , radius 170 mm , mpa , and k^-1^ .a radial cross section of the optic and the meshing used for the fem is shown in fig .[ figure1 ] .nodes , and is finest on the heated top surface of the test mass . ] we assume heating of the top face by 1 w of power absorbed with a gaussian - distributed flux : \ ] ] where the beam radius mm , and radiative cooling of all surfaces of the optic to surroundings at 293 k. a thermal fem is used to calculate the temperature distribution , , resulting from the heating . the displacement amplitude for each basis function , , and the total displacement , ,are then calculated using eq .[ equation4 ] and eq .[ equation8 ] .choosing a set of orthonormal functions that can describe the surface distortion without requiring a large number of functions , which would necessarily include high spatial frequencies , is crucial as it reduces both the number of auxiliary tractions that must be evaluated and the requirement for using a fine mesh in the fem .thus , we describe the choice of basis functions for on - axis and off - axis heating of the optic .zernike polynomials ( see appendix a ) are often used to describe cylindrically symmetric optical aberrations , as they are orthogonal over a circular disc and can be normalized. however , as shown in figure [ figure2 ] , these polynomials are not well suited to describing the distortion . , the sum of the first six zernike components , and the sum of the first six orthonormalized lg components on - axis surface distortion due to the heating can also be described using laguerre - gauss ( lg ) functions : % \label{equationlg}\ ] ] where are laguerre polynomials of order ( see appendix a ) , is the radial coordinate and is a free parameter .these functions are orthogonal only over the infinite plane however .symmetric orthogonalization is therefore used , as outlined in appendix b , to construct linear combinations , , of lg functions that are orthonormal over the end face for a given . in this type of orthogonalization, the difference between the new and original functions is minimized in the least - squares sense .the optimum value of was chosen as described in appendix c , giving .the six lowest - order orthonomalized - lg functions are defined in appendix d. a comparison of and the sum of these components in fig . 
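for completeness , the radial laguerre - gauss functions , the hermite - gauss functions and the orthonormalization step referred to above can be summarized as follows ( the width parameters r_c and r_0 are the free parameters discussed in the text ; the matrix form of the symmetric , or löwdin , orthogonalization is standard and is given here as a reference ) :

\[
\mathrm{LG}_p(r)=L_p\!\left(\frac{2r^{2}}{r_c^{2}}\right)e^{-r^{2}/r_c^{2}},\qquad
\mathrm{HG}_{mn}(x,y)=H_m\!\left(\frac{\sqrt{2}\,x}{r_0}\right)H_n\!\left(\frac{\sqrt{2}\,y}{r_0}\right)e^{-(x^{2}+y^{2})/r_0^{2}},
\]
\[
M_{ij}=\int_{\text{face}} f_i\,f_j\,dA,\qquad g_i=\sum_j\big(M^{-1/2}\big)_{ij}\,f_j ,
\]

which produces a set { g_i } that is orthonormal over the end face and , among all such sets , closest to the original functions in the least - squares sense .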
[ figure2 ] shows that the lg basis is much superior to the zernike basis .the distortion due to off - axis heating can be described using the sets of functions listed below : \(a ) hermite - gauss ( hg ) functions : h_n \big ( \frac{\sqrt{2 } y}{r_{0y}}\big ) \exp \big[\frac{-y^2}{r_{0y}^2}\big ] % \label{equationhg}\ ] ] where are the ( `` physicists '' ) hermite polynomials of order ( see appendix a ) .these functions are orthogonal over the interval .we choose as the heat flux has a circular cross section and we shall use , and thus \(b ) generalized lg functions : \begin{cases } 1 \\ \sin{l \phi } \\ \cos{l \phi } \end{cases } % \label{lglp}\ ] ] where is the azimuthal angle , and for .we restricted the azimuthal dependence to due to the symmetry of the expected distortion .orthonormalized hg and generalized - lg functions were constructed , and an optimized value of was selected as discussed above .hg functions up to ( 136 functions in total ) were initially used to describe the distortion due to a heating beam that was displaced from the center of the optic according to (0,10 mm ) , ( 10 mm , 0 ) and ( 8.7 mm , 5 mm ) . in each case , the distortion was dominated by the same 17 components , the functions for which are plotted in appendix e. a comparison of and the sum of the dominant 19 components is shown in fig .[ figure4 ] .and the sum of the 17 dominant orthonormalized - hg components for a heat flux offset of 10 mm ] orthonormalized generalized - lg functions up to ( 16 functions in total ) were also generated and used to describe the distortion due to a heat flux displaced from the center of the optic by 10 mm , but they yielded slightly poorer agreement with .in addition , since the lower order orthonormalized - hg functions appear similar to the tem and tem eigenmodes observed in optical cavities , we chose to use that basis .we now show how to use the orthonormal bases described above with reciprocity to determine the surface distortion . in each case, the equilibrium values were calculated for the basis tractions and then combined with the temperature distribution from the thermal fem to yield the amplitudes .while zernike polynomials are not appropriate for describing the surface distortion in the example presented here , they can be used for a reciprocity - based calculation .table i shows a comparison of the reciprocity zernike amplitudes with those calculated by decomposing the distortion predicted by the thermoelastostatic fem ..zernike amplitudes calculated using reciprocity , , and thermoelastostatic fem , , for the axisymmetric gaussian heat flux . [ cols="^,^,^",options="header " , ]the hermite gauss functions up to order are orthogonalized using symmetric orthogonaliazation . of these 136 modes ,the 17 modes that make the largest contribution to describing the deformed surface were selected .these modes are shown in fig .[ figurec3 ]
thermoelastic distortion resulting from optical absorption by transmissive and reflective optics can cause unacceptable changes in optical systems that employ high power beams . in `` advanced''-generation laser - interferometric gravitational wave detectors , for example , optical absorption is expected to result in wavefront distortions that would compromise the sensitivity of the detector ; thus necessitating the use of adaptive thermal compensation . unfortunately , these systems have long thermal time constants and so predictive feed - forward control systems could be required - but the finite - element analysis is computationally expensive . we describe here the use of the betti - maxwell elastodynamic reciprocity theorem to calculate the response of linear elastic bodies ( optics ) to heating that has arbitrary spatial distribution . we demonstrate using a simple example , that it can yield accurate results in computational times that are significantly less than those required for finite - element analyses .
the problem of finding global extrema ( maxima and minima ) for a univariate real function is important for variety of real world applications .for example , it arises in electric engineering , , in computer science , , and in various other fields ( see for further references ) . in many industrial applications the global optimization algorithmis expected to operate in real time while simultaneously , finding the global extremum of non - convex functions exhibiting large number of local sub - extrema . the problem of efficiently finding a function s global extremum has been historically challenging .one of the first solutions , zero derivative method ( zdm ) , was proposed by pierre de fermat ( 1601 - 1665 ) .his main idea was to look for the global extremum among critical points : the points where the derivative of the target function is zero . despite its theoretical significance, fermat s proposed method ( zdm ) is limited by the numerical difficulties imposed by finding critical points .one of the leading global optimization approaches adopted by many industrial applications is a brute - force search or exhaustive search for the global extremum ( brute - force search ( bfs ) ) .it is simple to implement but its performance linearly depends on the complexity of the target function and the size of the search area .a plethora of optimization methods have been developed for various types of target functions . among thempiyavskii - shubert method ( psm ) occupies a special place , , .it is one of the few procedures that delivers the global extremum for a univariate function and at the same time it exhibits reasonable performance as long as the respective lipschitz constant is of a modest value . on the other hand, the method is very sensitive to the size of the lipschitz constant : its performance sharply diminishes for large lipschitz constants . for this reason ,accelerations and improvements of psm were developed in the following papers , , , .numerical experiments presented in this paper show that leap gradient algorithm ( referred as lga ) significantly outperforms psm together with its modifications and improvements ( from , , ) when finding global extrema of polynomials .the method of gradient descent ( see , e.g. , , ) is widely used to solve various practical optimization problems .the main advantage of the gradient descent algorithm is its simplicity and applicability to a wide range of practical problems . 
on the other hand, gradient descent has limitations imposed by the initial guess of a starting point and then its subsequent conversion to a suboptimal solution .this paper gives practical recipes on how to overcome those limitations for a univariate function and how to equip the gradient descent algorithm with abilities to converge to a global extremum .it is achieved via evolutionary leaps towards the global extremum .lga neither requires the knowledge of the lipschitz constant nor convexity conditions that are often imposed on the target function .moreover , lga naturally becomes the standard gradient descent procedure when the target function is convex or when the algorithm operates in the close proximity to the global extremum .the recursive application of lga yields an efficient algorithm for calculating global extrema for univariate polynomials .lga does not intend to locate any critical points ( like zdm ) instead it follows gradient descent ( ascent ) until a local extremum is reached and then performs an evolutionary leap towards the next extremum .as far as performance is concerned , numerical experiments conducted for univariate polynomials show that lga outperforms bfs , zdm and psm with all its modifications from , , .the layout of this publication is as follows .section [ intro ] is the introduction .section [ sec:2 ] introduces lga for univariate functions .section [ lgarecursion ] explores the recursive application of lga .section [ multiw ] describes a recursive implementation of lga for polynomials .section [ realanalytic ] presets an implementation of lga for real analytic univariate functions .section [ numerical_experiments ] reports on numerical experiments with lga .it compares lga with zdm , bfs , psm and accelerations of psm presented in , , .polynomial roots for zdm are calculated with the help of laguerre s method , .section [ acknowledgment ] expresses gratitude to professionals who spent their time contributing to the paper , reviewing various parts of the publication and making many valuable suggestions and improvements .section [ appendix ] finalizes the publication with a snapshot of the working and thoroughly tested source code for lga .let be known and well defined real function on , ] contains at least two points where is equal to zero .evolutionary leaps from ( on are solutions of the following equation . where indeed , if is continuously differentiable then the necessary condition for to be the solution for ( [ optimization_proc ] ) on is differentiating yields after multiplying the equation with we obtain ( [ leap_formula ] ) .since is the solution of the problem ( [ optimization_proc ] ) and is twice differentiable we conclude that differentiating yields when hence , making use of ( [ leap_formula ] ) we obtain according to * step 2 * of lga for the evolutionary leap we have that together with ( [ leap_formula ] ) implies on the other hand , ( [ gradient_decent ] ) yields the existence of such that that together with ( [ rt ] ) yield the existence of where taking into account that is obtained after * step 1 * of lga and we obtain the existence of where under the conditions of the theorem is continuous on and the statement follows from ( [ rightplus ] ) , ( [ midminus ] ) and ( [ leftplus ] ) . that means ( in a generic situation )a non trivial evolutionary leap inside is only possible over two inflection points of a target function . consider the optimization problem where and are real numbers .if then has two different zeroes. 
hence , by theorem [ inflection_and_leaps ] , lga might need to perform a single evolutionary leap ( over two inflection points ) in order to solve the optimization problem .otherwise , , lga coincides with the standard gradient decent .lga replaces the optimization problem }\ ] ] with }\ ] ] where is calculated at the previous step of lga .it leads us to the following recursive procedure executed at each iteration of lga . and in order to complete an iteration of lga for one needs to calculate }\ ] ] and lga finds the minimum of if finding the minimum for is obvious , then the iteration of lga is completed after recursive steps .the next theorem shows when an lga iteration is completed after a finite number of recursive steps .if a real function is at least -times continuously differentiable on the segment ] in accordance with notations ( [ iter_1 ] ) and ( [ iter_2 ] ) we have and under the conditions of the theorem achieves its minimum on ] recursive lga delivers the global extremum for in a finite number of steps .the proof is conducted by mathematical induction with respect to the degree of a polynomial . as a basis of mathematical induction and for the purpose of illustrations let us consider finding the global minimum on ] that means and the series uniformly converges to on . ] it follows from taylor expansion with the integral remainder term that in order to calculate let us justify the following statements with the help of mathematical induction . the basis of mathematical induction follows from the step of mathematical induction for each of the statements ( [ first ] ) , ( [ second ] ) , ( [ third ] ) is as follows .suppose that ( [ first ] ) is true for and to prove that it remains true for and consider and due to the assumption of the mathematical induction for the statement ( [ first ] ) follows .assume that ( [ second ] ) is true for consider and ( [ first ] ) together with the assumption of the mathematical induction yield the statement ( [ second ] ) is established .assume that ( [ third ] ) is valid for then by the assumption of the mathematical induction and taking into account ( [ first ] ) , ( [ second ] ) we have statement ( [ third ] ) is established .applying to the taylor expansion and taking into account ( [ first ] ) , ( [ second ] ) yields on the other hand , and so is therefore setting in ( [ fourth ] ) and making use of ( [ third ] ) we obtain that completes the calculation of ( [ difference ] ) and the proof .given the required margin of error theorem [ real_analytic_lga ] provides an effective recipe for finding the global minimum of a real analytic function on the interval . ] where is a real number between and which remains fixed across all 500 trials .* * use lga and its competitor to solve the optimization problem }\ ] ] with precision for -argument of the global minimum on . ] where lga considerably speeds up as the value of the parameter decreases ( fig .[ fig:2 ] , fig .[ fig:3 ] ) .epsf epsf if then lga works exactly so well as bfs ( fig .[ fig:4 ] , fig .[ fig:5 ] ) .a generic application corresponds to the situation with therefore employing lga instead of bfs will improve the performance of your application .epsf epsf zdm finds the global minimum }\ ] ] by calculating zeroes of the derivative \ ] ] and then finding the smallest value in if it is then zdm returns as an estimate for the solution of the optimization problem . 
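before the timing comparisons that follow , the control flow of lga can be illustrated with a deliberately simple sketch . the descent step below mirrors the gradientdescent routine shown in the appendix snapshot ( walk right with a fixed step while the derivative is negative ) , but the leap is found here by plain forward scanning for the next point that falls below the best level found so far ; the paper's actual leap is computed from a recursive polynomial construction ( a horner - type deflation , as far as one can tell from the appendix ) , so this sketch reproduces the idea , not the algorithm's efficiency .

import java.util.function.DoubleUnaryOperator;

/**
 * Brute-force illustration of the "descend, then leap" idea on [a, b].
 * Descend: follow the sign of the derivative with a fixed step.
 * Leap: scan right for the next grid point strictly below the best level so far.
 * Returns the best grid point found (accurate up to the step resolution).
 */
public class DescendThenLeap {

    static double minimise(DoubleUnaryOperator f, DoubleUnaryOperator df,
                           double a, double b, double step) {
        double x = a;
        double bestX = a;
        double bestF = f.applyAsDouble(a);
        while (x < b) {
            // descend: walk right while the function keeps decreasing
            while (x < b && df.applyAsDouble(x) < 0.0) x += step;
            if (f.applyAsDouble(x) < bestF) { bestF = f.applyAsDouble(x); bestX = x; }
            // leap: scan right for the next point strictly below the best level
            do { x += step; } while (x < b && f.applyAsDouble(x) >= bestF);
            if (x < b && f.applyAsDouble(x) < bestF) { bestF = f.applyAsDouble(x); bestX = x; }
        }
        if (f.applyAsDouble(b) < bestF) bestX = b;  // check the right endpoint
        return bestX;
    }

    public static void main(String[] args) {
        // example: W(x) = x^4 - 3x^2 + x on [-2, 2] (an arbitrary test polynomial)
        DoubleUnaryOperator w  = x -> ((x * x - 3.0) * x + 1.0) * x;
        DoubleUnaryOperator dw = x -> (4.0 * x * x - 6.0) * x + 1.0;
        System.out.println("approximate global minimiser: "
                + minimise(w, dw, -2.0, 2.0, 1e-4));
    }
}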
in all numerical experiments reported in this paperthe polynomial roots for zdm were calculated with laguerre s method , .the zdm processing time includes the invocation of laguerre s method .lga notably faster than zdm ( fig .[ fig:6 ] ) .epsf piyavskii s type algorithms tackle the optimization problem }\ ] ] by constructing at each iteration either a piecewise linear is lipschitz , , , ) or a piecewise quadratic is lipschitz , , , ) auxiliary function such that \ ] ] then the original optimization problem is replaced with }\ ] ] based on its solution the algorithm either terminates or proceeds to the next step with the new refined auxiliary function lga is compared against psm with tuning of the local lipschitz constants ( referred as lt ) and its enhancement lt_li presented in . in view of the numerical simulations from and lt_li appear to be the fastest among pyavskii s type algorithms ( discussed in ) with piecewise linear auxiliary functions .lga is faster than lt li ( fig .[ fig:7 ] ) .epsf modifications of psm with smooth piecewise quadratic auxiliary functions are discussed in .[ fig:8 ] presents the comparison results between lga and psm enriched by local tuning of the lipschitz constant for ( referred as dlt ) .lga outperforms dlt .epsf dlt with some local improvement technique is addressed as dlt_li .its comparison with lga is presented by fig.[fig:9 ] .lga is faster than dlt_li .epsf the paper introduces the modification of psm based on piecewise quadratic auxiliary functions that are not necessary smooth . the algorithm from referred in fig.[fig:10 ] as eek .lga is faster than eek .the author is grateful to anonymous referees for comments and suggestions that helped to focus and improve the original manuscript . in particular, the comparison between lga and piyavskii - shubert method was added upon a referee remark .the author is thankful to blaiklen marx for the help with preparing the manuscript for publication and testing the java implementations of algorithms in i-oblako.org framework .the program is looking for the global minimum of a polynomial on the interval .$ ] the polynomial is represented as an array while(polynom[polynom.length - 1 ] = = 0 & & polynom.length > 1 ) { pl= new double[polynom.length - 1 ] ; for(int i = 0 ; i< polynom.length - 1;i++ ) pl[i]=polynom[i ] ; polynom = pl ; } if(polynom.length = = 1 ) return a ; if(polynom.length = = 2 ) { if(polynom[1]>= 0 ) return a ; else return b ; } pl = horner(polynom , rt_prev ) ; rt = getmin(pl , rt_prev , b , step ) ; if(rt - rt_prev<= step ) return rt_prev ; else { if(rt_prev = = a ) njumps++ ; else njumps = njumps + 2 ; } public double gradientdescent(double [ ] polynom , double a , double b , double step ) { double rt = a ; while(evalderivative(polynom , rt)<0 & & rt < b ) rt = rt+step ; return rt ; } .... d.a. adams , `` a stopping criterion for polynomial root finding . ''acm,10 , 655 - 658 , ( 1967 ) .l. armijo , minimization of functions having lipschitz continuous first partial derivatives , pacific j. math . , 16 , 1 - 3 ( 1966 ) j. m. calvin , an adaptive univariate global optimization algorithm and its convergence rate under the wiener measure , informatica , 22 , 471488 ( 2011 ) . j. m. calvin and a. zilinskas , one - dimensional global optimization for observations with noise , comput .appl . , 50 , 157169 ( 2005 ) .r. ellaia , m. z. es - sadek , h. 
kasbioui , modified piyavskii s global one - dimensional optimization of a differentiable function , applied mathematics , 3 , 1306 - 1320 ( 2012 ) n.j .higham , accuracy and stability of numerical algorithms , siam , philadelphia ( 2002 ) .k. hamacher , on stochastic global optimization of one - dimensional functions , phys . a , 354 , 547557 ( 2005 ) .o. gler , foundations of optimizations , springer , new york ( 2010 ) d. e. johnson , introduction to filter theory , prentice hall , new jersey , ( 1976 ) l. kantorovich and g. akilov , functional analysis in normed spaces , fizmatgiz , moscow ( 1959 ) , translated by d. brown and a. robertson , pergamon press , oxford ( 1964 ) d. kalra and a. h. barr , guaranteed ray intersections with implicit surface , comput ., 23 , 297306 ( 1989 ) d.e .knuth , art of computer programming , vol .2 : seminumerical algorithms , 3rd ed . , ma : addison - wesley , ( 1998 ) d.e .kvasov , y.d .sergeyev , a univariate global search working with a set of lipschitz constants for the first derivative , optim ., 303 - 318 ( 2009 ) h. y .- f .lam , analog and digital filters - design and realization , prentice hall , new jersey , ( 1979 ) yu . nesterov , introductory lectures on convex optimization , applied optimization , 87 , kluwer academic publishers , boston ( 2004 ) s. piyavskii , an algorithm for finding the absolute minimum of a function , theory of optimal solutions , ik akad .nauk ussr , kiev , 2 , 13 - 24 , ( 1967 ) s. piyavskii , an algorithm for finding the absolute extremum of a function , ussr comput ., 12 , 57 - 67 , ( 1972 ) b. shubert , a sequential method seeking the global maximum of a function , siam j. numer ., 9 , 379 - 388 ( 1972 ) d. lera and y. d. sergeyev , acceleration of univariate global optimization algorithms working with lipschitz functions and lipschitz first derivatives , siam j. optim . , 23 , 1 , 508 - 529 ( 2013 ) y. d. sergeyev , global one - dimensional optimization using smooth auxiliary functions , math .programming , 81 , 127 - 146 ( 1998 )
the paper proposes a new algorithm for solving global univariate optimization problems . the algorithm does not require convexity of the target function . for a broad variety of target functions , after performing ( if necessary ) several evolutionary leaps , the algorithm naturally becomes the standard descent ( or ascent ) procedure near the global extremum . moreover , it leads to an efficient numerical method for calculating the global extrema of univariate real analytic functions .
the field of fracture mechanics is becoming extremely broad with the occurrence of unexpected failure of weapons , buildings , bridges , ships , trains , airplanes , and various machines .there are two fundamental fracture criterions : the strain energy release rate ( i.e. , g theory ) and the stress intensity factor ( i.e. , k theory ) ; see arthur and richard ( 2002 ) .the experimentally determined stress intensity depends on the specimen size , and the fracture is accompanied by energy localization and concentration .+ localization is manifested by degradation of material properties with localized large deformations , and this feature often results in formation and propagation of macrocracks through engineering structures . due to the importance of localization phenomena in structural safety assessment ,much research has been conducted to resolve experimental , theoretical and computational issues associated with localization problems , as reviewed by chen and schreyer ( 1994 ) and chen and sulsky ( 1995 ) . for hyperelastic materials ,important progress has been made based on the gradient approach ; see aifantis ( 1984 ) and triantafyllidis and aifantis ( 1986 ) .however , there is still a lack of analytical results for three - dimensional boundary - value problems . in this paper , hence , we study the localization in a slender cylinder composed of an incompressible hyperelastic material subjected to tension , based on an analytical approach to solve the three - dimensional governing equations .we also intend to provide mathematical descriptions for some interesting phenomena as observed in experiments .+ jansen and shah ( 1997 ) conducted careful experiments on concrete cylinders by using the feedback - control method . from two test series ,the typical stress - displacement behavior for different height - diameter ratios with normal strength and high strength was obtained .it appears that the pre - peak segment of the stress - displacement curves agrees well with the pre - peak part of the stress - strain curves , but the post - peak segment shows a strong dependence on the geometric size ( i.e. , the radius - length ratio ) .more specifically , the longer the specimen is , the steeper the post - peak part of the stress - displacement curves becomes .also , they found that the width of the localization zone changes with the specimen size . in the experiment by gopalaratnam and shah ( 1985 ), it was found that the tangent value in the ascending part of the stress - strain curves seemed to be independent of the specimen size but in the post - peak part there was a softening region and no unique stress - strain relation .schreyer and chen ( 1986 ) studied the softening phenomena analytically based on a one - dimensional model .their results indicate that if the size of the softening zone is small enough ( in a relative sense ) , the behavior of displacement - prescribed loading is unstable , and the softening curves are steeper than those with a larger size of the softening region . here , we shall provide the three - dimensional analytical solutions to capture all the localized features mentioned above .+ another purpose of this research is to provide a method judging the onset of failure in a slender cylinder subjected to tension . 
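since the next paragraphs rely on the maximum - distortion - energy ( huber - hencky - von mises ) theory , it may help to recall its standard small - strain statement ( textbook material , see riley et al . 2007 ; this is not the paper s own expression , which is derived later for the hyperelastic cylinder ) :
\[
u_d=\frac{1}{12\mu}\Big[(\sigma_1-\sigma_2)^2+(\sigma_2-\sigma_3)^2+(\sigma_3-\sigma_1)^2\Big],
\qquad \text{failure when}\quad u_d \ \ge\ \frac{\sigma_Y^{2}}{6\mu},
\]
where \sigma_1 , \sigma_2 , \sigma_3 are the principal stresses , \mu is the shear modulus and \sigma_Y is the uniaxial failure stress ; the threshold on the right is simply u_d evaluated in uniaxial tension at \sigma_1=\sigma_Y .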
here, we use the maximum - distortion - energy theory ( the huber - hencky - von mises theory ; see riley et al .2007 ) , which depicts there are two portions of the strain energy intensity .one is the portion producing volume change which is ineffective in causing failure by yielding , and the other is that producing the change of shape which is completely responsible for the material failure by yielding .+ by constructing the analytical solutions for localizations , it is possible to get the point - wise energy distribution . then , an expression for the maximum value of the strain energy can be obtained . with the huber - hencky - von mises theory, we can then establish an analytical criterion for identifying the onset of failure .+ mathematically , to deduce the analytical solutions for localizations in a three - dimensional setting is a very difficult task .one needs to deal with coupled nonlinear partial differential equations together with complicated boundary conditions .further , the existence of multiple solutions ( corresponding to no unique stress - strain relation ) makes the problem even harder to solve . here , the analysis is carried out by a novel method developed earlier ( dai and huo , 2002 ; dai and fan , 2004 ; dai and cai , 2006 ) , which is capable of treating the bifurcations of nonlinear partial differential equations .our results yield the analytical forms of the strain and stress fields , total elongation , the potential energy distribution and the strain energy distribution , which are characterized by localization phenomena .in particular , it is found that once the localization is formed its width does not change with the further increase of the total elongation , which is in agreement with the experimental observations .we also provide a description for the snap - through phenomenon .+ the remaining parts of the paper are arranged as follows . in section 2 , we formulate the three - dimensional govering equations for the axisymmetric deformations of a circular cylinder .we nondimensionalize them in section 3 to identify the key small variable and key small parameters .then , in section 4 , a novel method of coupled series - asymptotic expansions is used to derive the normal form equation of the original system . by the variational principle , in section 5, we derive the same equation by considering the energy . in section 6 , with the help of the phase plane, we solve the boundary - value problems for a given external axial force and a given elongation , respectively . in section 7 , through an energy analysis, we determine the most preferred configurations and give a description of the snap - through phenomenon .also , an analytical criterion for identifying material failure based on the huber - hencky - von mises theory is discussed . finally , concluding remarks and future tasks are given in section 8 .we consider the axisymmetric deformations of a slender cylinder subjected to a static axial force at one plane end with the other plane end fixed .the radius of the cylinder is and the length is .we take a cylindrical coordinate system , and denote and as the coordinates of a material point of the cylinder in the reference and current configurations , respectively .the radial and axial displacements can be written as we suppose that the cylinder is composed of an incompressible hyperelastic material , for which the strain energy density is a function of the first two invariants and of the left cauchy - green strain tensor , i.e. 
, .moreover the first piola - kirchhoff stress tensor is given by where is the deformation gradient and is the indeterminate pressure . if the strains are small , it is possible to expand the first piola - kirchhoff stress components in terms of the strains up to any order .the expressions for the stress components are very lengthy , and due to the complexity of calculations , we shall only work up to the third - order material nonlinearity .the formula containing terms up to the third - order material nonlinearity has been provided by fu and ogden ( 1999 ) : where + + is the pressure value in eq .( 2 ) corresponding to zero strains , is the incremental pressure , and and are incremental elastic moduli defined by it can be found that where denotes the first - order partial derivative of with respect to the invariant at , denotes the first - order partial derivative of with respect to the invariant at . in the following derivations, we shall also use to denote the i - th order and the j - th order partial derivative of with respect to the invariants and at .all the coefficients in eq .( [ axishu ] ) can be expressed in terms of and and here for brevity the expressions are omitted . + the equilibrium equations for a static and axisymmetric problem are given by the incompressibility condition yields that we consider the case where the lateral surface of the cylinder is traction - free , then the stress components and should vanish on the lateral surface .thus , we have the boundary conditions : eqs .( [ e1])([t1 ] ) together with eq .( [ ji ] ) provide three governing equations for three unknowns and .the former two are very complicated nonlinear partial differential equations ( pde s ) and the boundary conditions ( [ b1 ] ) are also complicated nonlinear relations ( cf .( [ 12])([16 ] ) ) . to describe the localization, one needs to study the bifurcation of this complicated system of nonlinear pde s .as far as we know , there is no available mathematical method . here, we shall adapt a novel approach involving coupled series - asymptotic expansions to tackle this bifurcation problem .a similar methodology has been developed to study nonlinear waves and phase transitions ( dai and huo , 2002 ; dai and fan , 2004 ; dai and cai , 2006 ) .first , we shall nondimensionalize this system to identify the relevant small variable and small parameters .we introduce the dimensionless quantities through the following scales : where is the length of the cylinder , is a characteristic axial displacement and is the material shear modulus , with a transformation being defined by substituting eqs .( [ 2 ] ) , ( [ 10 ] ) and ( [ 11 ] ) into eqs . ( [ e1])([t1 ] ), we obtain +\varepsilon^2[v^2w_z+2sv(v_sw_z - v_zw_s)]=0 , \end{array}\label{14}\ ] ] where is regarded as a small parameter . for convenience ,we have replaced by in the non - dimensionalized equations . here and thereafter , a subscript letter is used to represent the corresponding partial derivative ( i.e. , ) .the full forms of ( [ 12 ] ) and ( [ 13 ] ) are very lengthy and we do not write out the nonlinear terms explicitly for brevity . 
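for reference , the constitutive framework quoted at the start of this section ( whose symbols were lost in extraction ) is the standard one for an incompressible hyperelastic solid ; in common notation , and up to sign conventions for the pressure , it reads
\[
\mathbf{P}=\frac{\partial W}{\partial\mathbf{F}}-p\,\mathbf{F}^{-\mathrm{T}},\qquad
\det\mathbf{F}=1,\qquad
\boldsymbol{\sigma}=-p\,\mathbf{I}+2\frac{\partial W}{\partial I_1}\mathbf{B}-2\frac{\partial W}{\partial I_2}\mathbf{B}^{-1},
\]
with \mathbf{B}=\mathbf{F}\mathbf{F}^{\mathrm{T}} the left cauchy - green tensor and W=W(I_1,I_2) the strain energy density . this is quoted only as orientation ; it is not a reconstruction of the paper s specific third - order expansion of the stress components .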
+ substituting ( [ 10 ] ) and ( [ 11 ] ) into the traction - free boundary conditions ( [ b1 ] ) , we have +\varepsilon^2[-p^*v^2 + 3b_{47}v^3 + 30b_7v^2w_z+b_{16}vw_z^2 + 4b_8w_z^3\\\vspace{.2 cm } + \nu(-2p^*(2vv_s+v_zw_s)+b_{22}v^2v_s+7b_9vv_z^2 + b_{23}vv_zw_s\\\vspace{.2 cm } + 28b_9vw_s^2 + 72b_{12}vv_sw_z+2b_{19}v_z^2w_z+2b_8v_zw_sw_z+8b_{19}w_s^2w_z\\\vspace{.2 cm } + b_{18}v_sw_z^2)+\nu^2(-4p^*v_s^2 + 3b_{31}vv_s^2 + 4b_{19}v_sv_z^2 + 8b_{13}v_sv_zw_s\\+16b_{19}v_sw_s^2+b_{20}v_s^2w_z)+\nu^3b_{21}v_s^3]|_{s=\nu}=0 , \end{array}\label{15}\ ] ] +\varepsilon^2[-p^*(vv_z+v_zw_z)\\\vspace{.2 cm } + b_{44}v^2v_z+2b_{16}v^2w_s+b_{30}vv_zw_z+b_{32}vw_sw_z+b_{13}v_zw_z^2\\\vspace{.2 cm } + b_{33}w_sw_z^2+\nu(-2p^*v_sv_z+b_{23}vv_sv_z - b_{3}v_z^3 + 56b_9vv_sw_s\\\vspace{.2 cm } + 2b_{11}v_z^2w_s-12b_3v_zw_s^2 + 16b_5w_s^3 + 2b_8v_zv_sw_z\\ + 16b_{19}v_sw_sw_z)+\nu^2(4b_{13}v_s^2v_z+16b_{14}v_s^2w_s)]|_{s=\nu}=0 , \end{array}\label{16}\ ] ] where is a small parameter for a slender cylinder .+ then , eqs .( [ 12])([16 ] ) compose a new system of complicated nonlinear pde s with complicated boundary conditions , which is still very difficult to solve exactly .however , it is characterized by a small variable and two small parameters ( and ) , which permit us to use expansion methods to proceed further . + * remark : * the coefficients in eqs .( [ 12])([16 ] ) can be expressed in terms of and , and for brevity we omit their expressions .we note that is also a small variable as .an important feature of the system ( [ 12])([16 ] ) is that the unknowns and become the functions of the variable , the small variable and the small parameters and , i.e. , to go further , we assume that have the following taylor expansions in the neighborhood of the small variable : substituting eqs .( [ 18])([20 ] ) into eq .( [ 13 ] ) and equating the coefficient of to be zero yields that similarly , substituting eqs .( [ 18])([20 ] ) into eq .( [ 12 ] ) and setting the coefficients of and to be zero , we obtain the expressions of and are very lengthy , whose expressions are omitted for brevity . from the incompressibility condition ( [ 14 ] ) , the vanishing of the coefficients of and leads to the following two equations : substituting eqs .( [ 18])([20 ] ) into the traction - free boundary conditions ( [ 15 ] ) and ( [ 16 ] ) , and neglecting the terms higher than , we obtain where the lengthy expressions for are omitted for brevity .( [ 21])([27 ] ) are seven nonlinear ordinary differential equations , which are the governing equations for the seven unknowns and .mathematically , it is still very difficult to solve them directly . to go further, we shall use the smallness of the parameter . from eq .( [ 24 ] ) , we obtain using the above equation in eq .( [ 25 ] ) , we can express in terms of and , and then from eqs .( [ 21 ] ) and ( [ 23 ] ) , we can express and in terms of and . substituting the expressions for and into eq .( [ 22 ] ) , can be expressed in terms of and .then , from eq .( [ 26 ] ) , can be expressed in terms of and . finally , from eqs .( [ 22 ] ) and ( [ 27 ] ) , we obtain we note that the above two equations come from the axial equilibrium equation ( the coefficient of ) and the zero tangential force at the lateral surface , the two most important relations for tension problems in a slender cylinder . 
+ by eliminating from eqs .( [ 29 ] ) and ( [ 30 ] ) and then expressing in terms of , finally we obtain an equation for the single unknown : by further using eq .( [ 28 ] ) , we obtain the following equation for the axial strain : integrating eq .( [ 36 ] ) once , we obtain where is an integration constant . to find the physical meaning of , we consider the resultant force acting on the material cross section that is planar and perpendicular to the cylinder axis in the reference configuration , and the formula is + after expressing in terms of by using the results obtained above , the integration can be carried out , and as a result we find that comparing eqs .( [ jifen ] ) and ( [ tzhi ] ) , we have .thus , we can rewrite eq .( [ tzhi ] ) as if we retain the original dimensional variable and let we have where since eq .( [ 41 ] ) is derived from the three - dimensional field equations , once its solution is found , the three - dimensional strain and stress fields can also be found .also , it contains all the required terms to yield the leading - order behavior of the original system .therefore , we refer eq .( [ 41 ] ) as the normal form equation of the system of nonlinear pde s ( [ 12])([14 ] ) together with boundary conditions ( [ 15 ] ) and ( [ 16 ] ) under a given axial resultant .it is also possible to deduce the equation for by considering the total potential energy and then using the variational principle . by using the expansions obtained in section 4, we can express the two principal invariants and in terms of .the results are ,\\\vspace{0.3 cm } i_2 - 3=\displaystyle3\varepsilon^2w_{0z}^2 - 4\varepsilon^3w_{0z}^3 + 5\varepsilon^4w_{0z}^4+s[\varepsilon^2(\frac{81}{64}w_{0zz}^2-\frac{15}{8}w_{0z}w_{0zzz})\\ \displaystyle+\varepsilon^3(-\frac{3(53\phi_{01}+71\phi_{10})}{64(\phi_{01}+\phi_{10})}w_{0z}w_{0zz}^2 + \frac{3(25\phi_{01}+16\phi_{10})}{8(\phi_{01}+\phi_{10})}w_{0z}^2w_{0zzz})].\label{43 } \end{array}\ ] ] from eqs .( [ 43 ] ) , we know the first terms in the right - hand sides of eqs .( [ 43 ] ) are second - order nonlinear . to be consistent with the third - order material nonlinearity of the stress components , the strain energy should be expanded up to the fourth - order nonlinear terms .so , according to the taylor s expansion , we have +\cdots.\label{44 } \end{array}\ ] ] in eq .( [ 44 ] ) it is sufficient to keep the second - order terms of and . 
substituting the expressions of and in eqs .( [ 43 ] ) into eq .( [ 44 ] ) , we have .\label{45 } \end{array}\ ] ] the stored energy per unit length is given by substituting eq .( [ 45 ] ) into eq .( [ 46 ] ) and carrying out the integration , we obtain the average stored energy over a cross section : .\label{47 } \end{array}\ ] ] letting we can further rewrite the above equation as ,\label{48 } \end{array}\ ] ] where is the young s modulus , and are constants related to material parameters .+ the total potential energy for a force - controlled problem is given by by the variational principle , we have the following euler - lagrange equation : which yields that which is just eq .( [ 41 ] ) .this shows that the normal form equation ( [ 41 ] ) obeys the variational principle for energy .+ if we multiply to both sides of eq .( [ 52 ] ) , it can be integrated once to yield that where is an integration constant .+ in the following section , we shall discuss the solutions for two boundary - value problems based on eqs .( [ 52 ] ) and ( [ 53 ] ) , and reveal their main characteristics .we rewrite eq.([52 ] ) as a first - order system as follows : without loss of generality , we take the length of the cylinder to be , then is equivalent to the radius - length ratio . we suppose that the two plane ends of the cylinder are attached to rigid bodies .then we have and we point out that although eq .( [ 52 ] ) is one - dimensional , it is derived from the three - dimensional governing equations , and as a result we can also derive the proper boundary conditions by considering the condition in the other ( radial ) dimension such as eq .( [ 58 ] ) .if one directly introduces a one - dimensional model ( say , using a gradient theory ) , such an option is not available .so , this is another advantage of eq .( [ 52 ] ) . + from eqs .( [ 57 ] ) and ( [ 58 ] ) , we have substituting eq .( [ 59 ] ) into eq .( [ t1 ] ) and integrating with respect to once , we obtain thus , the proper boundary conditions for eq .( [ 52 ] ) are to solve this boundary - value problem of the first - order system ( [ 56 ] ) under eq .( [ 61 ] ) , we shall conduct a phase - plane analysis with the engineering stress as the bifurcation parameter .the critical points of this system is given by here we shall consider a class of strain energy functions such that the plot based on eq . has one peak , and this requires that the curves corresponding to eq . are plotted in fig . 1 . in this figure , is the local maximum of the stress and is the corresponding strain value , and they are given by when we take and , ; when we take and , ; when we take and , . the three curves in fig .1 correspond to these values of , and , respectively .+ in the following discussions we consider the tension case so that .similar analysis can be made for the compression case , which will not be discussed here .equation ( [ 53 ] ) can be rewritten as in this paper , we consider the case of .new phenomena can arise for and the results will be reported elsewhere .for the present case , the phase plane always has a saddle point and a center point as varies , which is shown in fig .2 . in this figure , and are a saddle point and a center point , respectively .there are two solutions for the same stress , which are represented by the curve 1 and the curve 2 in fig .2 , respectively . 
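the " multiply by w_z and integrate once " step used above , and the phase - plane picture that follows , rest on an elementary identity , written here generically with f standing for the nonlinearity of the normal form equation :
\[
w_{zz}=f(w)\quad\Longrightarrow\quad \tfrac{1}{2}\,w_z^{2}=\int f(w)\,\mathrm{d}w+C ,
\]
so each trajectory in the ( w , w_z ) plane is a level set of \tfrac{1}{2}w_z^{2}-\int f(w)\,\mathrm{d}w , and the critical points of the first - order system are the points ( w^{*},0 ) with f(w^{*})=0 , which is how the saddle and the center mentioned above arise .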
for curve 1 , the right hand of eq .( [ y59 ] ) have four real roots , which we label in an increasing order by and .we note that the smallest root is smaller than .so , from eq .( [ y59 ] ) we obtain the following expression : then , an integration leads to \\ \displaystyle\frac{1}{2}+\frac{a}{\sqrt{2d_2}}\int_v^{\alpha_1}\sqrt{\frac{1 - 8d_3t}{(t-\alpha_1)(t - g_1)(t - g_2)(t-\alpha_2)}}dt , z\in [ \frac{1}{2},1 ] .\end{array}\right.\label{y61}\ ] ] by eq .( [ 61 ] ) , can be determined by the following two equations : once is found , the solution corresponding to curve 1 can be obtained from eq .( [ y61 ] ) by numerical integration . in fig .3 , we have plotted the solution curves for three different values of the engineering stress . from this figure, we find there is nearly a uniform extension in the middle part , but there are two boundary - layer regions near the two ends of the cylinder in order to satisfy the boundary conditions . + there is another solution which is represented by curve 2 in fig . 2 , and we denote the point as at which .then , eq . ( [ y59 ] ) can be rewritten as },\label{64}\ ] ] where is another real root of the right - hand of eq .( [ y59 ] ) , and and .+ then , we obtain }}dt , z\in [ 0,\frac{1}{2 } ] \\\displaystyle\frac{1}{2}+\frac{a}{\sqrt{2d_2}}\int_v^{\alpha}\sqrt{\frac{1 - 8d_3t}{(\alpha - t)(\beta - t)[(t - m)^2+n^2]}}dt , z\in [ \frac{1}{2},1 ] .\end{array}\right.\label{65}\ ] ] by eq .( [ 61 ] ) , can be determined by by numerical integration , we can get from eqs .( [ 66 ] ) and ( [ 67 ] ) .then the solution corresponding to curve 2 can be obtained from ( [ 65 ] ) by numerical integration . in fig .4 , we have plotted the solution curves for three different values of the engineering stress . in this figure, there is a sharp - change region in the middle of the slender cylinder , that represents the localization and energy concentration .moreover , the tip is sharper when the engineering stress is smaller .+ from eq .( [ 65 ] ) , one can see that the localization solution depends on through the form and this implies that the localization zone width is proportional to for a fixed length ; see jansen and shah ( 1997 ) .+ the solutions obtained above are for a given . to obtain the solutions for a displacement - controlled problem , we follow the idea in dai and bi ( 2006 ) .for that purpose , we need to get the engineering stress - strain curve .+ the total elongation is given by where the total elongation is actually the engineering strain since we have taken the length of the cylinder to be 1 . according to the symmetric phase plane and eqs .( [ y61 ] ) and ( [ 65 ] ) , is a function of , so we can get the total elongation by numerical integrations . in fig . 5 , we have plotted the curves between the total elongation and the engineering stress with different material coefficients corresponding to fig . 1 .segment 1 corresponds to the solution given by eq .( [ y61 ] ) ( we call it as solution 1 ) , and segment 2 corresponds to the solution given by eq . ( [ 65 ] ) ( we call it as solution 2 ) . for a displacement - controlled problem ( i.e. , given ) , from fig . 5, we can get the corresponding value(s ) , then the solution(s ) is given by eq .( [ y61 ] ) or eq .( [ 65 ] ) .+ from eqs .( 1 ) , ( [ 10 ] ) , ( [ 11 ] ) , ( [ 19 ] ) and ( [ 20 ] ) , we can get the shapes of the cylinder corresponding to eq .( [ 65 ] ) under different material coefficients for a given , which are shown in fig . 6 , where we take and . 
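the solution profiles above are obtained by evaluating integrals of the type in eq . ( [ y61 ] ) numerically ; a minimal java sketch of such a quadrature is given below . the integrand shown is a placeholder with the same inverse - square - root endpoint behaviour , not the paper s exact coefficients , and the open midpoint rule is chosen only because it never evaluates the singular endpoint .
....
// illustrative only : composite open midpoint rule for integrals of the type appearing
// in eq. ([y61]) , whose integrands blow up like 1/sqrt at an endpoint . the integrand
// below is a placeholder , not the paper's exact one ; convergence is slow near the
// singular endpoint , and a substitution t = alpha + u*u would remove the singularity .
public final class MidpointQuadrature {

    interface Integrand { double value(double t); }

    // integrate f over [a, b] with n subintervals , never touching a or b
    static double integrate(Integrand f, double a, double b, int n) {
        double h = (b - a) / n;
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double t = a + (i + 0.5) * h;   // midpoint of subinterval i
            sum += f.value(t);
        }
        return sum * h;
    }

    public static void main(String[] args) {
        double alpha = 0.1;                 // assumed smallest root , illustrative value only
        Integrand f = t -> 1.0 / Math.sqrt(t - alpha);
        // exact value of this test integral is 2 * sqrt(b - alpha)
        double approx = integrate(f, alpha, 0.5, 200000);
        System.out.printf("approx = %.4f , exact = %.4f%n",
                approx, 2.0 * Math.sqrt(0.5 - alpha));
    }
}
....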
in this figure ,the width of the localization is defined as , where is the point where the rate of the slope of the surface radial displacement is the maximum . from the above figure, one can see that for different material coefficients the localization widths are different and the localization width is almost the same for the same material coefficients with different loads of engineering stress .that is to say , for different materials the localizations have different widths , but for the same material , the localization width is almost uniform during the loading process . + here and thereafter , we fix the material constants to be and . by the same way, we can get the relations between the total elongation and the engineering stress with different radius - length ratios , which are shown in fig .7 . in this figure , we observe that there is a snap - back for a relatively small value of .we also see that the point ( across which there are multiple values for for a given ) moves up and toward right as the value of increases . for example , when and when .the post - peak curves show very significant changes .there is no unique stress - displacement relationship in the post - peak region .the thinner the specimen is , the steeper the curve becomes , which is in agreement with the experimental results by jansen and shah ( 1997 ) . from this figure, we see that for , unstable behavior is predicted for a displacement - controlled loading whereas larger values of yield results that are stable .similar conclusions follow from the examples given by schreyer and chen ( 1986 ) .+ as to , there is a stable relation between the total elongation and the engineering stress , which is in agreement with the experiment result by gopalaratnam and shah ( 1985 ) , who conducted tensile tests on concrete under carefully controlled loading conditions and with refined measuring techniques .+ we note that for a displacement - controlled problem , after the elongation ( cf .7 ) there are bifurcations from one solution to two solutions ( at ) , to three solutions ( ) , to two solutions ( ) , and to one solution ( ) .the shapes of the cylinder corresponding to these solutions are shown in fig .8 for and .the above figure also manifests that the middle of the cylinder becomes thinner than the two ends as we pull the slender cylinder .the middle part is thinner as the engineering stress decreases for a given , which agrees well with the experimental results .as discussed in the previous section , for a relatively small , there are multiple solutions for . of course ,in reality only one solution can be observed at one instant . in this section , we shall further consider the energy values for these solutions to deduce which one is most preferred .+ from eq .( [ 53 ] ) , we have substituting eq .( [ fang66 ] ) into eq .( [ 52 ] ) , we obtain then by using eq .( [ 49 ] ) , we can express the potential energy ( for a given ) in terms of ( scaled by ) : the stored energy is given by then from eqs .( 58 ) , ( 61 ) , ( 67 ) and ( 68 ) , one can calculate the energy distributions for a given elongation .the stored energy curves corresponding to those values of in fig .8 are plotted in fig .9 . 
in this figure ,labels 1 , 2 and 3 correspond to different values of ( in a decreasing order ) .+ in fig.9(a ) , the total stored energy values for curve 1 and curve 2 are respectively and .thus for a displacement - controlled problem , as , the shape in the right of fig .8(a ) represents a preferred configuration , and at there could be a bifurcation from solution 1 to solution 2 ( a localization solution ) . correspondingly , there is a snap - through in the engineering stress - strain curve at .+ in fig.9(b ) , the total stored energy values for curve 1 , curve 2 and curve 3 are respectively and . for a displacement - controlled problem , as is the smallest , the shape in the bottom of fig .8(b ) represents a preferred configuration .+ in fig.9(c ) , the total stored energy value for curve 1 and curve 2 are respectively and . for a displacement - controlled problem , as , the shape in the right of fig .8(c ) represents a preferred configuration .+ in fig.10 , we have plotted the engineering stress - strain curve corresponding to the preferred configuration for a displacement - controlled problem .we see that a snap - through takes place at , which leads to the localization ( as represented by solution 2 ) .once the localization happens , there is a high concentration of energy around the middle of the cylinder .it is known that if the strain energy density reaches a critical value there will be the material failure .the analytical results obtained here can be used to calculate the stored energy at any material point .the largest energy value is attained at at which ( cf .( 61)(63 ) ) . from eqs .( 40 ) , ( 65 ) , and ( 66 ) , we can express the energy value at this point in terms of , and the result is the values of corresponding to those values of ( in an increasing order ) in fig .8 for preferred configurations are respectively and .+ based on the maximum - distortion - energy theory ( the huber - hencky - von mises theory ; see riley et al .2007 ) , there are two portions of the strain energy intensity : one for volume change and the other for shape change . in the present work , we consider an incompressible material , so there is no strain energy intensity corresponding to the volume change .then the strain energy is only due to distortion . on the other hand ,the strain energy intensity attains its maximum value at the material point .thus , we can give the failure criterion where is the failure value of the strain energy intensity for a given material .fracture will occur whenever the energy by eq .( 69 ) exceeds the limiting value .to study the localization phenomenon , a phenomenological approach is employed to formulate a three - dimensional boundary value problem with an incompressible hyperelastic constitutive law .a coupled series - asymptotic expansion procedure is developed to solve the non - dimensionalized system of governing differential equations with given boundary data for a slender cylinder subjected to axial tension . with the assumptions appropriate for the slender cylinder ,analytical solutions have been obtained for the axisymmetric boundary value problem , which demonstrate the essential features of localization problems and are consistent with the experimental data available . 
specifically , the width of localization zone depends on the material parameters , and it remains unchanged for the same material in the post - peak regime .also , the snap - back and snap - through phenomena could be predicted with the analytical approach , and a preferred configuration in the post - peak regime could be identified via an energy analysis . due to the lack of three - dimensional analytical results available in the open literature , the analytical work presented in this paper could complement the analytical , experimental and numerical efforts made by the research community for the localization problems over the last several decades . + as indicated by buehler et al .( 2003 ) , the hyperelasticity is crucial for understanding and predicting the dynamics of brittle fracture . especially , the effect of hyperelasticity is important for understanding the failure evolution in nanoscale materials . since localization identifies the onset of material failure , future work will focus on the identification of the parameters proposed in the current phenomenological model , and on the linkage between the continuum and fracture mechanics approaches , via multiscale analysis .the work described in this paper is supported by two grants from cityu ( project nos : 7002107 and 7001861 ) .dai , h. h. , cai , z. x. , 2006 .phase transitions in a slender cylinder composed of an incompressible elastic material .i. asymptotic model equation .ser . a math .462 , 7595 .dai , h. h. , fan , x. j. , 2004 .asymptotically approximate model equations for weakly nonlinear long waves in compressible elastic rods and their comparisons with other simplified model equations .solids 9 , 6179 .
in this paper , we study the localization phenomena in a slender cylinder composed of an incompressible hyperelastic material subjected to axial tension . we aim to construct the analytical solutions based on a three - dimensional setting and use the analytical results to describe the key features observed in the experiments by others . using a novel approach of coupled series - asymptotic expansions , we derive the normal form equation of the original governing nonlinear partial differential equations . by rewriting the normal form equation as a first - order dynamical system and with the help of the phase plane , we manage to solve two boundary - value problems analytically . the explicit solution expressions ( in terms of integrals ) are obtained . by analyzing the solutions , we find that the width of the localization zone depends on the material parameters but remains almost unchanged for the same material in the post - peak region . also , it is found that when the radius - length ratio is relatively small there is a snap - back phenomenon . these results are in good agreement with the experimental observations . through an energy analysis , we also deduce the preferred configuration and give a prediction of when a snap - through can happen . finally , based on the maximum - distortion - energy theory , an analytical criterion for the onset of material failure is provided . keywords : localization ; hyperelasticity ; cylinder ; bifurcations of pdes
quantum clock synchronization protocols are of fundamental interest in quantum information , since they can illustrate how information about time is encoded in quantum systems . in general , there are presently two approaches to the problem .the first is based on the correlations between photon arrival times , or the related arrival times of optical signals detected by homodyne detection .the second approach is based on the internal time evolution of quantum systems .although the latter approach requires an effective suppression of decoherence and is therefore much more challenging to implement , it might be of greater fundamental interest , since it allows a very general treatment of time in quantum mechanics .specifically , it can show how the time evolution of quantum systems affects the non - classical correlations between entangled quantum systems .initially , it was shown that two - party quantum clock synchronization protocols can be used for efficient clock synchronization by using the enhanced sensitivity of bipartite entangled states to small time differences between the measurements performed by the two parties .later , krco and paul extended this idea to a multi party version , where a w - state was used to simultaneously provide bipartite entanglement between a central clock and several other parties .however , the bipartite entanglement obtained from the w - state decreases rapidly with an increase in the number of clocks .ben - av and exman pointed out that this is a weakness of the w - state that can be overcome by using other dicke states instead .specifically , they showed that the optimal bipartite entanglement for this kind of protocol is obtained by using the symmetric dicke states , where half of the qubits are in the 0 state and half are in the 1 state .interestingly , none of these protocols uses the specific properties of multipartite entanglement .the reason for this may be that it is a straightforward matter to measure and evaluate the correlations between two parties , while genuine multi - partite entanglement is characterized by more complicated correlations that involve the measurement results observed at all locations . here ,we consider the question of whether this different type of entanglement could be used for clock synchronization by constructing a protocol that accesses the maximal entanglement of ghz - type states through an appropriate combination of measurement and communication between the parties .the result should be significant both for determining the limits of multi party clock synchronization and for our general understanding of time in multi - partite entanglement . as we discuss below, a protocol using ghz - type states requires a specific division of the parties into groups during each distribution of the entangled qubits , since there is no ghz - type state that is both an energy eigenstate and symmetric in all parties .the logic of the two party protocol can then be applied to the collective information of all measurements in the two groups . by selecting an appropriate set of divisions ,it is thus possible to use the complete entanglement of the ghz - state to efficiently synchronize all n clocks simultaneously . 
in the following ,we introduce the protocol , evaluate its efficiency and compare it with the efficiencies obtained from bipartite entanglement in the protocols based on parallel two party synchronization and on the use of symmetric dicke states .here , we consider a type of clock synchronization protocol where entangled qubits are distributed to various parties holding the individual clocks . each quantum system is described as a two - level spin precessing around the z - axis at a fixed frequency . for experimental realizations, it would be important to keep decoherence effects to a minimum , e.g. by using nuclear spins precessing in the intrinsic field of a molecule or crystal .however , it should be kept in mind that the present research is not motivated by such technical considerations , but by an interest in the fundamental nature of quantum clocks , as introduced in the semininal work by buzek et al . . in this spirit, we assume that after state preparation , the evolution of the internal spin state only depends on the passage of time . the problem of clock synchronizationcan then be reduced to the problem of identifying the time differences between time - dependent measurements performed on the different clocks . since clock synchronization should not depend on a knowledge of the time needed for state distribution , the multipartite entangled states used should be energy eigenstates .it is therefore not possible to use ghz states that are superpositions of the two extremal eigenstates of energy , where all qubits are in the same state of their local energy basis . to obtain an energy eigenstate without changing the multipartite entanglement ,half of the local energy eigenstates should be flipped by appropriate local unitary transformations . if the qubits are arranged so that the first half of the qubits is unflipped and the second half of the qubits is flipped , this -partite entangled energy eigenstate can be given in the energy basis as here and in the following , we assume an even number of parties .the states and are local energy eigenstates with energies and , respectively . the state given by eq.([maximal entanglement ] ) divides the qubits into two groups . to ensure clock synchronization between all parties , it is necessary that no two parties are always members of the same group .this is achieved by distributing the qubits in different ways , so that each party sometimes receives a qubit from the unflipped group , and sometimes receives a qubit from the flipped group . to describe each distribution, we define a sequence , where .if the qubit of the clock owner is a flipped qubit , , if not , . since the numbers of flipped and unflipped qubits are equal , the number of possible distributions is given by the binomial coefficient .in the most simple version of the protocol , the division into groups can be decided randomly in each run , with equal probabilities for each distribution .after the distribution of the qubits to the locations of the different clocks , each of the parties measures a time dependent observable on its qubit when their local clock points to a specific time .the observable measured at a time can be written as the eigenvalues of the measurement outcomes are .the eigenstates corresponding to the measurement outcomes are equal superpositions of and , where the phase now depends on the time at which the measurement is performed . 
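the equations in this paragraph lost their symbols in extraction ; a plausible reconstruction , consistent with the description of half of the qubits being flipped and with the usual construction in this literature ( and therefore an assumption , not a quotation ) , is
\[
|\Psi\rangle=\frac{1}{\sqrt{2}}\Big(|0\rangle^{\otimes N/2}\,|1\rangle^{\otimes N/2}+|1\rangle^{\otimes N/2}\,|0\rangle^{\otimes N/2}\Big),
\]
which is an eigenstate of the total energy with eigenvalue N(E_0+E_1)/2 , together with a local time - dependent observable of the form
\[
\hat{\sigma}(t)=\cos(\omega t)\,\hat{\sigma}_x+\sin(\omega t)\,\hat{\sigma}_y,\qquad \hbar\omega=E_1-E_0 ,
\]
up to sign and phase conventions .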
as a result, this measurement achieves the maximal time sensitivity for local qubit measurements .the time sensitivity of the maximal multi - partite entanglement of the ghz - type energy eigenstate given in eq.([maximal entanglement ] ) originates from the coherence between the components and .this coherence , which represents the full multi - partite entanglement of the state , changes the probability of the collective measurement outcome depending on the product of the coherences between the and components in the eigenstates representing the local measurement outcomes . as a result, the time sensitivity of multi - partite entanglement can be represented by the espectation value for the product of all outcomes , . if the actual measurement times of the parties are given by the expectation value of this productis since the local measurements represent the maximal time sensitivity for the local qubits , and since the coherence that characterizes multi - parite entanglement of the ghz - type can only be observed in the product of all local measurement outcomes , we can conclude that the time dependence shown in eq.([average number ] ) is the strongest dependence on the measurement outcomes that can be achieved with ghz - type states and local measurements . a protocol that makes use of the time dependent correlations between all of the measurements therefore accesses the full power of maximal multipartite entanglement for clock synchronization . to access the time sensitivity of the ghz - type state, all parties must share their measurement results and determine the product of all outcomes .effectively , the parties cooperate to measure a single -particle interference fringe that is sensitive to the collective phase given by times the difference between all measurement times of the unflipped qubits ( ) and the measurement times of all flipped qubits ( ) .significantly , the use of maximal multi - partite entanglement for clock synchronization critically depends on simultaneous access to all measurement outcomes .it therefore requires classical communication between all the parties , as indicated in fig .the full sensitivity of maximal multipartite entanglement only becomes available for use in the clock synchronization process , if all of the parties cooperate .[ ht ] , scaledwidth=20.0% ] in the final step of the synchronization protocol , the clock owners have to estimate the time differences between their respective local clocks and a standard time .since the present protocol is fully cooperative , with complete symmetry between all parties , it seems natural to define this standard time as the average of all clock times . in the following ,we therefore discuss the synchronization to average time . to achieve synchronization to a standard clock , one can simply change the adjustments of the time , so that instead of changing his or her own time , the owner of the standard clock makes everyone else subtract his or her time adjustment from theirs . to ensure that all parties are treated equally , it is possible to use a random distribution of qubits , so that every distribution of flipped and unflipped qubits is equally likely . 
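as a toy illustration of the fringe - inversion step just described , the java sketch below draws the product of the local outcomes as a plus - or - minus - one variable whose mean is the cosine of the collective phase , and then inverts the estimated fringe . the cosine model is the hedged reconstruction discussed above , and the frequency , the offsets and the group assignment are made - up values .
....
import java.util.Random;

// toy monte - carlo check of the ghz - type correlation described above : the product of
// the n local outcomes is modelled as +/-1 with mean cos(omega * delta) , where delta is
// the signed sum of the clock offsets for one fixed distribution . all numbers below
// ( omega , offsets , group assignment ) are made - up illustrative values .
public final class GhzFringeEstimate {
    public static void main(String[] args) {
        Random rng = new Random(42);
        double omega = 1.0;                              // precession frequency ( arbitrary units )
        double[] offsets = {0.05, -0.12, 0.30, -0.07};   // assumed local clock offsets t_i
        int[] flips = {+1, +1, -1, -1};                  // +1 = unflipped group , -1 = flipped group

        double delta = 0.0;                              // signed sum entering the collective phase
        for (int i = 0; i < offsets.length; i++) delta += flips[i] * offsets[i];

        double pPlus = 0.5 * (1.0 + Math.cos(omega * delta));   // probability of product = +1
        int runs = 200000;
        long sum = 0;
        for (int r = 0; r < runs; r++) sum += (rng.nextDouble() < pPlus) ? 1 : -1;

        double mean = (double) sum / runs;
        // invert the fringe ; the sign ambiguity of acos would be resolved in a real
        // protocol by a second measurement setting
        double estimated = Math.acos(Math.max(-1.0, Math.min(1.0, mean))) / omega;
        System.out.printf("true |delta| = %.4f , estimated = %.4f%n", Math.abs(delta), estimated);
    }
}
....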
to keep track of the different distributions , we assign an index to each , so that the elements of each sequence are given by .the total time difference that defines the phase shift in the multipartite interference fringe observed in the measurement of the distribution with index is then the time differences can be estimated from the outcome statistics of the measurements with an accuracy of , where is the number of times that the distribution is received and measured .after a sufficiently large number of measurements , all parties have the same estimates for all possible time differences .however , the implications of each are different for each party .specifically , each party can obtain the difference of times with and the times with , for , the coefficient in the sum is always , so that the time of the local clock always enters into the sum with a positive value .since all the other times enter into the sum equally , and since the number of coefficients and coefficients is exactly equal , the result can be expressed in terms of the difference between the time and the average of all times other than , the average of all times is obtained by the weighted average of and times . hence , the difference between the local time and the average time can be given by , after this value is determined by each party , it can be subtracted from each local clock time to adjust the clock times so that they correspond to the average time .to determine the efficiency of a clock synchronization protocol , it is necessary to evaluate the precision with which the parties can estimate the adjustment time .in general , this precision is limited by the statistical variance of the measurement results . as mentioned above ,the estimation errors for the time differences are given by , where is the number of times that the distribution was measured .since the adjustment times are linear functions of the , it is sufficient to find the sum of the quadratic errors with the appropriate coefficients to obtain the adjustment errors if each distribution is measured an equal number of times , can be expressed as the total number of measurements divided by the number of possible distributions .likewise , the sum over reduces to a simple multiplication with the number of possibilities . in the end , the estimation error for each adjustment time is given by in the limit of large , this error is simply , independent of the number of parties participating in the clock synchronization .this means that the maximally multipartite entangled states can be used to synchronize clocks in parallel , without any loss of precision when additional parties are added .since the parallel synchronization of clocks can also be achieved by performing separate synchronizations of clocks with the same standard clock , it is not immediately clear whether multipartite entanglement has any advantages over multiple bipartite entangled states . in the following ,we therefore analyze the efficiency of multipartite clock synchronization using the initial proposal for quantum clock synchronization between two parties .the complete multi party protocol is illustrated in fig.2 .there are spatially separated unsynchronized clocks , one of which is the standard clock .the remaining clock owners synchronize their clocks with this standard clock using bipartite entanglement and classical communication . 
for this purpose ,the owner of the central clock must share maximally entangled two qubit states with all of the other parties for each measurement .effectively , each step of the protocol uses a qubit state given by here , every second qubit is held by the owner of the central clock . at a predetermined time , the owner of the central clock measures the value of on all of her qubits , where is the index of the party that holds the qubit entangled with the qubit .likewise , the other parties measure the value of on their individual qubits according to their local times .the central clock then communicates each result of to the party concerned . after a sufficiently large number of measurements , the owner of clock then determine the expectation value of the product , the clock owners can then determine directly and adjust their clocks accordingly .[ ht ] , scaledwidth=20.0% ] the efficiency of clock synchronization can be evaluated by considering the estimation error for each estimate of adjustment time . for measurements ,this error is given by .thus the precision of the time estimates in this protocol is exactly equal to the result for the protocol using multipartite entanglement .however , the parallel distribution of bipartite entanglement requires qubits for each measurement , as compared to only qubits for the multipartite entangled protocol . in terms of the required number of qubits , the use of multipartite entangled states can thus increase the efficiency by a factor of two . effectively , the main effect of multipartite entanglement seems to be that the need for multiple reference qubits held by the owner of the central clock is removed by allowing the parties to use the qubits of all the other parties as a collective reference instead .previous multi party clock synchronization protocols were based on parallel clock synchronization using the bipartite entanglement available from w - states or from symmetric dicke states . in particular, ben - av and exman showed that the symmetric dicke state is optimal in the sense that it maximizes the bipartite entanglement between the single qubit held by the owner of the central clock and the qubits held by all of the other parties .their protocol is illustrated in fig .it uses the same measurement and communication procedure as in the parallel distribution of entanglement , but with only a single qubit at the central clock for a total number of qubits per measurement - the same number as our ghz state protocol , and almost half the number of qubits used in the parallel distribution protocol .[ ht ] , scaledwidth=20.0% ] the symmetric dicke states is an equal superposition of all energy eigenstates with half of the qubits in the state and half in the state , the bosonic symmetry of the state means that the qubits tend to be found in the same superposition states of and , resulting in positive correlations between the values of obtained by the different parties at the same time .specifically , the correlation between the measurement at the central clock and the measurement at clock is given by at the maximal time derivative of the expectation value , the error in the adjustment time for measurements is given by . 
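the two correlation formulas whose symbols were lost above can be reconstructed from standard reduced - state calculations , up to sign conventions fixed by the particular states : for a maximally entangled pair the correlation is \cos(\omega(t_0-t_k)) , while for the symmetric dicke state with N/2 excitations a short computation with the two - qubit reduced density matrix gives
\[
\langle\hat{\sigma}_0(t_0)\,\hat{\sigma}_k(t_k)\rangle=\frac{N}{2(N-1)}\,\cos\big(\omega(t_0-t_k)\big),
\]
so the fringe amplitude is roughly halved for large N and the adjustment - time error grows accordingly ; this is a reconstruction consistent with the factor - of - two statement that follows , not a quotation of the original equations .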
in the limit of large , this error is equal to twice the error of our ghz state protocol and the parallel distribution protocol .hence , this protocol requires four times as many qubits to achieve the same accuracy as the ghz state protocol , and twice as many qubits as the parallel distribution protocol .the reduction in qubit number over parallel distribution of bipartite states is therefore more than offset by the loss of sensitivity in each individual measurement due to the reduction in the available bipartite entanglement .we can now summarize our results in terms of the accuracy of clock synchronization achieved with a given number of qubits .since the timescale is defined by the resonant frequency of the qubit dynamics , it is convenient to define the relative accuracy as . for the ghz - type multipartite entanglement ,the accuracy of measurements using qubits is then given by for high , the accuracy is equal to the number of qubits per party , so the accuracy of the multi party protocol scales linearly with the ratio of qubits and parties , .significantly , the straightforward extension of the bipartite protocol by parallel distribution of entangled qubit pairs performs only half as well .specifically , the accuracy of measurements using qubits is the need for extra reference qubits held by the owner of the central clock therefore rapidly reduces the efficiency of each qubit to half the value achieved by the protocol using maximal multipartite entanglement .finally , the protocol using the simultaneous bipartite entanglement between a single central qubit and others achieves a sensitivity reduced by a factor of due to the reduction in bipartite entanglement associated with the increase in entangled partners for each qubit .the accuracy of measurements using qubits is therefore in the limit of high , this is a reduction to one quarter of the ghz - type protocol , twice as much as the reduction in accuracy due to the additional reference qubits in the parallel distribution protocol .we have shown how the maximal -partite entanglement of ghz - type stated can be used for multi party clock synchronization by randomly dividing the parties into two groups during each run and sharing the measurement results with all other parties to determine the adjustments necessary to set each local clock to the average time of all clocks .the accuracy of clock synchronization corresponds to the accuracy achieved by bipartite protocols in parallel , but the number of qubits used is reduced by half .oppositely , the previously proposed use of symmetric dicke states uses the same number of qubits , but the accuracy is only one quarter due to the reduced amount of bipartite entanglement .our results show that the full power of maximal multipartite entanglement can improve the performance of clock synchronization by reducing the number of qubits needed to achieve a given accuracy by a factor of two when compared to the most efficient use of bipartite entanglement .although this is clearly an improvement , it is much less than the improvements of sensitivity when a single parameter is estimated using multi - partite entangled probes .the reason for this limited improvement is that different clock times must be estimated from the same measurement result , leading to a reduction of precision that exactly compensates the gain caused by the increased sensitivity to an average shift in time . 
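the definition of the relative accuracy was lost above ; the quoted ratios are reproduced if the accuracy is read as the inverse square of the normalised timing error , in which case , for a total of Q qubits shared among N parties ,
\[
S_{\mathrm{GHZ}}\simeq\frac{Q}{N},\qquad
S_{\mathrm{parallel}}\simeq\frac{Q}{2(N-1)}\simeq\frac{Q}{2N},\qquad
S_{\mathrm{Dicke}}\simeq\frac{N\,Q}{4(N-1)^{2}}\simeq\frac{Q}{4N}\quad(\text{large }N).
\]
these expressions are an inference from the stated large - N ratios , not a quotation of the stripped formulas .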
from the viewpoint of an individual clock owner , multi - partite entanglement merely replaces the single central clock used in parallel clock synchronization with bipartite entangled states with the collective of all other clock owners . overall , the efficiency increases by a factor of two , because the simultaneous role of all clock owners as participants and as reference overcomes the need for additional reference qubits . in this sense , multi - partite entanglement simply represents the simultaneous availability of quantum correlations to all parties , without any increase in the individual time sensitivities . from a practical viewpoint , the use of multi - partite entanglement may be difficult , since the loss of a single qubit will completely destroy the essential quantum coherence of the state . to obtain results close to the ones described here , the probability of local losses or dephasing errors must be kept far below . conversely , this sensitivity to decoherence also highlights the cooperative nature of the protocol : if only a single party sends wrong information , the synchronization of the clocks becomes impossible . thus , clock synchronization with maximal multipartite entanglement underlines the cooperative nature of multi - party quantum protocols . in conclusion , the analysis presented here shows how the full power of maximal multipartite entanglement can be used to improve the performance of clock synchronization if all of the parties involved cooperate to share their measurement information . the results may provide interesting insights , both into the role of entanglement in clock synchronization protocols , and into the fundamental nature of time - dependent quantum correlations . part of this work has been supported by the grant - in - aid program of the japanese society for the promotion of science , jsps .
we propose a multi party quantum clock synchronization protocol that makes optimal use of the maximal multipartite entanglement of ghz - type states . to realize the protocol , different versions of maximally entangled eigenstates of collective energy are generated by local transformations that distinguish between different groupings of the parties . the maximal sensitivity of the entangled states to time differences between the local clocks can then be accessed if all parties share the results of their local time dependent measurements . the efficiency of the protocol is evaluated in terms of the statistical errors in the estimation of time differences and the performance of the protocol is compared to alternative protocols previously proposed .
we are envisioning a smart iot system , addressing the key challenges as described below : the first challenging problem is that devices are not interoperable at any level with each other since most of the time technologies differ from one to another .for instance , in contemporary iot applications multiple competing application level protocols such as constrained application protocol ( coap ) , message queue telemetry transport ( mqtt ) and extensible messaging and presence protocol ( xmpp ) are becoming popular .each protocol possesses unique characteristics and messaging architecture helpful for different types of iot applications .however , a smart iot application architecture should be independent of messaging protocol standards , while also providing integration and translation between various popular messaging protocols . similarly to proprietary protocols , at the data level, devices do not use common terms or vocabulary to describe interoperable iot data .the traditional paradigm of the iot service model provides unformated data names as `` raw '' sensor data .this `` raw '' sensor data does not contain any aggregated description ( usually representation through semantic annotations ) and requires specialized knowledge and manual effort in order to build practical applications .much of the current use of iot is targeted to a single domain and most of the times the number of sensors are duplicated unnecessarily .for instance , temperature sensors in a building primarily used for a heating , ventilating , and air conditioning ( hvac ) application .however , values produced by temperature sensors could be used in other applications such as fire detection .the primary advantage of using common sensors into various applications is that it can reduce development , maintenance and deployment costs and promote device reusability . to enable crossdomain applications and address interoperability issues ,a smart iot system is needed to publish their outputs and to describe device information in a wellunderstood format with added metadata and machineprocessable formats , thus making devices accessible and usable in multiple applications . in iot systems , users are primarily interested in realworld entities ( such as people , places and things ) and their highlevel states ( e.g. , deriving snowfall from temperature and precipitation measurements ) rather than raw output data produced by sensors attached with these entities . to achieve this requirement ,a smart iot system has to provide highlevel knowledge that can map sensors to realworld entities and output of raw sensor to highlevel states . 
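as a minimal illustration of this requirement, the sketch below maps raw readings attached to a real-world entity onto a high-level state, along the lines of the snowfall example above; the thresholds, field names and entity label are illustrative placeholders rather than part of any particular system.

```python
# Sketch: mapping raw sensor readings to a high-level state.
# Thresholds, field names and the entity label are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Reading:
    entity: str              # real-world entity the sensors are attached to (placeholder label)
    temperature_c: float
    precipitation_mm: float

def high_level_state(r: Reading) -> str:
    # a rule a domain expert might supply; thresholds are illustrative
    if r.precipitation_mm > 0 and r.temperature_c <= 0:
        return "snowfall"
    if r.precipitation_mm > 0:
        return "rainfall"
    return "dry"

print(high_level_state(Reading("weather-station-3", temperature_c=-2.0, precipitation_mm=1.4)))
```

in a full smart iot system such rules would of course be shared, reused and executed by a reasoning layer rather than hard-coded, as discussed below.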
to easily develop iot applications at a large scale with little or no human intervention ,a smart iot system should leverage semantic web properties , and follow standards .the web of knowledge also plays a relevant role , by defining the rules and mechanisms to associate information in order to produce knowledge .see `` smart iot : iot as a human agent , human extension , and human complement '' in for the first definition of smart iot , which highlights the challenge of interoperating and integrating the data and information .we are envisioning a smart iot system that enables good decision making and actions .figure [ fig : architecture ] shows an architecture overview of the system inspired by .the architecture largely divided into three layers by their functions : an example of a smart iot application is represented below in figure [ fig : app ] taken from the first column in this series ( from data to decisions and actions : climbing the data , information , knowledge , and wisdom ( dikw ) ladder ) .the lowest level shows `` 150 '' which is a blood pressure reading ( sensor / device data ) .the next level shows labeled ( semantically annotated ) data or information .the third level represents knowledge that is based on the latest nih guidance used by clinicians , this information represents a medical condition of `` elevated blood pressure '' . andyet this knowledge is not actionable the clinician needs to decide whether this is due to hyperthyroidism or hypertension , which is needed before a proper medication can be prescribed .in today s internet of things landscape the cyber , virtual and physical worlds are largely disconnected , requiring a lot of manual efforts to integrate , find , and use information in a meaningful way .to realize the application as we discussed above , we are envisioning a smart iot system that enables good decision making and actions .figure [ fig : architecture ] shows an architecture overview of the system inspired by .the architecture largely divided into three layers by their functions : 1 . * accessing things ( physical ) * :this layer is responsible for turning a device such that an application can interact with it .the gateways use devicespecific protocols to retrieve data produced by resourceconstrained devices .the gateways add semantics to data to unify them , by using semantic web languages ( such as rdf , rdfs , owl ) and domain ontologies .* deducing new knowledge ( virtualization ) * : the second layer is dedicated to frameworks managing unified data available in standard formats produced by the physical layer .it mainly infers high level knowledge using reasoning engines performed on data and by exploiting the web of knowledge available online .such enriched data is provided to the cyber layer to build smart systems , applications and services .* composing services ( cyber ) * : this layer facilitates developers so that they can build largescale and meaningful iot applications on top of the virtualization layer .the goal of this level is to drastically reduce the iot application development , thus enabling rapid prototyping and encourage interoperability of services . to date , to the best of our knowledge, no concrete and robust technical approaches have been designed to build semantic interoperability for iot yet .recently , some semantic interoperability approaches applied to iot are being designed .an iot stack to ensure interoperability has been designed in . 
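to make the first of these layers concrete, the following sketch shows a gateway-side step that wraps a raw device reading into rdf, in the spirit of the semantic enrichment described above; it assumes the python rdflib library, and the sosa/ssn terms and example namespace used here are an illustrative choice rather than a prescribed vocabulary.

```python
# Sketch of the "accessing things" layer: a gateway annotating a raw device
# reading with RDF. Namespaces, sensor id and observed property are placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")   # W3C SSN/SOSA observation vocabulary
EX = Namespace("http://example.org/iot/")        # placeholder application namespace

def annotate_reading(sensor_id, property_name, value, unit):
    """Wrap a raw device reading into an RDF observation graph."""
    g = Graph()
    g.bind("sosa", SOSA)
    g.bind("ex", EX)
    obs = EX[f"observation/{sensor_id}"]
    g.add((obs, RDF.type, SOSA.Observation))
    g.add((obs, SOSA.madeBySensor, EX[sensor_id]))
    g.add((obs, SOSA.observedProperty, EX[property_name]))
    g.add((obs, SOSA.hasSimpleResult, Literal(value, datatype=XSD.double)))
    g.add((obs, EX.unit, Literal(unit)))
    return g.serialize(format="turtle")

# e.g. a temperature value received from a constrained device over coap/mqtt:
print(annotate_reading("thermometer-42", "temperature", 21.5, "Cel"))
```

the serialized observation can then be exposed to the upper layers through whatever messaging interface the gateway offers (rest, publish/subscribe, and so on).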
to define such architecture , semantic interoperabilityshould be provided as explained in where it is introduced the idea of an effective approach to bring together metadata , information modeling abstractions and ontologies , as well as the model application domain .hereafter , we describe the above three layers , mapping into an example with details that assist developers to design and build smart iot applications .the first layer is responsible for turning a device into such that an application can interact with it . the most straightforward way of accessing a device is to expose it and its services directly through apis .this is applied when a device can support http / web service ( ws ) and tcp / ip and can host an http server . however , integrating resourceconstrained devices into the internet is difficult because internet protocols such as http , tcp / ip are too complex and resourcedemanding . to achieve the integration , typically a gateway node is required . to provide interoperability, we can implement necessary technologies at the resource sufficient gateway node .desai et al . have proposed the concept of `` semantic gateway as service '' ( sgs),shown in figure [ fig : gateway ] , that can act as a bridge between resourceconstrained devices and iot application services . here, the iot application services typically collect data from the various gateway nodes and provide user or event specific services using graphics interfaces , notifications or applications .the sgs architecture broadly provides three functionalities : first , it connects external sink device to the gateway component that supports different protocols such as mqtt , xmpp or coap .second , externally the gateway interfaces cloud services or other sgss via different protocols such as rest or publish / subscribe .third , it annotes data acquired from the sink nodes using w3c semantic sensor networks ( ssn ) and domainspecific ontologies before forwarding data to the gateway interface service .the main benefit is that semantic annotation of sensor data by utilizing a standard mechanism and vocabulary can provide interoperability between iot vertical silos .semantic web community has created and optimized standard ontologies for sensor observation , description , discovery and services . by integrating these annotated data and providing semanticweb enabled messaging interface , a third party service can convert heterogeneous sensor observations to higher level abstractions .a gateway device such as sgs , discussed above , separates physical level implementation of device to iot application services .it provides endpoint to iot application services using a resource interface via rest and publish / subscribe mechanism .the semantic annotation of the sensor data obtained from the gateway assists the iot services to implement analysis and reasoning algorithms , described in the next layer .the second layer is dedicated to frameworks managing data and deducing new knowledge . in iot ,most of the time , raw data is just a number ( e.g. , 38 ) .humans implicitly know that data is associated to a specific unit ( e.g. , degree celsius ) and a specific sensor ( e.g. , thermometer ) .smart iot systems need to interconnect data produced by various sensors to understand the meaning of data to automatically take decisions or to provide suggestions . dealing with interoperability of heterogeneous datais required to build smarter iot systems .data is stored in different files ( e.g. 
, csv , excel ) and structured with different models ( e.g. , ontology , schemas ) . to deal with heterogeneous data ,semantic web technologies bring several benefits : ( 1 ) unify data , ( 2 ) link iot data to external knowledge bases , ( 3 ) explicitly add metadata ( i.e. , semantic enrichment / enhancement ) , and ( 4 ) deduce new knowledge .semantic web technologies enable interconnecting knowledge graphs . by interconnecting iot data with such knowledge graphs ( datasets and ontologies used to structure the datasets ), iot systems are becoming smarter .one current form of knowledge graphs that is widely available and useful is `` linked open data '' ( lod ) .major internet companies such as facebook and google are buildding private knowledge graphs based on schema.org ( agreement on common schemas to structure data ) that is widely adopted by search engines , in addition to components of lod such as dbpedia , wikipedia and/or wikidata .such knowledge graphs are build to get access to the information requested more easily and in a automatic way .companies have chosen to use different representation for semantic data , that include unlabeled and labeled graphs , w3c ratified semantic web languages such as rdf and rdfs to explicitly describe the data , and owl , a language to describe ontologies / vocabularies .such semantic technologies provide a basis to later infer high level abstractions from sensor data .connecting unified semanticsenriched iot data to the knowledge bases available on the web has a huge potential to build smart systems .for instance , by connecting health measurements to healthcare knowledge bases enable interpreting the raw data itself .for instance , from a body temperature data and by reusing knowledge bases on the web , fever symptom can be deduced .interconnecting better such knowledge bases , particularly in smart iot is required .currently , open ontology catalogues and dataset catalogues are not interconnecting with each other .new systems are required to link ontologies and datasets , along with the methods to deduce new knowledge from structured data .taking inspiration from `` linked open data '' and `` linked open vocabularies '' ( lov ) initiatives , `` linked open reasoning '' should be designed and aligned to such initiatives .different processes and steps are required for combining data from heterogeneous sources and for building innovative and interoperable applications .figure [ fig : seg ] introduces the seg 3.0 methodology that seeks to meet these requirements and comprises the following steps : ( 1 ) composing , ( 2 ) modeling , ( 3 ) linking , ( 4 ) reasoning , ( 5 ) querying , ( 6 ) services , and ( 7 ) composition of services .the seg 3.0 methodology encourages the vision to enhance semantic interoperability from data to enduser applications , which is inspired from the sharing and reusing based approach as depicted in figure [ fig : seg].the seg 3.0 methodology comprises : * * linked open data ( lod ) * is an approach to share and reuse data .previous work regarding ` linked sensor data ' do not provide any tools for visualizing or navigating through iot datasets . for this reason , we envision the design of linked open data cloud for internet of things ( cloudiot ) to share , browse and reuse data produced by sensors . ** linked open vocabularies ( lov ) * is an approach to share and reuse the models / vocabularies / ontologies [ vandenbussche et al .2015 ] . 
to ensure reusability and high quality ontologies , lov did not reference any ontologies when they do not follow the best practices . due to this requirement , almost all ontologies for iot and relevantdomain ontologies were not referenced by lov since iot community does not yet know the best practices . to overcome this limitation ,the linked open vocabularies for internet of things ( lov4iot ) has been designed , a dataset of almost 300 ontologybased iot projects referencing and classifying : ( 1 ) iot applicative domains , ( 2 ) sensors used , ( 3 ) ontology status ( e.g. , shared online , best practices followed ) , ( 4 ) reasoning used to infer high level knowledge , and ( 5 ) research articles related to the project .this dataset contains a background knowledge required to add value to the data produced by devices . ** linked open reasoning ( lor ) * is an innovative approach to share and reuse the way to interpret data to deduce new information ( e.g. , machine learning algorithm used , reusing rules already designed by domain experts ) .sensorbased linked open rules ( slor ) is a dataset of interoperable rules ( e.g. , if then else rules ) used to interpret data produced by sensor data .such rules are executed with an inference engine which updates the triple store with additional triples .for example , the execution of the rule `` if the body temperature is greater than 38 degree celsius than fever '' updates the triple store with the high level knowledge fever. slor is inspired from the idea of linked rules which provides a language to interchange semantic rules but not the idea of reusing existing rules . * * linked open services ( los ) * is an approach to share and reuse the services / applications .composition of services is required to build complex applications .services can be implemented according to restful principles or with the help of semantic web technologies to enhance interoperability ( e.g. , owls ) .this approach could be extended for designing a set of interoperable services .sharing and reusing data is insufficient .the entire chain from linked open data ( lod ) to linked open services ( los ) should be shared and reused to enhance interoperability and get meaningful knowledge from data . having this vision in mind , the models , the reasoning and the services associated to the data would be interoperable with each other .this entire chain , called seg 3.0 methodology , has been implemented within the m3 framework and extended within the fiesta iot eu platform .m3 enables fast prototyping of iot applications using semantic web technologies to semantically annotate sensor data , deduce new knowledge and combine iot applicative domains .m3 is a semantic engine mainly focused on data interoperability and could be used in other eu projects such as fiestaiot , openiot and vital .fiestaiot aims to achieve interoperability of data , testbeds and experiments by using semantic web technologies .one of the component of the fiestaiot project , called `` experiment as a service '' demonstrates the proof of concept of the `` linked open services '' approach .figure [ fig : seg ] highlights an endtoend scenario from raw value ( e.g. , 38 ) to the final application ( e.g. 
, 38) to the final application (e.g., naturopathy to suggest home remedies when fever is detected). such applications can be developed through the use of the m3 framework. developers can build large-scale and meaningful iot applications and services on top of devices and iot data. the goal of this level is to drastically reduce the iot application development effort, thus enabling rapid prototyping. this layer also gets closer to end users (or domain experts with limited programming expertise) and enables them to create intelligent applications on top of smart things. in the following, we describe application development approaches for building iot applications. * general-purpose programming. * currently, development of iot is performed at the node level, by experts of embedded and distributed systems, who are directly concerned with the operation of each individual device. for example, developers use general-purpose programming languages (such as javascript, c, java, android, python) and target a particular middleware api or node-level service to communicate data. the key advantage of this approach is that it allows the development of efficient systems based on complete control over individual devices. however, it is unwieldy for iot applications due to the heterogeneity of systems. * macroprogramming. * it provides abstractions to specify high-level collaborative behaviors, while hiding low-level details such as message passing or state maintenance from stakeholders. a classic example of macroprogramming is node-red. it is a visual tool for wiring together hardware devices, apis, and online services. it provides a browser-based environment for creating event-driven applications, bridging between physical and cyber services. it contains nodes that can be dragged and dropped into an editor. each node offers different functionality, ranging from simple debugging to accessing sensors via gateways (e.g., raspberry pi). macroprogramming is a viable approach compared to general-purpose programming. however, it largely lacks a proper software development methodology (e.g., modular design, separation of concerns), which results in designs that are difficult to reuse and platform-dependent. * cloud-based platforms. * to reduce development effort, cloud-based platforms provide apis implementing common functionality of iot applications, such as sending and storing data to the cloud for visualization. moreover, these platforms provide textual and visual programming environments running on the cloud to write custom application logic. they provide abstractions to specify high-level collaborative behaviours while hiding low-level details such as message passing. an example of the cloud-based approach is the ibm internet of things foundation. it is a fully managed and cloud-hosted service that makes it simple to derive value from physical devices.
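independently of the specific platform, the device-to-cloud ingestion step of such cloud-based services typically reduces to publishing a small payload over a messaging protocol such as mqtt. the sketch below uses the python paho-mqtt client; the broker address, topic, credentials and payload layout are placeholders, and the exact client constructor and connection details depend on the platform and on the library version.

```python
# Sketch: publishing a sensor reading to a cloud broker over MQTT.
# Broker address, topic and payload layout are placeholders.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()   # note: paho-mqtt >= 2.0 expects mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
# client.username_pw_set("device-id", "auth-token")   # if the platform requires credentials
client.connect("broker.example.org", 1883)            # placeholder broker address
client.loop_start()

payload = json.dumps({"d": {"temperature": 21.5, "ts": time.time()}})
client.publish("iot/sensors/temperature", payload, qos=1)   # placeholder topic

client.loop_stop()
client.disconnect()
```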
using abstractions ( it is called as recipes ), developers can connect devices to the internet , send sensing data securely to the cloud using the open and lightweight mqtt messaging protocol .from there , developer can leverage various cloudbased services such as dashboard services to visualize and derive insight from the collected data , storage services to store data for historical purpose .cloudbased platform is a viable approach compared to the generalpurpose programming languages .it reduces development efforts by providing cloudbased apis to implement common functionality .second advantage is that because application logic is centrally located , this approach offers the ease deployment and evolution .however , this approach sacrifices direct nodetonode communication .this characteristics restricts developers interms of functionality such as innetwork aggregation or direct nodetonode communication locally .third , application logic largely runs on a central cloud , thus an application relies on the availability of cloud provider .so , it may not be suitable for some critical applications .* model driven development ( mdd ) . * macroprogramming and cloudbased approach reduce the application development effort .however , they lead to a platformdependent design . to address this issue ,mdd approaches have been proposed .it applies the basic separation of concerns principle both vertically and horizontally .vertical separation principle reduces the application development complexity by separating the specification platform independent model ( pim ) of the system functionality from its platform platform specific model ( psm ) such as programming languages. horizontal separation principle reduces the development complexity by describing a system using different system views , each view describing a certain facet of the system .an example of mdd approach is iotsuite that aims to make iot application development easy for developers .it achieves this aim by integrating a set of highlevel languages to specify an iot application .it provides automation techniques to parse the specifications written using these highlevel languages and generate platformspecific code .the iotsuite integrates three highlevel languages that abstract platformspecific complexity ( i.e. , horizontal separation of concerns ) : ( 1 ) domain language to describe domainspecific features of an iot application , ( 2 ) architecture language to describe applicationspecific functionality of an iot application , ( 3 ) deployment language to describe deploymentspecific features consisting information about a physical environment where devices are deployed .the iotsuite is supported by automation techniques such as codegenerator that generates platformspecific code by parsing the specification written using the supported highlevel programming languages ( i.e. 
, vertical separation of concern ) .ensuring semantic interoperability within iot is really challenging since physical , virtual and cyber layers deal with heterogeneity of hardware devices , protocols , data and reasoning mechanisms to infer high level knowledge .the seg 3.0 methodology explained above is mainly focused on data interoperability and is a first step towards building interoperable enduser iot applications and services .as discussed in the first column in this series ( see previous issue ) , iot deployment is experimenting a fast adoption because of the foreseen positive impact to change all the aspects of our lives .this is accompanied by corresponding variety or heterogeneity for all aspects of iot ecosystem , including data , communication and application development frameworks .the vision of smart iot , also discussed in the previous column , envisages hiding this heterogeneity and corresponding complexity , while enabling development of applications based on intelligent real - time processing of data produced variety of sensors , along with relevant knowledge .semantic methods and semantic web standards are key enablers of the three requisite layers of a smart iot system : accessing things ( iots ) , understanding iot data and deducing new knowledge using background / domain knowledge and structured data , and developing composing services .this article ends with a review of four approaches to application development in a smart iot ecosystem .s. chauhan , p. patel , f. c. delicato , and s. chaudhary .a development framework for programming cyber - physical systems . in _ proceedings of the 2nd international workshop on software engineering for smart cyber - physical systems _ , sescps 16 , pages 4753 , new york , ny , usa , 2016 .s. chauhan , p. patel , a. sureka , f. c. delicato , and s. chaudhary .demonstration abstract : iotsuite - a framework to design , implement , and deploy iot applications . in _2016 15th acm / ieee international conference on information processing in sensor networks ( ipsn ) _ ,pages 12 , april 2016 .m. compton , p. barnaghi , l. bermudez , r. garca - castro , o. corcho , s. cox , j. graybeal , m. hauswirth , c. henson , a. herzog , et al .the ssn ontology of the w3c semantic sensor network incubator group . , 17:2532 , 2012 .a. gyrard .a machine - to - machine architecture to merge semantic sensor measurements . in _ proceedings of the 22nd international conference on world wide web _ , www 13 companion , pages 371376 , new york , ny , usa , 2013 .a. gyrard , c. bonnet , and k. boudaoud .helping iot application developers with sensor - based linked open rules . in _ proceedings of the seventh international workshop on semantic sensor networks in conjunction with the 13th international semantic web conference ( iswc14 ) ,riva del garda , italy _ , pages 1923 , 2014 .a. gyrard , c. bonnet , k. boudaoud , and m. serrano .assisting iot projects and developers in designing interoperable semantic web of things applications . in _ 2015 ieee international conference on data science and data intensive systems _, pages 659666 , dec 2015 .a. gyrard , s. k. datta , c. bonnet , and k. boudaoud .cross - domain internet of things application development : m3 framework and evaluation . in_ future internet of things and cloud ( ficloud ) , 2015 3rd international conference on _ , pages 916 .ieee , 2015 .a. gyrard and m. serrano .connected smart cities : interoperability with seg 3.0 for the internet of things . 
in _2016 30th international conference on advanced information networking and applications workshops ( waina ) _ , pages 796802 , march 2016 .d. pfisterer , k. romer , d. bimschas , o. kleine , r. mietz , c. truong , h. hasemann , a. kroller , m. pagel , m. hauswirth , m. karnstedt , m. leggieri , a. passant , and r. richardson .spitfire : toward a semantic web of things ., 49(11):4048 , november 2011 .m. serrano , h. n. m. quoc , m. hauswirth , w. wang , p. barnaghi , and p. cousin .open services for iot cloud applications in the future internet . in _ world of wireless , mobile and multimedia networks ( wowmom ) , 2013 ieee 14th international symposium and workshops on a _ , pages 16 .ieee , 2013 .m. serrano , h. n. m. quoc , d. le phuoc , m. hauswirth , j. soldatos , n. kefalakis , p. p.jayaraman , and a. zaslavsky . defining the stack for service delivery models and interoperability in the internet of things : a practical case with openiot - vdk . , 33(4):676689 , 2015 .d. soukaras , p. patel , h. song , and s. chaudhary . .in _ the 4th international workshop on computing and networking for internet of things ( comnet - iot ) , co - located with 16th international conference on distributed computing and networking ( icdcn ) _, page 6 , 2015 .
the internet of things ( iot ) is experiencing fast adoption in the society , from industrial to home applications . the number of deployed sensors and connected devices to the internet is changing our perspective and the way we understand the world . the development and generation of iot applications is just starting and they will modify our physical and virtual lives , from how we control remotely appliances at home to how we deal with insurance companies in order to start insurance schemes via smart cards . this massive deployment of iot devices represents a tremendous economic impact and at the same time offers multiple opportunities . however , the potential of iot is underexploited and day by day this gap between devices and useful applications is getting bigger . additionally , the physical and cyber worlds are largely disconnected , requiring a lot of manual efforts to integrate , find , and use information in a meaningful way . to build a connection between the physical and the virtual , we need a knowledge framework that allow bilateral understandings , devices producing data , information systems managing the data and applications transforming information into meaningful knowledge . the first column in this series in the previous issue of this magazine titled `` internet of things to smart iot through semantic , cognitive , and perceptual computing , '' reviews iot growth and potential that have energized research and technology development , centered on aspects of artificial intelligence to build future intelligent system . this column steps back and demonstrates the benefits of using semantic web technologies to get meaningful knowledge from sensor data to design smart systems .
in its native state a protein fluctuates around a configuration corresponding to the absolute minimum of its energy landscape .however , realistic energy landscapes are characterized by many competing minima .protein folding can therefore be viewed as the process of selecting the right minimum among many others .since proteins perform their function at finite temperatures , the entropy of their native states concurs in determining their stability even when folding is driven mainly by an enthalpic bias .when only one basin of the energy landscape is significantly visited , as it happens for folded proteins , entropy is mainly contributed by fluctuations around the minimum it is all _ vibrational _ entropy .native states are highly compact structures , characterized by the presence of secondary structure motifs , such as -helices and -sheets . in turn , the vibrational properties of a heterogeneous system are typically influenced by the presence of simmetries and by its degree of modularity .it is therefore interesting to investigate whether a correlation exists between the presence of and structures and vibrational entropy .previous work has shown that both -helices and -sheets are characterized by a larger flexibility than random - coil conformations . however , a systematic study quantifying the impact of secondary structure content on the vibrational properties of native protein conformations is lacking .it is the aim of this paper to introduce the simplest coarse - grained model able to quantify the correlation between vibrational dynamics of proteins and their structural organization at the secondary level . a sensible starting point for such analysisis represented by the class of coarse - grained network models , that map a given protein structure on a network of point - like aminoacids interacting trough hookean springs . in most implementations aminoacidsare taken to sit at the corresponding c sites and are assigned the same average mass of 120 da . in this framework ,a given native structure specifies by construction the topologies of inter - particles interactions , that is the networks of connectivities and equilibrium bonds .the patterns of low - frequency collective modes are uniquely dictated by the topology of the network of physical interactions that characterizes the native configuration . in this sense ,network models might be considered as the simplest tools to describe the low - frequency regime of protein dynamics . however , due to the coarse - graining element , such schemes are more questionable in the high - frequency part of the vibrational spectrum .this makes them unsuitable to compute vibrational entropies , which include nonlinear contributions from all modes , as it shows from its very definition where the last passage follows in the classical limit . as a consequence, one needs to incorporate further structural and dynamical details within the network description besides the sheer topology of connections in order to correctly compute vibrational entropies . more precisely , we are interested in capturing the structural and dynamical elements that characterize fluctuations at the scale of secondary structures .we identify three crucial features that must be taken into account . * a realistic hierarchy of force constants , reproducing the different strengths of bonds featured in a protein molecule .these comprise covalent bonds ( such as the peptide bond along the chain ) , and in particular the interactions stabilizing secondary structures .i.e. 
hydrogen bonds, besides long-range forces such as hydrophobic or screened electrostatic interactions. * the degrees of freedom corresponding to side chains, which play a fundamental role in the spatial organization of secondary motifs. * the appropriate aminoacid masses, correctly reproducing the true protein sequence. in fact, as shown in fig. [f:masses], residues of different mass show different α and β propensities; in particular, light residues tend to be less represented in secondary structure motifs. as we will show later, this analysis highlights that β-rich (and α-poor) native conformations tend to have a higher vibrational entropy per residue regardless of protein size and shape. this is particularly interesting with respect to protein aggregation, since protein aggregates are known to be rich in β-structures. in most cases, they share the same β-spine architecture characteristic of amyloid fibrils: an assembly of β-sheets perpendicular to the fibril axis. moreover, aggregation is a rather common phenomenon for all sorts of polypeptide chains regardless of their sequence. this suggests that the phenomenon should be governed by rather general laws, uniquely related to the dynamical properties of polypeptide chains in solution. in particular, in view of the rather slow time-scales characteristic of aggregation and in view of our results on the correlation between vibrational entropy and secondary structure, it is tempting to postulate a mechanism of thermodynamic origin that would favor the growth of structures rich in β-content under quite generic conditions. the paper is organized as follows. in the methods section we describe the database of native structures and the models used for our analysis. in the discussion section we discuss the features of vibrational spectra and the emergence of the observed correlation between vibrational entropy and secondary structure. finally, we comment on the biological relevance of our results, particularly for what concerns protein aggregation phenomena. it is our aim to conduct the most general analysis of the interplay between vibrational entropy and the content of secondary structure in a large database of protein structures. of course, we are bound to avoid repeatedly taking into account structures corresponding to homologous proteins. we therefore chose the pdbselect database, which was explicitly built to gather the largest number of protein scaffolds by keeping the structural homology between any two structures lower than 30%. in order to illustrate the amount of structural diversity displayed by proteins from the pdbselect database, we show in fig. [f:alpha-beta] both the α-helix and β-sheet content of each structure as computed by means of the dssp algorithm introduced by kabsch and sander. we see that the two measures roughly anti-correlate, their sum being gaussian-distributed around 50% with a standard deviation of about 15%. as a further piece of information, we report in the inset of fig. [f:alpha-beta] the statistical distribution of chain lengths for all sequences contained in the database, which is found to decay exponentially. additionally, we found no appreciable correlation between α- or β-content and other structural indicators such as chain length or accessible surface area.
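the per-structure bookkeeping described above can be reproduced with standard tools; the sketch below computes helix and sheet fractions from a dssp assignment, assuming biopython together with an installed dssp executable, with the input file name as a placeholder.

```python
# Sketch: per-structure helix/sheet content from a DSSP assignment.
# Assumes Biopython and an installed dssp executable; the file name is a placeholder.
from Bio.PDB import PDBParser
from Bio.PDB.DSSP import DSSP

HELIX_CODES = {"H", "G", "I"}   # dssp helix classes
SHEET_CODES = {"E", "B"}        # dssp strand/bridge classes

def alpha_beta_content(pdb_path):
    """Fraction of residues assigned to helix and to sheet by DSSP."""
    structure = PDBParser(QUIET=True).get_structure("s", pdb_path)
    model = structure[0]
    dssp = DSSP(model, pdb_path)                     # runs the external dssp program
    codes = [dssp[key][2] for key in dssp.keys()]    # secondary-structure letter per residue
    n = len(codes)
    alpha = sum(c in HELIX_CODES for c in codes) / n
    beta = sum(c in SHEET_CODES for c in codes) / n
    return alpha, beta

if __name__ == "__main__":
    a, b = alpha_beta_content("protein.pdb")         # placeholder file name
    print(f"helix: {a:.1%}  sheet: {b:.1%}  difference (sheet - helix): {b - a:+.1%}")
```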
on the contrary , although not surprisingly , the sum of and -content correlates positively with the number of hydrogen bonds ( hb ) , which is indeed the principal physical interaction stabilizing both kind of motifs . versus -content in the pdbselect databasethe inset shows the histogram of chain lengths in the database ( lin - log scale).,width=12 ] our starting point is a widely employed coarse - grained scheme , the anisotropic network model ( anm ) .originally proposed by tirion at the all - atom level , such model has been successively reconsidered by bahar and co - workers in the c approximation , with results in fair agreement with experiment . in the spirit of coarse - grained network models, aminoacids are modelled as spherical beads linked by hookean springs if they are close enough in the native structure , as specified through a pre - assigned cutoff distance .both anm and its scalar analogue , the gaussian network model ( gnm ) , proved very efficient in describing the low - frequency part of protein spectra , which is dominated by large - scale , collective motions of entire domains or other structural sub - units , as measured from x - ray crystallography or electron microscopy , let be the number of residues of a given structure and let denote the distance between the c of the -th and -th aminoacids in the pdb structure .then , the network of residue - residue interactions can be constructed by introducing a distance cutoff , and the corresponding _ connectivity _ matrix accordingly , the total potential energy of the c anm is a sum of harmonic potentials , here measures the instantaneous elongation of the bond and is the strength of all springs connecting interacting pairs .the anm model is known to describe accurately long - wavelength fluctuations , such as the concerted motions of subunits or entire domains , whereas some doubts can be cast on its predictive accuracy concerning aminoacid motions at a scale comparable to the characteristic dimension of a residue .however , we are interested in computing vibrational entropies , which depend on the whole spectrum .importantly , the contributions of high and low frequencies to the entropy are impossible to disentangle in an elementary manner , due to its strong nonlinearity . hence , in order to increase the spectral reliability of the c scheme and make it more accurate in the high - frequency domain , we introduce three additional features within the standard anm model . 1 . in the basic anm protocol , all springs share the same strength , in spite of the wide differences among the real forces governing residue - residue interactions in a real protein . hence , we introduce a _ hierarchy _ of spring constants , aimed at reproducing the strength of the most important interactions , that is covalent , hydrogen - bond and van der waals ( vw ) bonds . 
consequently , we modify the potential energy function ( [ e : anmpot ] ) as follows where , and are the connectivity matrices of the three distinct sub - networks comprising all aminoacids interacting via peptide bonds along the main chain , through hbs and vw interactions , respectively .the quantities , and are the corresponding spring constants , whose magnitude is taken by construction to span two orders of magnitude .we coin this scheme the hierarchical network model ( hnm ) .+ spring constants customarily used to model covalent bonding are in the range kcal / mol / .here we chose kcal / mol / .a second order expansion of the lennard - jones potentials used to describe hb in different all - atom force fields gives values of in the range kcal / mol / .we tried values of in the range with no appreciable difference in the distinctive features of the vibrational spectra .finally , taking into account that the young modulus of covalent solids is about times that of a van der waals solid , we assume .coarse - graining protein structure at the c level , one neglects all degrees of freedom associated with aminoacid side chains . however , these are known to be subject to different positional constraints in secondary structure motifs such as parallel and anti - parallel -sheets and -helices .moreover , the entropic contribution of side chains has been recently proved fundamental in determining the free energy of protein native states hence , we also consider a variant of the hnm model where we model side chains as additional beads , whose equilibrium position we fix at the side chain center of mass in the native structure .furthermore , we place a covalent bond between each bead - like side chain and its corresponding carbon . in particular , given their small size , we neglect side chains of glycines , which will be only represented by their carbon bead .we remark that this approach is utterly similar to many other unified residue models .usually , in the framework of protein network models , one assigns to each carbon bead the average aminoacid mass of 120 da . in the present study, we adopt the following convention : we split the mass of each aminoacid into a constant contribution of 56 da , that represents the atoms lying along the peptide bond , and assign a variable mass ranging from 35 da ( for alanine ) to 150 da ( for tryptophan ) to the bead representing the side chain . in the following ,we will present results for an _ hybrid _ hnm model , whereby the above three features shall be switched on and off in the calculation of vibrational entropies .our aim is to understand how the corresponding three physical properties affect vibrational spectra in general and vibrational entropy in particular .it is our aim to explore the correlation of vibrational entropy with secondary structure content across the whole database .more precisely , the _ fil rouge _ of our analysis is to check whether entropic contributions might favor the formation of -sheets at high temperatures , thus in turn disfavoring -helices . in view of the positive correlation existing between the and contents , ( see again fig . [f : alpha - beta ] ) we shall consider in the following the correlation of entropy with a global structural indicator defined as ( -content -content ) , which we shall refer to as -preference . 
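before turning to these correlations, the quantities involved can be made concrete with a minimal numerical sketch: it assembles an anisotropic-network hessian with a simple three-level hierarchy of spring constants and evaluates the classical vibrational entropy from the nonzero modes. the coordinates, masses, bond classification and all constants below are illustrative placeholders, not the parameters used in this work; switching individual interaction classes on and off, as done in the following, amounts to zeroing the corresponding constants in the pair classification.

```python
# Sketch: ANM/HNM-style mass-weighted Hessian and classical vibrational entropy.
# Coordinates, masses, bond lists and constants are illustrative placeholders.
import numpy as np

def build_hessian(coords, masses, k_of_pair, cutoff=10.0):
    """coords: (N, 3) array; masses: (N,) array; k_of_pair(i, j, d) -> spring constant (0 = no bond)."""
    n = len(coords)
    hess = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            rij = coords[j] - coords[i]
            d = np.linalg.norm(rij)
            k = k_of_pair(i, j, d) if d <= cutoff else 0.0
            if k == 0.0:
                continue
            block = -(k / d**2) * np.outer(rij, rij)     # anm off-diagonal 3x3 block
            hess[3*i:3*i+3, 3*j:3*j+3] += block
            hess[3*j:3*j+3, 3*i:3*i+3] += block
            hess[3*i:3*i+3, 3*i:3*i+3] -= block          # diagonal blocks enforce translational invariance
            hess[3*j:3*j+3, 3*j:3*j+3] -= block
    w = np.repeat(1.0 / np.sqrt(masses), 3)              # mass-weighting: eigenvalues become squared frequencies
    return hess * np.outer(w, w)

def vibrational_entropy(hessian, kT=1.0, hbar=1.0, tol=1e-8):
    """Classical-limit harmonic entropy summed over nonzero modes (arbitrary units)."""
    evals = np.linalg.eigvalsh(hessian)
    omegas = np.sqrt(evals[evals > tol])                 # discard rigid-body (zero-frequency) modes
    return float(np.sum(1.0 - np.log(hbar * omegas / kT)))

def make_k_of_pair(covalent_pairs, hbond_pairs, k_cov=100.0, k_hb=10.0, k_vw=1.0):
    """Three-level hierarchy of spring constants; pairs are given as (i, j) with i < j."""
    def k_of_pair(i, j, d):
        if (i, j) in covalent_pairs:
            return k_cov
        if (i, j) in hbond_pairs:
            return k_hb
        return k_vw                                      # every other pair within the cutoff
    return k_of_pair
```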
positive correlation of this quantity with entropy leads to a negative correlation with free energy at high temperatures , that will in turn favor the formation of -like structures .conversely , a negative correlation would signal a preference towards helices .importantly , in order to compare proteins of different lengths , we shall always consider _ intensive _ observables , namely entropies per residue . a first analysis of the vibrational entropy of the pdbselect structures performed by means of a standard anm model yields somewhat unintuitive results .the intensive vibrational entropy does not correlate with the -helical content ( correlation coefficient 0.14 ) and anti - correlates very weakly with the -sheet content ( correlation coefficient 0.34 ) .this result , implying that -sheet are mildly stiffer than -helices , is manifestly at odds with the well documented rigidity of -helices .we conclude that the anm model , although reliable at the low - frequency end of the vibrational spectra does not include a sufficient amount of detail to realistically reproduce full vibrational entropies . as a first trial towards increasing the level of detail , we consider the bare hnm model , that is we introduce a three - level hierarchy of spring strengths in the framework of the standard anm scheme as from point ( i ) above .moreover , we shall switch on the different force constants one at a time first , in order to investigate the effects of the different physical interactions upon vibrational entropies .a first observation is that a roughly ten - fold separation in the intensity of force strengths is enough to induce quasi - additivity of the vibrational spectrum . in fig .[ f : densities ] , we show how the vibrational spectrum of sequences ` 1kfm ` ( a purely -helical protein ) , ` 1qj9 ` ( a -barrel ) and ` 1lsx ` ( 36% and 36% ) change upon sequentially switching on the three different interactions . ) , hydrogen bonds ( middle row , ) and van der waals ( bottom row ) interactions for an -helical ( panels a , b , c pdb code ` 1kfm ` ) , a -barrel ( panels d , e , f pdb code ` 1qj9 ` ) and a structure with equal and content ( panels g , h , i pdb code ` 1lsx ` ) .parameters are : , kcal / mol / , kcal / mol / and kcal / mol / .,width=12 ] as a general remark , we observe that , for all types of structure , the bare covalent chain has only modes with non - zero frequency .more in detail , we see that the structure with dominant -helical character has a very distinctive peak at high frequency. moreover , such feature shows to be robust with respect to the nature of residue - residue interactions .a peak is also seen to emerge in the structure as soon as the connectivity matrix starts having its off - diagonal regions populated due to hbs and vw interactions .more quantitatively , we found that the high - frequency part of the spectra across all the pdbselect could be well reproduced by a linear combination of two normalized step functions and , with supports lying , respectively , in the ] intervals as shown in fig .[ f : fitting ] .the two resulting coefficients and sum up to unity and might be regarded as a spectral estimate of the actual and contents . at a closer inspection , it turns out that only is a good approximation of the actual -content of the protein ( see fig .[ f : formfactor - alpha ] ) , while is observed to be large not only when the protein has an high beta content but also when it has no pronounced secondary structural features at all . 
in other words , while the -peak is a clear distinctive feature of -helices , -sheets are rather spectrally indistinguishable from unstructured regions ( see the appendix for a quantitative explication of the -peak ) .-content by fitting the high - frequency portion of the spectrum in a typical case ( mixed structure ` 1lsx ` ) .the two representative functions ( -peak ) and ( -band ) are combined linearly in the step function , which is least - square fitted to the high - frequency portion of the spectrum .parameters as in fig .[ f : densities].,width=12 ] -content as determined from the spectra , , and the actual -content as measured through the dssp protocol for all structures from the pdbselect .parameters as in fig .[ f : densities].,width=12 ] if we now turn the hbs on , vibrational spectra acquire a varying amount of new vibrational modes ( between 5% and 30% ) , on average 18% of the total number of degrees of freedom on the entire pdb select . perhaps not surprisingly , the quantity of such new modes turns out to be linearly correlated with the number of hbs ( see fig .[ f : percentage - newmodes ] ) .furthermore , we see that the appearance of the new hb part of the vibrational spectrum only slightly modifies the spectral density at higher frequencies ( compare panels b and c , e and f or h and i in fig .[ f : densities ] ) . finally , adding the lennard - jones contribution , all the remaining modes are filled , with the exception of the expected six zero - frequency modes corresponding to rigid translations and rotations .once again , due to the lower intensity of their driving interaction , the new lj modes occupy the low - frequency end of the spectrum and do not modify appreciably the spectral density at higher frequencies. .,width=12 ] the marked differences in the spectra of and proteins described so far through the hnm c model do not show up in the same clear - cut fashion if one examines how the vibrational entropy depends on the type of secondary structure of the proteins : the intensive entropy shows no correlation at all with -preference ( data not shown ) . at a deeper thought, this is scarcely surprising , since the only detectable spectral differences between proteins of different secondary organization have already seen to be located at high frequencies , which in turn results in a negligible contribution to vibrational entropy .what is more , that part of the spectrum is also the most sensitive to the structural details of the system and consequently the least reliably captured by a coarse - grained model . 
in order to test the solidity of our conjectures, we must then increase the level of descriptive accuracy of the model .we have devised two ways to accomplish this , by adding additional degrees of freedom representing the side chains or adding the full sequence of aminoacid masses .we found that both additional features independently produce the same effect , namely destroy the clear -peak versus -band picture illustrated above .however , the spectral additivity is preserved and , all the more significantly , a good correlation between intensive vibrational entropy and -preference appears , as can be clearly seen in fig .[ f : correlationssidechain ] where the two observables are scatter - plotted for the full , weighted hnm model with side chains .with -preference in the weighted hnm model with side chains .parameters are : , kcal / mol / , kcal / mol / and kcal / mol / .,width=12 ] a systematic analysis reveals that such correlation is shaped by a complex interplay between the features that distinguish our model from the basic anm scheme . in table [ t : models ] the arousal of the correlation between entropy and -preference is summarized by reporting both the correlation coefficient and the slope of the first order least - square fit to the data .the two best correlations achieved are reported in bold .the best correlation is obtained when all levels of detail are present . in order to quantify their different contribution , it is useful to compute the average correlation coefficients over the four instances where a single feature is always present .for example , the average correlation coefficient is only 0.27 when side - chain coordinates are included ( last four rows of table [ t : models ] ) and 0.29 in the four cases where true masses are taken into account ( rows 2,4,6,8 ) . on the contrary, the average correlation coefficient of 0.39 obtained by sticking to realistic interaction strengths points to a crucial role of the protein dynamical heterogeneity in reproducing vibrational modes associated with different secondary structure motifs .overall , the same conclusions can be drawn by calculating the correlation drops resulting from the individual elimination of single features from the complete model ( last row in table [ t : models ] ) .in particular the importance of force hierarchy is confirmed ( a correlation drop of about 0.5 from rows 8 to 6 ) .moreover , this analysis reveals that considering equal masses also leads to the same correlation drop ( rows 8 to 7 ) . on the contrary, the elimination of side chains appears less traumatic with a limited loss of correlation ( rows 8 to 4 ) . in summary , concerning the interplay between secondary organization and vibrational properties , the three levels of detail introduced can be ranked as follows : the hierarchy of physical interactions proves to be the most important feature , followed by mass heterogeneity and , finally , inclusion of side - chain ( coarse - grained ) degrees - of - freedom ..summary of the effect of physical features of the hnm model on the correlation between vibrational entropy and -preference : feature on , feature off .the first row corresponds to the bare anm model .[ cols="^,^,^ , > , > " , ] [ t : models ] this correlation reflects a tendency of -rich architectures to host more low - frequency modes .this mechanism can better illustrated by switching off the weakest force constant in the complete model , that is shut down lj bonds . 
in this case the vibrational modes with non - zero frequency describe the oscillations of the network of interactions determined by hbs and covalent bonds alone , and represent all the oscillations with wavelength shorter than or equal to the typical size of secondary structure motifs .the ratio , with the number of non - zero frequency vibrational modes in the full mode might be regarded as the fraction of modes determined by the hb and covalent interactions .this assumption appears reasonable in view of the spectral quasi - additivity discussed above .more precisely , we expect that zero - frequency modes will be filled when recovering lj interactions , without significantly affecting the high - frequency portion of the vibrational spectrum .[ f : fastmodes ] shows that correlates negatively with the -preference .thus , protein structures with large -preference host fewer high - frequency modes and accordingly more low - frequency ones .this , in turn , leads to larger vibrational entropies for -rich structures .more explicitly , we have shown that eliminating all contacts but those contributed by secondary structure motifs a clear separation of and -like vibrational spectra emerges .this is further illustrated by switching off both van der waals interactions and hbs ( data not shown ) . in this casewe no longer observe a correlation between and -preference .thus , chain topology alone does not introduce any clear spectral signature of secondary structure organization ., the fraction of modes determined by covalent and hydrogen bonds , and -preference over the pdbselect database using the full hnm model with side chains and proper mass sequences .the correlation coefficient is 0.44 .parameters are : , kcal / mol / and kcal / mol / and kcal / mol / .,width=12 ]in this work we have scrutinized a wide database of non - homologous protein structures , with the aim of assessing to what extent vibrational entropy is a sensitive measure of organization at the secondary structure level . to this end , we have introduced a minimally featured coarse - grained model , that we have coined hierarchical network model ( hnm ) . at variance with current , stat - of - the - art schemes ,our recipe ( i ) includes a hierarchy of physical interactions separating the strongest , short - range forces from the weaker , long - range ones , ( ii ) expands the number of degrees of freedom with the inclusion of side - chain coordinates and ( iii ) accounts for the appropriate aminoacid masses .a thorough analysis of all the structures in the data set has allowed us to unveil and quantify the correlation existing between the vibrational entropy of native folds and their specific secondary structure arrangement .more precisely , we found that _ all _ the three above - listed features of the hnm are essential in order to spotlight and quantify such correlation .remarkably , the statistical significance associated with the mixed spectral - structural signature attains its maximum value only for the full - featured scheme. of remarkable interest is the special role played by the requirement of force heterogeneity .in fact , we have proved that the presence of a realistic hierarchy of force constants accounts for the largest contribution to the observed correlation . 
as a final observation ,it is interesting to discuss our results in the perspective of the widespread , yet still much debated phenomenon of protein aggregation .more precisely , concerning what physical effect could be held responsible for the overwhelming preference of mature peptidic aggregates .the lack of clear signatures of the aggregation propensity at the sequence level suggests that the phenomenon should be governed by rather general laws , uniquely related to the dynamical properties of poly - peptide chains in solution . in particular , in view of the rather slow time - scales characteristic of aggregation, it is reasonable to postulate a mechanism of thermodynamic origin , that would favor the growth of structures rich in -content under quite generic conditions .of course , forces of enthalpic origin represent the strongest interactions along typical aggregation pathways .however , there is no reason a - priori for the strongest forces to also encode the observed bias toward -rich structures .in fact , a composite structure developing from the aggregation of poly - peptides would likely show a tendency toward non - specific organization at the secondary structure level , thus realizing _ minimal frustration _architectures . in view of the above facts ,it is tempting to rationalize the aggregation pathway as _ driven _ by strong , non - specific forces but _ biased _ toward high content of -type motifs by contributions to the total thermodynamic force that are weaker in magnitude but favor -rich architectures due to their higher density of vibrational modes at low frequencies .hence , based on the results reported in this paper , one may speculate that the existing bias toward -rich mature aggregation products be provided by a free - energy gain in vibrational entropy . following this speculation ,the bulk of the free - energy changes occurring during aggregation would be mainly determined by increase of residue - residue contacts and decrease of solvent - exposed surface , while even tiny differences in vibrational entropy could be able to steer an aggregating system toward a -rich configuration .in order to rationalize the emergence of the -peak in the hnm - c model , we diagonalize the hessian matrix around a class of regular chain configurations that include both -helices and beta sheets : those characterized by fixed values of both the bond and dihedral angle .the configurations of a linear polymer composed of consecutive residues can be described by a vector or , alternatively , once is known , by means of the three vectors with .let us now define a configuration of known dihedral angle according to the following three rules , ( bond angle must be constant ) and ( dihedral angle must be constant ) .once , and are known , can be found solving the system of two linear equations [ bond ] and [ dihedral ] and than by imposing the constraint [ r ] . in the simplest case , , the shortest chain on which a dihedral angle can be defined, one can easily show that the eigenvalues of the hessian of the potential are nine null ones plus in other words they only depend on the bond angle and they tend to peak at for .this last feature nicely explains the -peak and the -band , being the average c-c bond angle in an -helix just slightly lower than , while it is around for a -sheet .
in this paper we analyze the vibrational spectra of a large ensemble of non - homologous protein structures by means of a novel tool , that we coin the hierarchical network model ( hnm ) . our coarse - grained scheme accounts for the intrinsic heterogeneity of force constants displayed by protein arrangements and also incorporates side - chain degrees of freedom . our analysis shows that vibrational entropy per unit residue correlates with the content of secondary structure . furthermore , we assess the individual contribution to vibrational entropy of the novel features of our scheme as compared with the predictions of state - of - the - art network models . this analysis highlights the importance of properly accounting for the intrinsic hierarchy in force strengths typical of the different atomic bonds that build up and stabilize protein scaffolds . finally , we discuss possible implications of our findings in the context of protein aggregation phenomena . * author summary * the intricate structure / dynamics / function relations displayed by proteins are at the core of life itself . yet , a thorough and general understanding of such interplay still remains a formidable task , at the frontier of physics and biology . proteins perform their functions while fluctuating at room temperature about their native folds . as a consequence , the entropic contribution to their free energy landscapes , i.e. vibrational entropy , constitutes a crucial element to decipher the dynamical bases of protein functions . in this study , we examine a whole database of highly - non - homologous protein structures in the effort of rationalizing the entropic contributions of the distinct secondary structure motifs in protein scaffolds . with the help of a novel coarse - grained model , we measure a significant correlation between secondary structure content and vibrational entropy , thus shedding light into the structural roots of protein flexibility . finally , we discuss our findings in the context of the unsolved problem of protein aggregation .
estimating the direction of arrival ( doa ) of multiple signals impinging on an array of sensors from observation of a finite number of array snapshots has been extensively studied in the literature .maximum likelihood estimators ( mle ) and cramr - rao bounds ( crb ) , derived under the assumption of additive white gaussian noise , and either for the so - called conditional or unconditional model , serve as references to which newly developed doa estimators have been systematically compared . in many instances however , additive noise is usually colored and , consequently , the problem of doa estimation in spatially correlated noise fields has been studied , see e.g. , . when the spatial covariance matrix of this additive noise is known a priori , maximum likelihood estimators and cramr - rao bounds are changing in a straightforward way with whitening operations .the new statistical problem appears when the covariance matrix of the additive noise is not known a priori and information about this matrix is substituted by a number of independent and identically distributed ( i.i.d . )training samples , that form the so - called secondary training sample data set . in many casesone can assume that the statistical properties of the training noise data are the same as per noise data within the primary training set data : such conditions are usually referred to as the supervised training conditions .therefore , under these conditions , one has two sets of measurements , one primary set which contains signals of interest ( soi ) and noise , and a second set ( secondary training set ) which contains noise only . examples of this problem formulation are numerous in the area of passive location and direction finding . for instance , in the so - called over - sampled 2d hf antenna arrays , ionospherically propagated external noise is spatially non white , and some parts of hf spectrum ( distress signals for example ) with no signals may be used for external noise sampling . despite its relevance in many practical situations ,this problem has been relatively scarcely studied . for parametric description of the gaussian noise covariance matrix with the unknown parameter vector , in , the authors derive the cramr - rao bound for joint soi parameters ( doa ) and noise parameters estimation , assuming a conventional unconditional model , i.e. , and where stands for the complex gaussian distribution whose respective parameters are the mean , row covariance matrix and column covariance matrix . is the usual steering matrix with the vector of unknowns doa , denotes the waveforms covariance matrix and corresponds to the noise covariance matrix , which is parameterized by vector . in many cases however, the gaussian assumption for the predominant part of the noise can not be advocated .typical example is the hf external noise , heavily dominated by powerful lighting strikes . evidence of deviations from the gaussian assumption has been demonstrated numerous times for different applications , with the relevance of the compound - gaussian ( cg ) models being justified .in essence , the individual -variate snapshot of such a noise over the face of an antenna array may be treated as a gaussian random vector , whose power can randomly fluctuate from sample to sample .cg models belong to a larger class of distributions , namely multivariate elliptically contoured distributions ( ecd ) . 
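as a concrete companion to the two-data-set formulation above, the sketch below generates primary snapshots (signal of interest plus compound-gaussian noise) and secondary noise-only snapshots, using a gamma texture so that the noise is k-distributed; the half-wavelength uniform linear array, the gaussian source waveform, the ar(1)-type scatter matrix and all numerical values are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def steering(phi, m):
    # half-wavelength ULA steering vector (assumed array geometry)
    return np.exp(1j * np.pi * np.arange(m) * np.sin(phi))

def cn_rows(n, m, r_chol):
    # n i.i.d. rows distributed as CN(0, R), with R = r_chol @ r_chol^H
    g = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2)
    return g @ r_chol.conj().T

m, t_p, t_s = 8, 16, 32                       # sensors, primary and secondary snapshots
nu, beta = 0.8, 1.0 / 0.8                     # gamma texture: shape nu, scale beta (E[tau] = 1)
phi0 = np.radians(10.0)

r = 0.9 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))   # illustrative scatter matrix
r_chol = np.linalg.cholesky(r)

a = steering(phi0, m)
s = (rng.standard_normal(t_p) + 1j * rng.standard_normal(t_p)) / np.sqrt(2)   # source waveform

x_p = np.outer(s, a) + np.sqrt(rng.gamma(nu, beta, t_p))[:, None] * cn_rows(t_p, m, r_chol)
x_s = np.sqrt(rng.gamma(nu, beta, t_s))[:, None] * cn_rows(t_s, m, r_chol)

print("primary set  :", x_p.shape, "  secondary set:", x_s.shape)
print("mean noise power per sensor (secondary):", round(float(np.mean(np.abs(x_s) ** 2)), 3))
```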
for the sake of clarity , we briefly review the main definitions of ecd .a vector follows an ec distribution if it admits the following stochastic representation where means `` has the same distribution as '' . in, is a non - negative real random variable and is independent of the complex random vector which is uniformly distributed over the complex sphere .the matrix is such that where is the so - called scatter matrix , and we assume here that is non - singular . the probability density function ( p.d.f . ) of can then be written as where stands for proportional to .the function is called the density generator and satisfies finite moment condition .it is related to the p.d.f . of the modular variate by .going back to our scenario of two data sets and , we assume that they are independent , and that their columns are independent and distributed ( i.i.d . ) according to .in other words , one has and , where and are i.i.d .variables drawn from , and and are i.i.d .random vectors uniformly distributed on the unit sphere .it then follows that the joint distribution of is given by where and [ p(xp , xs ) ] where .additionally , we assume that depends on a parameter vector while depends on .our objective is then to estimate from .let us emphasize an essential difference of the problem in with respect to the typical problem of target detection in cg clutter .there , within each range resolution cell the clutter is perfectly gaussian and therefore the optimum space - time processing is the same as per the standard gaussian problem formulation .it is the data dependent threshold and clutter covariance matrix ( in adaptive formulation ) that needs to be calculated from the secondary data , if not known a priori . in the problem, the soi doa estimation should be performed on a number of ecd i.i.d .primary training samples , and maximum likelihood doa estimation algorithm and crb should be expected to be very different from the gaussian case .the paper is organized in the following way . in section [ section : crb ] , we derive a general expression of the fim for elliptically distributed noise using two data sets .section [ section : doak ] focuses on the case of doa estimation in -distributed noise . in section [ section : crbk ] , we derive conditions under which the fim is bounded / unbounded , and provide a sufficient condition for unboundedness of the fim with general elliptical distribution . the maximum likelihood estimate , as well as an approximation ,are derived in section [ section : mlek ] . in the same section ,we derive lower and upper bounds on the mean - square error of the mle for non - regular estimation conditions , i.e. , when the fisher information matrix is unbounded .numerical simulations serve to evaluate the performance of the estimators in section [ section : numerical ] and our conclusions are drawn in section [ section : conclu ] .in this section , we derive the crb for estimation of parameter vector from the distribution in .the fisher information matrix ( fim ) for the problem at hand can be written as where we used the fact that hence , the total fim is the sum of two matrices , with straightforward definition from . in order to derive each matrix, we will make use of the general expression of the fisher information matrix for ecd recently derived in .first , let us introduce where . 
then, we have from that the -th element of the fisher information matrices is given by { { \mathrm{tr}}\{{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}_{j}}\ } } { { \mathrm{tr}}\{{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}_{k}}\ } } \nonumber \\ & + \frac{\alpha_{2 } { t_{p}}}{m(m+1 ) } { { \mathrm{tr}}\{{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}_{j}}{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}_{k}}\}}\end{aligned}\ ] ] { { \mathrm{tr}}\{{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}_{j}}\ } } { { \mathrm{tr}}\{{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}_{k}}\ } } \nonumber \\ & + \frac{\alpha_{2 } { t_{s}}}{m(m+1 ) } { { \mathrm{tr}}\{{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}_{j}}{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}_{k}}\}}\end{aligned}\ ] ] where .since depends only on , it follows that takes the following form with { { \mathrm{tr}}\{{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}^n_{j}}\ } } { { \mathrm{tr}}\{{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}^n_{j}}\ } } \nonumber \\ & + \frac{\alpha_{2 } { t_{s}}}{m(m+1 ) } { { \mathrm{tr}}\{{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}^n_{j}}{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}^n_{k}}\}}\end{aligned}\ ] ] where .let us now consider .using the fact that depends only on and depends only on , is block - diagonal , i.e. , with { { \mathrm{tr}}\{{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}^n_{j}}\ } } { { \mathrm{tr}}\{{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}^n_{j}}\ } } \nonumber \\ & + \frac{\alpha_{2 } { t_{p}}}{m(m+1 ) } { { \mathrm{tr}}\{{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}^n_{j}}{{{\mathbf{r}}}^{-1}}{{{\mathbf{r}}}^n_{k}}\}}.\end{aligned}\ ] ] the whole fim is thus given by the crb for estimation of is obtained as the upper - left block of the inverse of the fim and is thus simply .similarly to the gaussian case , the crb for estimation of in the conditional model is the same as if was known .as for the crb for estimation of , it is the same as if we had a set of noise only samples .we address the specific problem where the primary data can be written as where follows a gamma distribution with shape parameter and scale parameter , i.e. , its p.d.f. is given by which we denote as , and .the noise component is known to follow a distribution and in admits a ces representation similar to with . the p.d.f . of in this case is given by where is the modified bessel function .note that the -th order moment of is where we used the fact that the density generator is thus here where , for the sake of notational convenience , we have dropped the subscript .the fim for -distributed noise can be obtained from the fim for gaussian distributed noise and the calculation of the scalar ^{2}\right\}}\ ] ] for .for the signal parameters part only , we indeed have where the subscript and stand for -distributed and gaussian distributed noise . 
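for reference, the k-distributed noise model sketched above can be written compactly as follows. this is a cleaned-up restatement under the standard compound-gaussian conventions (gamma texture of shape ν and scale β, speckle with scatter matrix R); the normalization is the textbook result and may differ from the paper's exact notation.

```latex
% gamma texture and compound-Gaussian snapshot
p(\tau) = \frac{\tau^{\nu-1}\,e^{-\tau/\beta}}{\Gamma(\nu)\,\beta^{\nu}}, \quad \tau>0,
\qquad
\mathbf{n} = \sqrt{\tau}\,\mathbf{c}, \quad \mathbf{c}\sim\mathcal{CN}(\mathbf{0},\mathbf{R}).

% integrating the texture out gives the K-distributed density,
% with q = \mathbf{n}^{H}\mathbf{R}^{-1}\mathbf{n}:
p(\mathbf{n}) = \frac{2}{\pi^{M}\,|\mathbf{R}|\,\Gamma(\nu)\,\beta^{(\nu+M)/2}}\;
q^{\frac{\nu-M}{2}}\,K_{M-\nu}\!\bigl(2\sqrt{q/\beta}\bigr).
```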
using the fact that , it follows that \nonumber \\ & = - { q}^{\frac{\nu - m-1}{2 } } \beta^{-1/2 } k_{m+1-\nu}\left(2 \sqrt{{q}/ \beta}\right).\end{aligned}\ ] ] itthen ensues that and thus ^{2 } \right\ } } \nonumber \\ & = \frac{2 \beta^{-\frac{\nu+m}{2}-1}}{\gamma(\nu ) \gamma(m ) } \int_{0}^{\infty } { q}^{\mu-2 + \frac{\nu+m}{2 } } \frac{k^{2}_{m+1-\nu}\left(2 \sqrt{{q}/ \beta}\right)}{k_{m-\nu}\left(2 \sqrt{{q}/ \beta}\right ) } d { q}\nonumber \\ & = \frac{\beta^{\mu-2}}{2^{2\mu+\nu+m-4}\gamma(\nu ) \gamma(m ) } \int_{0}^{\infty } z^{2\mu+\nu+m-3 } \frac{k^{2}_{m+1-\nu}\left(z\right)}{k_{m-\nu}\left(z\right ) } d z.\end{aligned}\ ] ] a formula for the fim in case of -distributed noise was derived in based on the compound gaussian representation .while it resembles our derivations based on the fim for ecd derived in , it does not match exactly our expression herein .moreover , we study herein the _ existence of the fim _ and derive _ a closed - form approximation of the fim_. let us investigate the conditions under which the integral converges . towards this end , let us use the following inequality which holds for and it follows that the first integral converges for while the second converges for .hence , for , one has accordingly , one has , for which implies that the first integral converges for and the second converges for . in the former case , one has consequently , we conclude that _ the integral converges only for _ : for this implies that which is verified .in contrast , when , one must have . in other words , the term in _ the fim corresponding to the noise parametersis always bounded _ since it depends on only .the situation is different for signal parameters . in an unconditional model where would depend on signal parameters as well, the fim is bounded .in contrast , in the conditional model where signal parameters are embedded in the mean of the distribution , the fim corresponding to signal parameters _ is bounded only for _ : otherwise , it is unbounded .the latter case corresponds to the so - called non regular case corresponding to distributions with singularities , as studied e.g. , in . before pursuing our study of the fim forthe specific case of -distributed noise , let us make an important observation . for the distribution, we have just proven that does not exist for . however , see , exists if and only if and . the latter condition implies that , when , does not exist .observe that convergence of the latter integral is problematic in a neighborhood of , since for , as is a density .therefore , at least for -distributed noise , if does not exist , then is unbounded . at this stage ,one may wonder if this property extends to any other elliptical distribution .it turns out that this is indeed the case , as stated and proved in the next proposition .whatever the p.d.f . of the modular variate , if then . for the sake of notational convenience, we temporarily omit the subscript and use instead of .let us first observe that since , one can write which implies that ^{2}.\end{aligned}\ ] ] therefore ^{2 } p({q } ) d{q}\nonumber \\ & = ( m-1)^{2 } \int_{a}^{b } { q}^{-1 } p({q } ) d{q}- 2 ( m-1 ) \int_{a}^{b } p'({q } ) d{q}\nonumber \\ & + \int_{a}^{b } { q}\left [ \frac{\partial \ln p({q})}{\partial { q } } \right]^{2 } p({q } ) d{q}\nonumber \\ & = ( m-1)^{2 } \int_{a}^{b } { q}^{-1 } p({q } ) d{q}- 2 ( m-1 ) \left [ p(b ) - p(a ) \right ] \nonumber \\ & + \int_{a}^{b } { q}\left [ \frac{\partial \ln p({q})}{\partial { q } } \right]^{2 } p({q } ) d{q}. 
\end{aligned}\ ] ] the third term of the sum is always positive . in the second term , we have that .it follows that divergence of is a sufficient condition for divergence of . as said before exists , andtherefore a sufficient condition for to be undounded is that is unbounded .let us now go back to the -distributed case and investigate whether it is possible to derive a simple expression for and subsequently , assuming that . towards this end ,let us make use of to write that the last term is obviously not possible to obtain in closed - form so that we use a `` large '' approximation of the modified bessel function which results in therefore , we finally have if the large approximation is made from the start , then one has so that and hence figure [ fig : alphamu_approx ] compares the approximations in and , as well as a method which uses random number generation to approximate based on its initial definition in .more precisely , we generated a large number of random variables and replace the statistical expectation of by an average over the so - generated random variables .as can be observed from figure [ fig : alphamu_approx ] , the approximations provide very close values , which enable one to validate the closed - form expressions in and . in and . and .,width=283 ] we now focus on maximum likelihood ( ml ) estimation of direction of arrival , signal waveforms and covariance matrix in the model where , and .the joint distribution of is given by ^{\frac{\nu - m}{2 } } k_{m-\nu}\left ( 2 \sqrt{{{\mathbf{y}}_{{t_{s}}}}^{h } { { { \mathbf{r}}}^{-1}}{{\mathbf{y}}_{{t_{s}}}}/ \beta } \right ) \nonumber \\ & \times \prod_{{t_{p}}=1}^{{t_{p } } } \left [ { { { \mathbf{z}}}_{{t_{p}}}}^{h } { { { \mathbf{r}}}^{-1}}{{{\mathbf{z}}}_{{t_{p}}}}\right]^{\frac{\nu - m}{2 } } k_{m-\nu}\left ( 2 \sqrt { { { { \mathbf{z}}}_{{t_{p}}}}^{h } { { { \mathbf{r}}}^{-1}}{{{\mathbf{z}}}_{{t_{p}}}}/ \beta } \right)\end{aligned}\ ] ] where .joint estimation of all parameters appears to be very complicated and hence we will proceed in two steps . at first , we assume that is known and derive the ml estimates of and . then , is substituted for some estimate obtained from observation of only . assuming that is known , one needs to maximize with respect to and ^{h } { { { \mathbf{r}}}^{-1}}\left[{{\mathbf{x}}_{{t_{p}}}}- { { \mathbf{a}}}(\phi ) s_{{t_{p}}}\right ] \right)\ ] ] where is given by .since is monotonically decreasing , see , it follows that is maximized when the argument of is minimized .however , ^{h } { { { \mathbf{r}}}^{-1}}\left[{{\mathbf{x}}_{{t_{p}}}}- { { \mathbf{a}}}(\phi ) s_{{t_{p}}}\right ] \nonumber \\ & = \left[{{\mathbf{a}}}^{h}(\phi ) { { { \mathbf{r}}}^{-1}}{{\mathbf{a}}}(\phi ) \right ] \left| s_{{t_{p } } } - \frac{{{\mathbf{a}}}^{h}(\phi ) { { { \mathbf{r}}}^{-1}}{{\mathbf{x}}_{{t_{p}}}}}{{{\mathbf{a}}}^{h}(\phi ) { { { \mathbf{r}}}^{-1}}{{\mathbf{a}}}(\phi ) } \right|^{2 } \nonumber \\ & + { { \mathbf{x}}_{{t_{p}}}}^{h } { { { \mathbf{r}}}^{-1}}{{\mathbf{x}}_{{t_{p}}}}- \frac{\left| { { \mathbf{a}}}^{h}(\phi ) { { { \mathbf{r}}}^{-1}}{{\mathbf{x}}_{{t_{p}}}}\right|^{2}}{{{\mathbf{a}}}^{h}(\phi ) { { { \mathbf{r}}}^{-1}}{{\mathbf{a}}}(\phi)}.\end{aligned}\ ] ] therefore , for any , is maximized when it ensues that one needs now to maximize , with respect to with . 
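for readability, the completing-the-square step used above can be written out explicitly for the single-source case with known R; this restates the fragments visible above in the standard conditional-ml form, from which the waveform estimate follows directly.

```latex
\bigl[\mathbf{x}_{t}-\mathbf{a}(\phi)s_{t}\bigr]^{H}\mathbf{R}^{-1}\bigl[\mathbf{x}_{t}-\mathbf{a}(\phi)s_{t}\bigr]
=\bigl[\mathbf{a}^{H}(\phi)\mathbf{R}^{-1}\mathbf{a}(\phi)\bigr]
\left|s_{t}-\frac{\mathbf{a}^{H}(\phi)\mathbf{R}^{-1}\mathbf{x}_{t}}{\mathbf{a}^{H}(\phi)\mathbf{R}^{-1}\mathbf{a}(\phi)}\right|^{2}
+\mathbf{x}_{t}^{H}\mathbf{R}^{-1}\mathbf{x}_{t}
-\frac{\bigl|\mathbf{a}^{H}(\phi)\mathbf{R}^{-1}\mathbf{x}_{t}\bigr|^{2}}{\mathbf{a}^{H}(\phi)\mathbf{R}^{-1}\mathbf{a}(\phi)},
\qquad
\hat{s}_{t}=\frac{\mathbf{a}^{H}(\phi)\mathbf{R}^{-1}\mathbf{x}_{t}}{\mathbf{a}^{H}(\phi)\mathbf{R}^{-1}\mathbf{a}(\phi)}.
```

substituting the estimate back leaves only the last two terms, i.e. the whitened residual power q_t(φ) = x_t^H R^{-1} x_t − |a^H(φ) R^{-1} x_t|² / (a^H(φ) R^{-1} a(φ)), which is the per-snapshot statistic that the doa criteria discussed next are built on.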
in order to avoid calculation of a modified bessel function and thus in order to simplify estimation, we propose to make use of the `` large '' approximation of the modified bessel function given in to write this approximation results in an approximate maximum likelihood ( aml ) estimator of which consists in maximizing ^{\nu - m}.\ ] ] note that \ ] ] which should be compared to the concentrated log likelihood function in the gaussian case , as given by .\ ] ] a few remarks are in order about these estimates , in particular about the behavior of the aml estimator in the case of unbounded fim , i.e. , when .first , note that all estimates will be a function of where is the projection onto the orthogonal complement of .compared to , the logarithm operation in will strongly emphasize those snapshots for which is small .let us thus investigate the properties of this statistic , when evaluated at the _ true _ value of signal doa .using the fact that , where and is a short - hand notation for , one has for small ( ) , it follows that , in the vicinity of , the snapshot with minimal is more or less the snapshot for which is minimum , hence the snapshot for which noise power is minimum , which makes sense . if we let , then its cumulative density function ( c.d.f . )is given by } & = 1 - \left ( 1 - { { \mathrm{pr}}\left[\tau_{{t_{p } } } \leq \eta\right ] } \right)^{{t_{p } } } \nonumber \\ & = 1 - \left [ 1 - \gamma\left(\nu , \eta \beta^{-1}\right ) \right]^{{t_{p } } } \end{aligned}\ ] ] which is shown in figure [ fig : cdf_min_tau_t=4 ] .obviously , with small , the snapshot which corresponds to the minimum value of exhibits a very high signal to noise ratio and , due to the emphasizing effect of the operation in , the performance of the aml estimator is likely to be driven mainly by this particular snapshot .this is illustrated in figure [ fig : mse_rknown_order_tau_vs_t_vs_nu ] where we display the mean - square error ( mse ) of the aml estimate which uses all snapshots and the mse of an hypothetical aml estimator which would use only the snapshot corresponding to the minimum value of . the scenario of this simulation is described in the next section .this figure shows a marginal loss of the aml estimator using only , as compared to the full aml estimator , especially for small . . and .,width=283 ] . known , and .,width=283 ] let us thus analyze the behavior of the aml estimators .for the sake of notational convenience , let and denote the aml estimator using snapshots with -distributed noise and the aml estimator using the snapshot corresponding to the minimal , respectively . 
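in practice, the difference between the gaussian criterion and the approximate ml criterion above reduces to replacing a sum of whitened residual powers by a sum of their logarithms (for ν < M the exponent ν − M is negative, so maximizing the product is the same as minimizing the sum of logs). the sketch below contrasts the two on synthetic data, taking the known covariance equal to the identity for brevity; the array geometry, the data generator and all numbers are illustrative assumptions, and the log-sum criterion is offered as a plausible reading of the aml objective rather than a verbatim implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
m, t_p, nu, beta = 8, 16, 0.6, 1.0 / 0.6
phi_true = np.radians(10.0)

def steering(phi):
    return np.exp(1j * np.pi * np.arange(m) * np.sin(phi))     # half-wavelength ULA (assumed)

cn = lambda *shape: (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# K-distributed noise with identity scatter matrix (simplifying assumption)
s = 2.0 * cn(t_p)
x = np.outer(s, steering(phi_true)) + np.sqrt(rng.gamma(nu, beta, t_p))[:, None] * cn(t_p, m)

def residual_powers(phi):
    # q_t(phi) = ||x_t||^2 - |a^H x_t|^2 / (a^H a): power left after projecting out a(phi)
    a = steering(phi)
    proj = x @ a.conj() / np.sqrt(m)
    return np.sum(np.abs(x) ** 2, axis=1) - np.abs(proj) ** 2

grid = np.radians(np.linspace(-30.0, 30.0, 1201))
q = np.array([residual_powers(phi) for phi in grid])            # shape (grid points, t_p)

phi_gauss = grid[np.argmin(q.sum(axis=1))]                      # gaussian conditional ML
phi_aml = grid[np.argmin(np.log(q).sum(axis=1))]                # log-sum criterion (AML-like)

print("true doa      :", round(np.degrees(phi_true), 2), "deg")
print("gaussian ML   :", round(np.degrees(phi_gauss), 2), "deg")
print("log-sum (AML) :", round(np.degrees(phi_aml), 2), "deg")
```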
observe that , when using a single snapshot , minimizing is equivalent to minimizing the gaussian likelihood function in with .since exhibits a high signal to noise ratio , is close to , one can make a taylor expansion and relate the error to the error as where is some vector that depends essentially on the derivatives of and whose expression is not needed here .one can simply notice that would be the same with gaussian noise and a single snapshot , since maximizing or is equivalent when one snapshot is used .this implies that observe that is the mean - square error ( mse ) that would obtained in gaussian noise and a single snapshot , which is about times the mse obtained in the gaussian case and using snapshots , and the latter is approximately the gaussian crb .the mse of depends on where is the minimum value of a set of independent and identically distributed ( actually gamma distributed ) variables .therefore , in order to obtain , one must consider statistics of extreme values , a field that has received considerable attention for a long time , see e.g. , .it turns out that only asymptotic ( as ) results are available and we build upon them to derive the rate of convergence of .first , note that } & = { { \mathrm{pr}}\left[u_{{t_{p } } } \geq { t_{p}}^{-1/ \nu } x\right ] } \nonumber \\ & = \left ( { { \mathrm{pr}}\left[\tau_{{t_{p } } } \geq { t_{p}}^{-1/ \nu } x\right ] } \right)^{{t_{p } } } \nonumber \\ & = \left(1 - { { \mathrm{pr}}\left[\tau_{{t_{p } } } \leq { t_{p}}^{-1/ \nu } x\right ] } \right)^{{t_{p } } } \nonumber \\ & = \left [ 1 - \gamma\left(\nu,\beta^{-1}{t_{p}}^{-1/ \nu } x\right ) \right]^{{t_{p}}}.\end{aligned}\ ] ] now since is small and is large , is very small and we can approximate ^{-1 } y^{a} ] where is the half - power beamwidth of the array .the signal waveform was generated from i.i.d .gaussian variables with power and the signal to noise ratio ( snr ) is defined as .the asymptotic gaussian crb , multiplied by the scalar was used as the bound for -distributed noise . for the regularized covariance matrix estimator of ,the value of was set to . monte - carlo simulations were used to evaluate the mean - square error ( mse ) of the estimates . in figures [ fig : mse_vs_t_ts=32_nu>1 ] and [ fig : mse_vs_t_ts=32_nu<1 ] we plot the crb ( for ) or the lower and upper bounds of when , as well as the mse of the ml and aml estimators , as a function of , and compare the case where is known to the case where it is estimated from with snapshots in the secondary data .the following observations can be made : * there is almost no difference between the mle and the amle , and therefore the latter should be favored since it does not require evaluating modified bessel functions . *the mse in the case where is known is lower than that when is to be estimated , which is expected .however , the difference is smaller when : in other words , it seems that adaptive whitening is not so much penalizing with small while it seems more crucial for .indeed , for small , what matters most is the fact that some snapshots are nearly noiseless , and this is more influential than obtaining a very good whitening . *the decrease of the mse for is roughly of the order . when , this rate is significantly increased and the mse decreases very quickly as , as predicted by the analysis above .this rate of convergence is also observed in figure [ fig : mse_multi_rknown_vs_t_vs_nu ] where we consider a scenario with two sources at . 
* the upper bound in seems to provide quite a good approximation of the actual mse , at least for large enough .the influence of is investigated in figure [ fig : mse_vs_ts_t=16 ] , where one can observe that about is necessary for the performance with estimated to be very close to the performance for known .however , as indicated above , this is less pronounced when , where the difference becomes smaller with lower .finally , we investigate whether the rate of convergence of the mle or amle when varies is impacted by a small amount of gaussian noise .more precisely , we run simulations where the data is generated as where , i.e. , the noise is a mixture of -distributed noise and gaussian distributed noise .the covariance matrix of the noise is now and we use the aml estimator assuming that the noise has a distribution with parameter and known covariance matrix . in figure[ fig : mse_mixture_vs_t ] , we display the mse of the aml estimator versus and versus for different values of .clearly , the rate of convergence of the estimator is affected by a small amount of gaussian noise , even when is small .this indicates that , if noise is not purely -distributed with small , we recover the usual behavior of the mse versus .in this paper we addressed the doa estimation problem in -distributed noise using two data sets .the main result of the paper was to show that , when the shape parameter of the texture gamma distribution is below , the fim is unbounded .on the other hand , for , the fim is bounded and we derived an accurate closed - form approximation of the crb .the maximum likelihood estimator was derived as well as an approximation , which induces non significant losses compared to the exact mle . in the non regular case where , we derived lower and upper bounds on the mean - square error of the ( a)ml estimates and we showed that the rate of convergence of these ( a)ml estimates is about where is the number of snapshots . c. j. coleman .the directionality of atmospheric noise and its impact upon an hf receiving system . in _proceedings 8th international conference hf radio systems techniques _ , pages 363366 , guildford , uk , 2000 .abramovich , g. san antonio , and g. j. frazer . over - the horizon radar signal - to -external noise ratio improvement in over - sampled uniform 2d antenna arrays : theoretical analysis of superdirective snr gains . in _proceedings ieee radar conference _ , pages 15 , ottawa , canada , 29 april-3 may 2013 .abramovich and g. san antonio . over - the horizon radarpotential signal parameter estimation accuracy in harsh sensing environment . in _proceedings ieeee international conference acoustics speech signal processing _ , pages 801804 , florence , italy , 4 - 9 may 2014 . f. pascal , y. chitour , j .-ovarlez , p. forster , and p. larzabal .covariance structure maximum - likelihood estimates in compound gaussian noise : existence and algorithm analysis ., 56(1):3448 , january 2008 .m. n. el korso , a. renaux , and p. forster . under -distributed observation withparameterized mean . in _proceedings 8th ieee sensor array and multichannel signal processing workshop _ , pages 461464 , a corua , spain , 22 - 25 june 2014 .y. i. abramovich and n. k. spencer .diagonally loaded normalised sample matrix inversion ( lnsmi ) for outlier - resistant adaptive filtering . in _proceedings icassp _ , pages 11051108 , honolulu , hi , april 2007 . y. i. abramovich and o. 
besson . regularized covariance matrix estimation in complex elliptically symmetric distributions using the expected likelihood approach - part 1 : the oversampled case . , 61(23):5807-5818 , december 2013 . o. besson and y. i. abramovich . regularized covariance matrix estimation in complex elliptically symmetric distributions using the expected likelihood approach - part 2 : the under-sampled case . , 61(23):5819-5829 , december 2013 .
we consider the problem of estimating the direction of arrival of a signal embedded in -distributed noise , when secondary data which contains noise only are assumed to be available . based upon a recent formula of the fisher information matrix ( fim ) for complex elliptically distributed data , we provide a simple expression of the fim with the two data sets framework . in the specific case of -distributed noise , we show that , under certain conditions , the fim for the deterministic part of the model can be unbounded , while the fim for the covariance part of the model is always bounded . in the general case of elliptical distributions , we provide a sufficient condition for unboundedness of the fim . accurate approximations of the fim for -distributed noise are also derived when it is bounded . additionally , the maximum likelihood estimator of the signal doa and an approximated version are derived , assuming known covariance matrix : the latter is then estimated from secondary data using a conventional regularization technique . when the fim is unbounded , an analysis of the estimators reveals a rate of convergence much faster than the usual . simulations illustrate the different behaviors of the estimators , depending on the fim being bounded or not .
network calculus provides an elegant way to characterize traffic and service processes of network and communication systems . unlike traditional queueing analysisin which one has to make strong assumptions on arrival or service processes ( e.g. , poission arrival process , exponential service distribution , etc ) so as to derive closed - form solutions , network calculus allows general arrival and service processes . instead of getting exact solutions , one can derive network delay and backlog bounds easily by network calculus .deterministic network calculus was proposed in , etc .however , most traffic and service processes are stochastic and deterministic network calculus is often not applicable for them .therefore , stochastic network calculus was proposed to deal with stochastic arrival and service processes .there have been some applications of stochastic network calculus .however , little effort has been made on applying it to multi - access communication systems . in the paper , we take the first step to apply stochastic network calculus to an 802.11 wireless local network ( wlan ) . in particular , we address the following questions : * under what situations can we derive stable backlog and delay bounds ? * how to derive the backlog and delay bounds of an 802.11 wireless node ? *how tight are these bounds when compared with simulations ? in this paper , we answer these questions and make the following contributions : * we derive the general stability condition of a wireless node based on the theorems of stochastic network calculus . from this, we give the specific stability condition of an 802.11 wireless node . *we derive the service curve of an 802.11 node based on an existing model of 802.11 . from the service curve, we then derive the backlog and delay bounds of the node .* the derived bounds are loose in many cases when compared with ns-2 simulations .we discuss the reasons and point out future work .this paper is organized as follows . in section [ sec : snc ] , we give a brief overview of stochastic network calculus . in section [ sec : model ] , we present the stochastic network calculus model of a wireless node . in section [ sec : stab ] , we derive the general stability condition of a wireless node . in section [ sec:802_11 ], we derive the backlog and delay bounds and the stability condition for an 802.11 node . in section [ sec :simulation ] , we compare the derived bounds with simulation results .related work is given in section [ sec : related ] and finally , section [ sec : conclusion ] concludes the paper and points out future directions .in this section , we first review basic terms of network calculus and then cite the results of stochastic network calculus which we will use in this paper .there are various versions of arrival and service curves .we adopt _ virtual backlog centric ( v.b.c ) stochastic arrival curve _ and _ weak stochastic service curve _ in our analysis .we consider a discrete time system where time is slotted ( ) .a process is a function of time . by default, we use to denote the _ arrival process _ to a network element with . is the total amount of traffic arrived to this network element up to time .we use to denote the _ departure process _ of the network element with . is the total amount of traffic departed from the network element up to time .let ( ) represents the set of non - negative wide - sense increasing ( decreasing ) functions . 
clearly , and .for any process , say , we define , for .we define the backlog of the network element at time by and the delay of the network element at by fig .[ fig : curves_eg ] illustrates an example of and with and at . , , and ,title="fig:",scaledwidth=80.0% ] + in deterministic network calculus , can be upper - bounded by an arrival curve .that is , for all , we have where is called the _ arrival curve _ of . a _ busy period _ is a time period during which the backlog in the network element is always nonzero . for any busy period ] , which can be written as where is called the operator of _ min - plus convolution _ and is called the _ service curve _ of the network element .we cite the following definitions and theorems from except that we define definition [ def : stability ] by ourselves .[ def : ac ] a flow is said to have a virtual - backlog - centric ( v.b.c ) stochastic arrival curve with bounding function , denoted by , if for all and all , there holds >x\ } \leq f(x).\end{aligned}\ ] ] originally , in deterministic network calculus , we have for all .however , there is usually some randomness in stochastic arrival processes and may not be upper - bounded by any arrival curve deterministically ( e.g. , traffic arrivals in ] , the output of the flow from the server satisfies we can easily find the weak stochastic service curve of a stochastic strict server by the following theorem .[ theo : strict_sc ] consider a stochastic strict server providing a stochastic strict service curve with an impairment process .if the impairment process has a v.b.c stochastic arrival curve , or , and , then the server provides a weak stochastic service curve with so far , we have cited all results of stochastic network calculus which we will use in this paper .finally , we define stable backlog and stable delay .a natural definition is to check whether the expectation of backlog ( or delay ) is finite .[ def : stability ] the backlog is stable , if similarly , the delay is stable , if we say that the backlog ( or delay ) bound of stochastic network calculus is stable if they can derive stable backlog ( or delay ) .in this section we model a wireless node ( not restricted to 802.11 ) by stochastic network calculus . in general , we can define one slot ( ) to be any duration of time and measure traffic amount in any unit ( e.g. bits , bytes or packets ) .we consider a wireless node .let denote the traffic arrived at the node from the application layer .suppose is -upper constrained . from theorem [ theo : ac_theta ] , we have , where for any .we can model a wireless node by a stochastic strict server .let the channel capacity be traffic unit per slot .the departure process during any backlogged period ] and we resort to numerical methods and use algorithm 2 to get a near - optimal solution ( see appendix a-2 ) . as noticed recently by researchers of network calculus ,the delay bound in theorem [ theo : backlog_delay ] often returns trivial results . in our model in section [ sec : model ] , it is easy to see that .we propose the following way to estimate delay bound .little s law states that the average number of customers in a queueing system is equal to the average arrival rate of customers to that system , times the average time spent in that system .let the average arrival rate is . 
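the backlog and delay definitions and the little's law argument used in this section translate directly into a few lines of simulation. the sketch below feeds poisson arrivals into a slotted fifo server with fixed per-slot capacity, tracks the cumulative arrival and departure processes, and checks little's law and markov's inequality empirically; the delay counts the arrival slot as one slot of sojourn, the constant-rate server is a plain stand-in rather than an 802.11 model, and all parameter values are illustrative.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(4)
T, C, lam = 100_000, 2, 1.5            # slots, packets served per slot, mean arrivals per slot

fifo, delays, backlog = deque(), [], np.zeros(T, dtype=int)
A = Astar = 0                           # cumulative arrivals A(t) and departures A*(t)
for t in range(T):
    for _ in range(rng.poisson(lam)):   # arrivals of slot t join the FIFO queue
        fifo.append(t)
        A += 1
    backlog[t] = A - Astar              # B(t) = A(t) - A*(t), sampled after arrivals, before service
    for _ in range(min(C, len(fifo))):  # serve up to C packets in this slot
        delays.append(t - fifo.popleft() + 1)   # sojourn in slots, counting the arrival slot
        Astar += 1

delays = np.array(delays)
lam_hat = len(delays) / T
print(f"mean backlog E[B]        : {backlog.mean():.3f}")
print(f"lambda * mean delay E[D] : {lam_hat * delays.mean():.3f}   (little's law)")
d = 10
print(f"Pr[D > {d}] = {np.mean(delays > d):.4f}  <=  E[D]/{d} = {delays.mean() / d:.4f}  (markov)")
```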
assume the system can reach _ steady state _when .then we have the average backlog is and the average delay of each packet is greater than or equal to ( by its definition in eq .( [ eq : delay ] ) , can be less than the delay of the bottom - of - line packet at ) .therefore , by little s law , we have finally , we apply markov s inequality to the above equation and we have besides , according to eq .( [ eq : eb ] ) , . andwe can use eq .( [ eq : my_backlog_bound ] ) to bound .note that eq .( [ eq : my_delay_bound ] ) is derived when . in practice, we can use this result to estimate delay bound when is sufficiently large .in this section , we use ns-2 simulations to verify our derived backlog and delay bounds for poisson and constant bit rate ( cbr ) traffic arrivals .we carry out all experiments for scenario 1 ( fig .[ fig : scen1 ] ) .each simulation duration is 100 seconds ( _ s _ ) which is long enough to let a node transmit thousands of packets .each data point ( e.g. ) is calculated over 100 independent simulations .let be the average traffic rate ( packets / slot ) .in this case , we have ( see definition [ def : er ] ) . for poisson traffic , we have where is the probability of packets arriving within ] at and , and in ns-2 simulations .we observe that there is sudden jump when , indicating the critical point of stability is indeed around .this figure also indicates the accuracy of the 802.11 model in eq .( [ eq : tau_gamma ] ) and eq .( [ eq : gamma_tau ] ) .( ) when ,title="fig:",scaledwidth=60.0% ] + * experiment 1*(scenario 1 with low poisson traffic load ) we set to simulate low traffic load .[ fig : plot_10_poisson_low](a ) shows in ns-2 simulations and fig .[ fig : plot_10_poisson_low](b ) shows the upper bound of calculated by eq .( [ eq : my_backlog_bound ] ) .note that stochastic network calculus gives very loose upper bounds. there may be two reasons .one is that we use the worst - case analysis in deriving the weak stochastic service curve of 802.11 .the other is that there are many relaxations in proving the theorems of stochastic network calculus .for example , relaxations are used in deriving in theorem [ theo : ac_theta ] , and this theorem is popularly used in deriving arrival curves and service curves ( see section [ sec : model ] ) .the first reason may not be the key reason because we will see in experiment 2 the bound is even looser when we increase arrival rate and make the channel near saturated .the second reason seems to be the key reason .we will see in experiment 3 that backlog bounds improve substantially for cbr traffic where we are able to derive the arrival curve by hand without using theorem [ theo : ac_theta ] .this indicates that refinements are needed in stochastic network calculus so as to tighten the bounds .moreover , we found that backlog bounds are sensitive to adjusting parameters ( i.e. , , , and ) .so it is necessary to use algorithm 2 to minimize the bounds .we also conduct simulations to verify delay bounds at . since the backlog bounds are too loose , in order to avoid trivial validation , we use in ns-2 simulations to validate eq .( [ eq : little ] ) and eq .( [ eq : my_delay_bound ] ) ( assume that is sufficiently large so that we can apply these equations ) . actually , we get by eq .( [ eq : little ] ) , which tightly bounds in ns-2 simulations . fig .[ fig : plot_10_poisson_low](c ) shows in ns-2 simulations and fig .[ fig : plot_10_poisson_low](d ) shows the upper bound of calculated by eq .( [ eq : my_delay_bound ] ) . 
clearly , is upper - bounded by eq .( [ eq : my_delay_bound ] ) .( a ) ( b ) upper bound of ( c ) ( d ) upper bound of ,title="fig:",scaledwidth=60.0% ] + * experiment 2*(scenario 1 with high poisson traffic load ) we set to simulate high traffic load . fig .[ fig : plot_10_poisson_high](a ) shows in ns-2 simulations and fig .[ fig : plot_10_poisson_high](b ) shows the upper bound of calculated by eq .( [ eq : my_backlog_bound ] ) . in this case, ( ) is much smaller than that of experiment 1 so as to satisfy the constraint in eq .( [ eq : my_backlog_bound ] ) , resulting in looser and . therefore , stochastic network calculus gives further loose backlog bounds .( a ) ( b ) upper bound of ( c ) ( d ) upper bound of ,title="fig:",scaledwidth=60.0% ] + we also conduct simulations to verify delay bounds at . since the backlog bounds are too loose , in order to avoid trivial validation , we use in ns-2 simulations to validate eq .( [ eq : little ] ) and eq .( [ eq : my_delay_bound ] ) ( assume that is sufficiently large so that we can apply these equations ) . actually , we get by eq .( [ eq : little ] ) , which tightly bounds in ns-2 simulations . fig .[ fig : plot_10_poisson_high](c ) shows in ns-2 simulations and fig .[ fig : plot_10_poisson_high](d ) shows the upper bound of calculated by eq .( [ eq : my_delay_bound ] ) . clearly , is upper - bounded by eq .( [ eq : my_delay_bound ] ) .let be the average traffic rate ( packets / slot ) .in this case , we have .it is easy to see that < 1 ] at for , and in ns-2 simulations .we observe that there is sudden jump when , indicating the critical point of stability is indeed around .this figure also indicates the accuracy of the 802.11 model in eq .( [ eq : tau_gamma])and([eq : gamma_tau ] ) .( ) when ,title="fig:",scaledwidth=60.0% ] + * experiment 3*(scenario 1 with low cbr traffic load ) we set to simulate low traffic load .[ fig : plot_10_cbr_low](a ) shows in ns-2 simulations and fig .[ fig : plot_10_cbr_low](b ) shows upper bound of calculated by eq .( [ eq : my_backlog_bound ] ) .the backlog bounds are much tighter in cbr traffic than those in poisson traffic ( see experiment 1 ) .the main reason is that we can derive a tight by hand instead of by theorem [ theo : ac_theta ] .( a ) ( b ) upper bound of ( c ) ( d ) upper bound of ,title="fig:",scaledwidth=60.0% ] + we also conduct simulations to verify delay bounds at . since the backlog bounds are still loose , in order to avoid trivial validation , we use in ns-2 simulations to validate eq .( [ eq : little ] ) and eq .( [ eq : my_delay_bound ] ) ( assume that is sufficiently large so that we can apply these equations ) . actually , we get by eq .( [ eq : little ] ) , which tightly bounds in ns-2 simulations . fig .[ fig : plot_10_cbr_low](c ) shows in ns-2 simulations and fig .[ fig : plot_10_cbr_low](d ) shows the upper bound of calculated by eq .( [ eq : my_delay_bound ] ) . clearly , is upper - bounded by eq .( [ eq : my_delay_bound ] ) . *experiment 4*(scenario 1 with high cbr traffic load ) we set to simulate high traffic load . fig .[ fig : plot_10_cbr_high](a ) shows in ns-2 simulations and fig .[ fig : plot_10_cbr_high](b ) shows the upper bound of calculated by eq .( [ eq : my_backlog_bound ] ) .the backlog bounds are much tighter in cbr traffic than those in poisson traffic ( see experiment 2 ) because is tight here .( a ) ( b ) upper bound of ( c ) ( d ) upper bound of ,title="fig:",scaledwidth=60.0% ] + we also conduct simulations to verify delay bounds at . 
since the backlog bounds are still loose , in order to avoid trivial validation , we use in ns-2 simulations to validate eq .( [ eq : little ] ) and eq .( [ eq : my_delay_bound ] ) ( assume that is sufficiently large so that we can apply these equations ) . actually , we get by eq .( [ eq : little ] ) , which is close to ( although does not bound ) in ns-2 simulations .[ fig : plot_10_cbr_high](c ) shows in ns-2 simulations and fig . [ fig : plot_10_cbr_high](d ) shows the upper bound of calculated by eq .( [ eq : my_delay_bound ] ) .clearly , is upper - bounded by eq .( [ eq : my_delay_bound ] ) . to sum up, the current version of stochastic network calculus often derives loose bounds when compared with simulations , especially in the case of high traffic load .therefore , stochastic network calculus may not be effective in practice .in this section , we first present relate work on stochastic network calculus and then on the performance analysis of 802.11 .the increasing demand on transmitting multimedia and other real time applications over the internet has motivated the study of quality of service guarantees . towards it ,stochastic network calculus , the probabilistic version of the deterministic network calculus , has been recognized by researchers as a promising step . during its development ,traffic - amount - centric ( t.a.c ) stochastic arrival curve is proposed in , virtual - backlog - centric ( v.b.c ) stochastic arrival curve is proposed in and maximum - backlog - centric ( m.b.c ) stochastic arrival curve is proposed in .weak stochastic service curve is proposed in and stochastic service curve is proposed in . in , jiang showed that only the combination of m.b.c stochastic arrival curve and stochastic service curve has all five basic properties required by a network calculus ( i.e. , superposition , concatenation , output characterization , per - flow service , service guarantees ) and the other combinations only have parts of these properties .jiang also proposed the concept of stochastic strict server to facilitate calculation of stochastic service curve .moreover , he presented independent case analysis to obtain tighter performance bounds for the case that flows and servers are independent .however , there are a few bugs in his results recently found by researchers of network calculus , such as the trivial delay bound in theorem 3.5 and theorem 5.1 .therefore , we adopt v.b.c stochastic arrival curve and weak stochastic service curve in our study since we only consider backlog and delay bounds ( i.e. , service guarantee ) , ignoring the other properties .there have been some applications of stochastic network calculus . in ,jiang et al . analyzed a dynamic priority measurement - based admission control ( mbac ) scheme based on stochastic network calculus . in ,liu et al . applied stochastic network calculus to studying the conformance deterioration problem in networks with service level agreements . in ,based on stochastic network calculus , x. yu et al .developed several upper bounds on the queue length distribution of generalized processor sharing ( gps ) scheduling discipline with long range dependent ( lrd ) traffic .they also extended the gps results to a packet - based gps ( pgps ) system .finally , agharebparast et al . 
modeled the behavior of a single wireless link using stochastic network calculus , little effort has been made on applying stochastic network calculus to multi - access communication systems such as 802.11 .existing work on the performance of 802.11 has focused primarily on its throughput and capacity . in ,bianchi proposed a markov chain throughput model of 802.11 . in , kumarproposed a probability throughput model which is simpler than bianchi s model . in our paper , we adopt kumar s model to derive the service curve of 802.11. there are also some work on queueing analysis of 802.11 . in , zhai et al . assumed poisson traffic arrival and proposed an m / g/1 queueing model of 802.11 . more generally , tickoo proposed a g / g/1 queueing model of 802.11 . to our best knowledge ,we are the first to model the queueing process of 802.11 based on stochastic network calculus .in this paper , we presented a stochastic network calculus model of 802.11 . from stochastic network calculus ,we first derived the general stability condition of a wireless node .then we derived the stochastic service curve and the specific stability condition of an 802.11 node based on an existing model of 802.11 .thus , we obtained the backlog and delay bounds of the node by using the corresponding theorem of stochastic network calculus .finally , we carried out ns-2 simulations to verify these bounds .there are some open problems for future work .first , we derived the service curve based on an existing 802.11 model .thus , the accuracy of the service curve depends on the accuracy of the model. an open question may be whether we can derive the service curve of 802.11 without using any existing models .second , we assumed the worst - case condition ( i.e. , saturate condition ) in our analysis .can we remove this conservative assumption ? besides , under the worst - case assumption , we can assume flows and servers are independent and perform independent case analysis obtaining tighter backlog and delay bounds .this is also one of our future work .third , we observe that the derived bounds are loose when compared with ns-2 simulations , calling for further improvements in the current version of stochastic network calculus .y. jiang , p. emstad , a. nevin , v. nicola , and m. fidler , `` measurement - based admission control for a flow - aware network , '' procs . , _eurongi 1st conference on next generation internet networks - traffic engineering _ ,2005 .f. agharebparast and v.c.m .leung , `` link - layer modeling of a wireless channel using stochastic network calculus , '' _canadian conference on electrical and computer engineering _ , vol .4 , pp . 1923 - 1926 , 2004 . 1. let .obviously , is an increasing function of with .we define axes and axes ( vertical to ) on a plane , and plot on it .we define the slope of , .we calculate for until it converges at some , i.e. , where is a small number , e.g. .we draw a straight line with the slope crossing the point . obviously , the line crosses the point . the maximum displacement between and ( in the direction of ) , .we shift by in the direction of and get .clearly , upperbounds . in other words, we have and . in each iteration, we generate a sample of , , and . if they satisfy the condition of eq .( [ eq : my_backlog_bound ] ) , we calculate $ ] for the current and past iterations until it converges .sample generations can use the interpolation or monte carlo method over valid ranges of the variables .
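the parameter search described in appendix a-2 is essentially a constrained random search. the sketch below shows that generic pattern on a placeholder bound function with a simple feasibility test; the function, the constraint and the sampling ranges are hypothetical stand-ins, since the actual bound involves quantities whose expressions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

def bound(theta, sigma, rho):
    # hypothetical stand-in for a backlog-bound expression to be minimized
    return np.exp(-theta * sigma) / (1.0 - np.exp(-theta * (1.0 - rho)))

def feasible(theta, sigma, rho):
    # hypothetical stand-in for the validity condition of the bound
    return theta > 0.0 and sigma > 0.0 and 0.0 < rho < 1.0

def random_search(n_iter=50_000, tol=1e-9, patience=5_000):
    best, stalled = np.inf, 0
    for _ in range(n_iter):
        theta = rng.uniform(0.01, 5.0)
        sigma = rng.uniform(0.1, 20.0)
        rho = rng.uniform(0.01, 0.99)
        if not feasible(theta, sigma, rho):
            continue
        val = bound(theta, sigma, rho)
        if val < best - tol:
            best, stalled = val, 0
        else:
            stalled += 1
            if stalled > patience:       # crude convergence test, as in the appendix
                break
    return best

print("near-optimal bound value:", random_search())
```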
stochastic network calculus provides an elegant way to characterize traffic and service processes . however , little effort has been made on applying it to multi - access communication systems such as 802.11 . in this paper , we take the first step to apply it to the backlog and delay analysis of an 802.11 wireless local network . in particular , we address the following questions : in applying stochastic network calculus , under what situations can we derive stable backlog and delay bounds ? how to derive the backlog and delay bounds of an 802.11 wireless node ? and how tight are these bounds when compared with simulations ? to answer these questions , we first derive the general stability condition of a wireless node ( not restricted to 802.11 ) . from this , we give the specific stability condition of an 802.11 wireless node . then we derive the backlog and delay bounds of an 802.11 node based on an existing model of 802.11 . we observe that the derived bounds are loose when compared with ns-2 simulations , indicating that improvements are needed in the current version of stochastic network calculus .
network routing based on content identifiers has recently become a topic of extensive discussion , due to the benefits that could be provided by a location - independent data distribution network , more commonly referred to as an information - centric network ( icn ) . for instance , the icn _ request - response _ mode of operation alleviates client mobility issues and natively supports interdomain multicast .furthermore , content security ( as opposed to channel security ) is inherently supported by transmitting signed copies of content .this in turn allows for in - network caching , which can transform the internet into a native content distribution network .finally , as shown recently , the icn paradigm can bring benefits also at the transport layer , where caches can be exploited to alleviate congestion .on the other hand , enormous effort has been spent to de - ossify the end - to - end internet transmission model to enable new functionalities .examples include ip multicast and anycast and supporting ip mobility at the network layer , , . however , the difficulties of deploying those solutions at large scale led to the design of application - layer solutions such as overlay caching instead of native , in - network caching , overlay indirection techniques , , , dnssec and ipsec to enhance security , just to name a few . even though these solutions have the potential to enable new services ( or applications ) , they appear inferior compared to an icn mode of operation , as they can not natively support security , mobility , in - network caching and multicast : in all cases the in - network forwarding entities are forced to operate on the five - tuple ` < sourceip , destinationip , sourceport , destinationport , protocol > ` being , therefore , completely content - agnostic . arguably , the icn paradigm has the potential to deal with the internet s most daunting problems in a native manner . to reach this point ,however , a new architecture based on core icn principles will have to be deployed over the current ip internet architecture , clearly , a rather challenging task . 
in this paper , we identify two main obstacles that hinder the deployment of icn on top of the current internet .these are : _i ) _ scalability of name resolution , a core networking problem and _ ii ) _ content provider - controlled access to content , a business model problem , which however , is deeply integrated into the core networking principles of today s internet and therefore , affects the design of any new architecture .content access control here is linked to content access logging and the transmission of content transparently to the content provider from in - network caches .we discuss each of these two challenges in more detail next .based on these considerations , in this paper , we propose a fully backward - compatible and incrementally - deployable icn - oriented architecture that meets scalability concerns , but at the same time takes into account the business requirements of the main internet market players .two main schools of thought have emerged in the icn - related literature regarding name resolution and name - to - location mapping .the first one , mainly adopted by the original ccn / ndn proposal , advocates the hop - by - hop resolution of _ requests _ or _ interests _ at the data plane .effectively , name resolution is coupled with name - based forwarding with each interest packet being locally resolved to the next ( router ) hop .this approach has the advantage of locally making forwarding decisions , but on the downside , huge volumes of state need to be maintained in ( manually - set ) fib tables .ccn / ndn routers effectively have to keep state _ per packet _ ,an issue traditionally considered as an implementation challenge . to deal withthe scalability problems of the original proposal and the huge state that needs to be kept at all routers , recent developments in the ndn space have proposed an ndn - based dns system , dubbed ndns , as well as the involvement of content providers to help in the name - resolution process .the second school of thought decouples name - resolution from name - based routing by using a separate name - resolution system , similar in nature to dns ( _ e.g. _ , , , , ) .although this approach avoids pushing excessive state to router forwarding tables , it requires the deployment of new infrastructure by operators .for instance , as shown in , the support of the dona architecture at tier-1 autonomous systems ( ases ) requires the deployment of small - to - medium size data centres to support name resolution . such , extra infrastructure built in from scratchhas the obvious downsides of huge investment requirements , as well as the shift challenge to this new mode of operation .moreover , focusing on the practical deployment of icn , the full cycle of the name resolution process still remains unclear .name resolution and data delivery mechanisms often build on the implicit assumption that content names or identifiers are already available to the end users , prior the aforementioned coupled or decoupled resolution steps . 
obviously , developing a mechanism for the retrieval and delivery of content names to the end users raises concerns regarding both scalability aspects related to the enormous size of the namespace , and compatibility issues with respect to both application and network layer interfaces .we also note that the requirements of today s dynamic and interactive applications would not be served adequately by fully transparent in - network caching driven solely from search engine content name results .we discuss and evaluate these concerns later .content providers ( cps ) and cdns require , for commercial and regulatory reasons , full control over the content requested and transferred .this has been largely overlooked by research efforts in the icn area , which have mainly focused on naming schemes and name resolution systems to address scalability issues .for instance , the consensus around opaque and permanent content names ignores the fact that content can be served from isp - operated in - network caches , transparently to the cp or cdn ._ pay per click " _ business models , however , would face significant limitations from this design choice in an icn setting , that would practically prevent cps and cdns from billing their customers .alternative approaches based on isp - cdn collaborations to log content requests can not but be unrealistic : dnss can keep track of requested content and could possibly report back to the relevant cps / cdns .this , however , would mean that slas should be in place between _ all isps and all cps / cdns at a global scale _ , a rather unrealistic assumption .at the same time , transparent in - network caching mechanisms would typically allow only limited control over the content delivered to clients .that is , coarse grained ttl - based mechanisms would be the only means for cps / cdns to manipulate updated content , leading either to the delivery of stale content , or the unnecessary delivery from the cp .that said , active cache purging is another requirement that calls for control of content from cps and cdns .although content access control might sound as a trivial implementation or a business model issue , we argue that it might well hinder the engagement of cps and cdns from the adoption of icn .summarising , we argue that these concerns of : _i ) _ scalability and incremental deployment support of a name - oriented architecture , and _ii ) _ exclusive content access control at the cp side with simultaneous support for transparent in - network caching have been overlooked by the community so far . as a consequence, the full potential of an icn mode of operation has not been exploited in full yet , making the adoption and deployment of the icn paradigm an unrealistic target .although clean slate research has revealed many of the benefits that icn can bring , we argue that deployability has to be put at the forefront of any icn design , rather than being treated as an afterthought .we address the deployability concerns discussed above by introducing a novel information - focused network architecture , which overcomes scalability concerns and is fully backward compatible with the current ip architecture .our proposed architecture first introduces a name resolution process tailored to carefully manage information exposure _ e.g. 
, _ enabling content access logging ( section [ cc::cpbnm ] ) .this name resolution process is combined with a new naming scheme , which builds on the notion of _ ephemeral names _ ( section [ cc : en ] ) ._ name resolution is controlled by content providers _ based on a fully backward - compatible mechanism that supports in - network caching and the direct control of ephemeral names lifetime , thus facilitating content access logging and active purging of stale cached data .the proposed mechanism completes the full cycle of the name resolution process , delivering content names to clients , without imposing any requirement for additional mechanisms . in - network caching , name - based routing and support for network - layer multicast are all integrated in the _ location - independent routing layer _( lira ) , an extra layer in the protocol stack placed at level 3.5 " of the protocol stack , above the ip and below the transport - layer ( section [ cc : cl ] ) .lira absorbs " the location - independence nature of icn , leaving the network layer to operate based on ip addresses .resolution of content names does not rely on large volumes of fib table entries , and routing takes place based on a hybrid of ip addresses ( at the ip layer ) and location - independent transient content names ( at the lira layer ) ( section [ rsn - details ] ) .our design does not require blanket adoption in order to realise the benefits of icn . instead , ispscan incrementally deploy lira nodes with little investment .furthermore , the fact that routing is ( in the worst case ) based on ip addresses guarantees full backward compatibility with the current internet architecture .our results show that even with a subset of nodes upgraded to support lira functionality , our design achieves considerable performance gains ( section [ eval ] ) .in order to deal with the scalability concerns raised above , we design a name resolution scheme which involves the content provider and does not require extra name - based resolution machinery ( _ e.g. , _ , , , ) , or manually - set , bloated fib tables ( _ e.g. , _ , ) .in particular , any user will have to consult the cp ( or cdn ) and ask " for the name/`contentid ` before any content transfer can start ( see next section for details on the ` contentid ` ) .users reach the content provider based on the standard procedure of the current internet , that is , based on urls , dns resolution and ip addresses .this first part of the resolution ( _ i.e. , _ reaching the cp to get the ` contentid ` ) is based on ip addresses and is location - dependent .we note that users do not get the whole of the chunk from the cp ( but only the ` contentid ` ) , which can be served from any other cache in the network . in this way , we realise _ semi - transparent _ in - network content caching , which we argue is in the best interests of both cp / cdns and isps alike . as discussed later on in this section ,the second part of the name resolution , which also leads to the content transfer itself is location - independent , according to the philosophy of icn .summarising , the _ content provider - controlled name resolution procedure " _ introduced here is fully backward compatible and does not require extra investment from isps , or cps / cdns . 
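to make the two - step resolution above concrete , the sketch below walks through the client side of the procedure . it is an illustrative sketch only : the function names are ours , the cid is assumed to arrive in the etag header of an http head response ( as in the resolution walkthrough given later in the text ) , and the lira - layer transmission itself is left abstract , since it needs the layer - 3.5 shim discussed in the following sections .

```python
# client-side sketch of CP-controlled name resolution (illustrative only; the
# CID is assumed to arrive in the ETag header of an HTTP HEAD response).
import socket
import http.client

def get_content_id(host: str, path: str):
    """Step 1 (location-dependent): reach the CP via standard DNS + HTTP and
    obtain the up-to-date ephemeral ContentID for the requested object."""
    cp_ip = socket.gethostbyname(host)              # ordinary DNS resolution
    conn = http.client.HTTPConnection(cp_ip, 80, timeout=5)
    conn.request("HEAD", path, headers={"Host": host})
    cid = conn.getresponse().getheader("ETag")      # ephemeral CID from the CP
    conn.close()
    return cp_ip, cid

def request_chunk(cp_ip: str, cid: str):
    """Step 2 (location-independent): the chunk request carries the CP's IP at
    the IP layer and the CID at the LIRA layer, so any on-path cache or C-FIB
    entry may answer; otherwise plain IP forwarding delivers it to the CP."""
    raise NotImplementedError("needs the LIRA layer-3.5 shim below transport")
```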
to provide full content access control to cps ,we introduce the concept of _ ephemeral names _ , which are used for location - independent content delivery .our primary motivation behind the introduction of _ ephemeral names _ is to avoid dissemination of the name/`cid ` of a content to other users , as this could potentially lead to accessing the content from in - network caches , transparently to the cp / cdn .this section explains the structure , usage and design principles of these names .the lira architecture uses flat names composed of two parts ( see fig .[ ephemeral - names ] ) .the main part of the naming structure , the ` contentid ` , or ` cid ` reflects the name of the content itself and is based on the premise of _ ephemeral or transient names_. according to this concept , content providers choose arbitrary strings and assign them to the content they host .the names are flat , in the sense that they bear no structure related to routing ( _ e.g. , _ aggregation ) ; however , cps may impose structures related to the internal organisation of their content .ephemeral names should be unique to guarantee collision - free name resolution , which can be easily achieved with the use of arbitrary hashes .the names are self - certifying and expire " after some time interval .this transitioning interval should be coarser than the time needed to support in - network caching and multicast ( _ e.g. , _names should not change on a per - request basis ) - see section [ transitioning - interval ] for details . the second part of the _ ephemeral name _, the ` serviceoptions ` , can be used to realise preferential treatment of content .although the use of this part of the name is not necessary in our architecture , and is not necessarily of ephemeral nature , we believe that it can help in the caching and scheduling process .for instance , the ` serviceoptions ` part can be used to flag content that should or should not be cached .we leave such investigations for future work . in case of permanent names , search engines would operate based on names ( similarly to today s operation based on urls ) .this operation is clearly not in the best interests of cps / cdns given the _ pay per click " _ models in use today and transparent in - network caches used in icn .transient names dis - incentivise search engines from disseminating ` cids ` , but at the same time allow for both access logging at the cp / cdn and transparent in - network caching .one might claim that search engines would prefer to provide the ` cid ` directly to users , as this would lead to faster content access ( _ i.e. , _ users would not need the extra rtt to travel to the cp / cdn to get the ` cid ` ) .however , given ( i ) the transient character of names , and ( ii ) the delivery of bundles of ` cid`s by cps ( see section [ transitioning - interval ] ) , this would require search engines to devise mechanisms for retrieving and disseminating ` cid`s each time they change , only to save a single rtt in each bundle .this limits the incentives of search engines to provide ` cid`s without the consent of cps / cdns .moreover , and most importantly , ephemeral names allow cps / cdns to actively control the cached content served to their clients _e.g. , _ by changing the ` cid`s of content chunks existing cached copies get practically invalidated .this is an important feature of the proposed approach , which can not be supported in alternative proposals ( _ e.g. 
, _ ) .the combination of the name resolution at the cp , together with the ephemeral nature of content names supports a number of desirable features .first and foremost , name resolution is under the control of the cp , enabling access logging .secondly , versioning of updated content and purging of old content from in - network caches is also under the control of the cp .although ttl - like techniques , such as the ccn staleness option , can support content updating , it is not easy to set such values given today s interactive applications . setting ttl values for individual content items ( _ e.g. , _ ) would always face the tradeoff of short ttls resulting in unnecessary delivery from the content provider , while longer ttls would result in delivering outdated content .using ephemeral names , cached content can instead be actively invalidated when needed . along the same lines ,the transitioning interval of ephemeral content names is an issue that requires further attention and is related , among others , to the popularity of the content as well as the size of content chunks .frequent change of the name can result in suboptimal performance , since each change purges the content in caches .we deal with this tradeoff by setting the transitioning interval of content names to a value inversely proportional to the popularity of the content itself .popularity is measured by the cp and can be based on the number of requests for the content in question , per some time interval . although more sophisticated settings can be found , with this simple setting for the transitioning interval we avoid changing the ` cid ` of rarely accessed content too frequently , and we also avoid leaving the ` cid ` of popular content the same for too long .finally , to alleviate the need to travel to the cp for every chunk request , we assume that upon each request for a content item , the cp sends back to the client the up - to - date " ephemeral names of the next few subsequent chunks , that is , not only the name of the immediately following one .the number of subsequent ` cid`s sent by the cp to the client is left for future investigation .adding extra functionality , or altering completely the operation of _ existing _ core network protocols can prove difficult to be done incrementally ( _ e.g. , _ipv6 ) and flag - days " are not an option for incorporating new components at a global scale . for these reasons ,we propose _ addition _ instead of _ replacement _ of an extra layer to the protocol stack , which we call _ location - independent routing layer _lira sits on top of the network ( ip ) layer and below the transport layer .it operates based on _ ephemeral names _ and integrates all the required functionality to realise _ location independence _ , taking advantage of _ information centricity _ and its well - known gains .although recent studies have proposed http as the layer that can integrate information or content centricity , here we argue that in order for in - network caching and multicast to be smoothly incorporated in the new ecosystem , any information - centric operation needs to be _ below the transport layer_. otherwise , the transport protocol can merely connect two specific endpoints cancelling any notion of location - independent content transfer . 
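returning to the ephemeral - naming mechanics described above , the sketch below shows one possible way to derive a transient cid that can be recomputed from the content and the current interval , and to set the transitioning interval inversely proportional to popularity . the hash choice , the interval bounds and the request - counting window are our assumptions ; the text only requires flat , unique , expiring names and a popularity - driven interval .

```python
# illustrative sketch only: one way to realise ephemeral names and a
# popularity-driven transitioning interval; concrete parameters are assumptions.
import hashlib
import time

def ephemeral_cid(content: bytes, interval_s: float, service_options: bytes = b"\x00"):
    """Flat, transient ContentID: stable within one transitioning interval and
    changed automatically at the next one, which invalidates previously cached
    copies without any explicit purge message."""
    epoch = int(time.time() // interval_s)          # index of the current interval
    cid = hashlib.sha256(content + epoch.to_bytes(8, "big")).hexdigest()
    return cid, service_options

def transitioning_interval(requests_per_window: int,
                           base_s: float = 3600.0,
                           min_s: float = 60.0,
                           max_s: float = 86400.0) -> float:
    """Interval inversely proportional to popularity: popular content changes
    its CID often (stale copies disappear quickly), while rarely requested
    content keeps its CID longer (caches are not purged needlessly)."""
    return min(max_s, max(min_s, base_s / max(requests_per_window, 1)))
```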
instead , breaking the end - to - end transmission model below the transport layer allows to leverage ( icn enabled ) in - network caching , both in terms of native multi - source routing and localised congestion control , going far beyond traditional ip multicast or anycast mechanisms .lira is implemented in just a small subset of nodes ( see section [ lira - nodes ] ) , which can be transparently planted in the network , and it manages incoming and outgoing content based on their names .the main name management functionality is implemented in a routing table , which we call _ content forwarding information base _ ( c - fib ) ( section [ rsn - table ] ) .a similar notion to the lira layer has been proposed in the past in , but in a totally different context , addressing the exhaustion of ipv4 addresses .the evolution of nat boxes ( together with the painfully slow incremental deployment of ipv6 ) has dealt with this problem and hence , the related efforts became obsolete .the content forwarding information base ( c - fib ) table keeps track of recently requested and served content ( in terms of ` cid`s ) and maintains forwarding information used for the delivery of those content items , providing also support for in - network caching and multicast . upon subsequent request(s ) for a content already in the c - fib table , lira is redirecting requests towards the direction where the content has been sent , or served from , similarly in principle to breadcrumbs routing .we note that _ the c - fib table essentially acts as a cache _ for ` cid`s served recently through this router ( somewhat similarly to and ). however , c - fib table entries are not permanent , as in ccn s fib , but rather are assisting in location - independent content delivery from neighbouring nodes ( see section [ rsn - details ] for details on the c - fib structure ) .the typical structure of the c - fib table is illustrated in table [ table : r1routing - table ] .the table maintains one entry per content chunk .the following information is maintained for each entry : _i ) _ ` cid ` , the content identifier of the chunk , _ ii ) _ , the _ incoming interface _ _i.e. _ , the index of the interface from which the content is received , that is , the content source indicated by the dns , _ iii ) _ , the _ outgoing interface _ _i.e. , _ the index(es ) of the interface(s ) towards which the content is currently being forwarded , _ iv ) _ , the _ temporary interface _ , _i.e. , _ the index(es ) of the interface(s ) where the content has been forwarded , _ , the _ multicast ip _ field that holds ip addresses of clients participating in a multicast session .note that interface entries in the c - fib table denote real interfaces ( _ i.e. , _ directions towards which requests / content should be forwarded ) and not ip addresses of sources / destinations ( apart from the multicast ip field ) . by doingso we realise the _ location independence _property of icn in lira .the lira node structure is the main component of the proposed architecture , which integrates information centricity .lira nodes implement the lira layer with its c - fib table discussed above in order to realise named content management and subsequently location independence .lira nodes also include caches that temporarily store named content chunks ( _ i.e. 
, _ in - network caching ) .although by default all lira nodes include both the c - fib table and content caches , we also evaluate ( in section [ eval ] ) the case of lighter " lira nodes , where , based on node centrality metrics and to facilitate incremental deployment , some nodes implement the c - fib table and some others implement caches .our design does not require all nodes of a domain to become lira nodes and it is operational regardless of this .being always based on ip , nodes fall back to normal ip operation and route towards the direction indicated by location - based addresses .note that all routers maintain the default ip - based fib table . therefore , incompatibility issues or requirements for simultaneous shift to icn operation do not exist . as we show later on in the evaluation section , an average of 50% of nodes within a domain can provide considerable performance gain .careful network planning ( _ e.g. , _ depending on topological issues ) and incremental upgrade of normal routers to lira nodes gives a major advantage to the proposed architecture in terms of deployability compared to other icn architectures .we proceed with the description of the name resolution and content delivery process , illustrated in fig . [fig : summary ] .we then give details of the entries of the _ content forwarding information base _ ( c - fib ) table during the content delivery process .for this purpose we use the network topology presented in fig .[ example ] .tables [ table : r1routing - table ] and [ table : r2routing - table ] are also used to present the entries of the c - fib table(s ) for a sequence of important events taking place in our example scenarios ( denoted with timestamp ) .the name resolution process is initiated through existing protocols ( _ i.e. , _ dns and http ) to guarantee backwards compatibility and facilitate adoption of icn . as a first step ( fig .[ fig : summary ] ) and identically to what is happening today , users resolve urls through a request to the dns .the dns responds with the ip address of the content provider .the user generates an http head request at the application layer . at this stage , routing is location - dependent and is based on the ip address indicated by the dns . at the content layer , the request is asking for the ` cid ` .the cp sends back an http response packet containing the up - to - date name , _i.e. , _ ` cid ` , of the requested content in the etag field of the http response header .the destination ip address of that packet is that of the requesting client .this packet can be piggybacked with data to avoid an extra rtt between the client and the cp . in this case , however , given that requests are sent per chunk , we can not take advantage of in - network caching .this option can be considered in special cases ( _ e.g. , _ when a client is close to the cp and chances of finding the content cached are slim ) .the client issues a request for the first chunk of the content object ( _ e.g. , _ client a in the example of fig .[ example ] ) .the request includes the ip address of the cp at the ip layer and the ` cid ` of the chunk at the lira layer . [ !t ] [ ! 
t ] l |*5c & & & & & + & & 1 & 3 & - & - + & & 1 & - & 3 & - + & & 1 & 2 & 3 & - + & & 1 & - & 2 , 3 & - + & & 1 & 3 & - & - + & & 1 & 2 , 3 & - & b s ip + & & 1 & - & 2 , 3 & - + l |*5c & & & & & + & & 1 & 2 & - & - + & & 1 & - & 2 & - + & & 1 & 2 & - & - + & & 1 & - & 2 & - + lira nodes along the path check the ` cid ` included in the request against the entries of their c - fib table .if an entry for the ` cid ` exists , then they forward according to this entry . if not , they forward according to the ip address .the ip address points to the cp , hence , content can always be resolved according to that in the worst case , _ e.g. , _ in case of lira - incompatible nodes or domains . at this point , assuming the content is not locally cached ( see section [ caching ] for details on in - network caching ) , the request is forwarded towards the cp .the index of the network interface used to forward the request is marked as the for this content chunk ( _ i.e. , _ interface 1 - see time in table [ table : r1routing - table ] ) . at the same time , the index of network interface from which the request was received is marked as an output interface ( interface 3 in our example ) .the content chunk is then sent back from the cp ( or any other cache further down the path ) . during the data transferno change is made in the c - fib table entries of intermediate lira nodes ( time ) .when the chunk transfer completes , which is denoted by an end of chunk ( eoc ) field , the intermediate lira nodes change their c - fib entries for this ` cid ` by marking the interfaces through which they forwarded the data ( _ i.e. , _ ) as ( _ temporary interface _ ) - interface 3 is moved to at in table [ table : r1routing - table ] .this is done since the content can possibly be delivered from there too ( _ i.e. , _ the content has possibly been cached towards this direction ) .when the client sees the eoc field / bit set , it forwards the next request towards the original cp ( similarly to the initial request - step above ) in order to obtain the ` cid ` of the next chunk .lira nodes by default support in - network caching . in the simplest case , on - path in - networkcaching is supported by simply performing a lookup of the ` cid ` of a request message , at the local cache index . in casethe requested content chunk is cached locally , the corresponding data is returned through the network interface the request was received from ( ) . in our example scenario , client b issues a request for content . once the request for reaches , the c - fib table of is updated to include and ( in table [ table : r2routing - table ] ) . then , at time , the request for reaches .content chunk is found cached at whose interface 2 is marked as and the content is sent towards client b. by introducing the field in the c - fib table we further realise off - path in - network caching , as well as user - assisted in - network caching , . when a content chuck is not found in the local cache, the lira node sends the received request towards both the ( permanent ) incoming interface ( as indicated by the name resolution process ) and the temporary interface(s ) . in our example , sends two requests for towards both the ( permanent ) 1 and the 3 ( in table [ table : r1routing - table ] ) . 
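the c - fib structure and the per - request forwarding rule described above can be summarised in the following sketch . it is a simplified in - memory model ( dicts and sets ) rather than a line - speed implementation , interface handling is abstracted away , and the field names mirror the c - fib columns ( cid , incoming , outgoing and temporary interfaces , multicast ips ) .

```python
# simplified model of a C-FIB entry and of the request-handling rule above;
# a real deployment would keep this in a DRAM hash table at line speed.
from dataclasses import dataclass, field

@dataclass
class CFIBEntry:
    cid: str                                     # content identifier of the chunk
    in_iface: int                                # toward the source indicated by DNS/IP
    out_ifaces: set = field(default_factory=set)   # where the chunk is being forwarded now
    tmp_ifaces: set = field(default_factory=set)   # where it was forwarded (likely caches)
    multicast_ips: set = field(default_factory=set)  # clients joined mid-transfer

def handle_request(cid, arrival_iface, ip_next_hop, cfib, cache):
    if cid in cache:                             # on-path cache hit: answer locally
        return "send_chunk", {arrival_iface}
    entry = cfib.get(cid)
    if entry is None:                            # no C-FIB state: plain IP toward the CP
        entry = cfib[cid] = CFIBEntry(cid=cid, in_iface=ip_next_hop)
        entry.out_ifaces.add(arrival_iface)
        return "forward_request", {ip_next_hop}
    if arrival_iface in entry.tmp_ifaces:        # loop avoidance (our reading of the
        return "drop", set()                     # rule discussed below; the exact field
                                                 # name is elided in the extracted text)
    entry.out_ifaces.add(arrival_iface)
    # C-FIB hit: probe the permanent source and any temporary interfaces where
    # the chunk may still be cached (off-path caching); losing paths are pruned.
    return "forward_request", ({entry.in_iface} | entry.tmp_ifaces) - {arrival_iface}
```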
whichever of the two interfaces ( 1 or 3 ) starts receiving the requested data first is marked as the incoming interface for this content and the remaining ( temporary ) interfaces are pruned down .pruning here can be realised through a negative ack ( nack ) packet which travels towards the source of the content .if answers first , the is removed from the corresponding c - fib table entry .alternative strategies can be applied here , by selectively forwarding a request to one or more of the available interfaces _e.g. , _ always forwarding only towards an off - path cache , since requests are always routable to the cp at the ip layer . finally , at time when transfer completes ( from either the local , or a remote cache ) , interface 2 is added to the list of temporary incoming interfaces ( ) at , since can now be found this way too ( similarly to ) .the c - fib table of is also updated to include interface 2 as ( step ) .we note that in order to avoid routing loops in case no other device towards ( client a in our case ) has the content cached , we discard requests ( for items in the c - fib table ) that come in through its marked .this is done because any lira node towards the ( client a in this case ) will forward the request based on its ip address ( carried at the ip layer and always pointing towards the permanent content source , hence through in our example ) if it finds no entry in its c - fib table for the requested content . in turn ,upon receipt of the request , will send the request back towards the same direction ( towards client a here ) , since it still has got the related entry in its c - fib table .this will result in the request travelling back and forth creating an endless routing loop .multicast support is enabled through the use of the and fields of the c - fib table .as described above , during the chunk transfer , the network interface of the lira node where the incoming data is forwarded towards is marked in the field .this entry enables the lira node to suppress any subsequent request for the same content chunk by adding an extra outgoing interface to its c - fib .this is similar to the pit functionality in ccn .note that in all above steps the ip address ( at the ip layer ) of request packets has been pointing to the cp and of content chunks to the corresponding clients .however , in order to realise multicast transmission in this case ( _ i.e. , _ avoid sending a second request for the same chunk towards the same direction ) , the lira node that suppresses subsequent requests needs to keep the extra ip address of the clients that generated the requests .we deal with this situation through the multicast ip " ( ) field in the c - fib table .when data arrives at the branching lira node , it gets forwarded to all interfaces .the entries are used at the ip layer to allow for the delivery of the duplicated data to the requesting recipients .note however that multicast forks further down the path are handled locally . in our example , if an additional client c attaches to and requests for during the multicast session , its request will be suppressed by which will also store client c s ip address in the corresponding field . will not be aware of client c s existence and is responsible for duplicating data for this client .thus , the state load is distributed to the participating lira nodes avoiding the overloading of nodes closer to the root of the multicast tree . in our example network ,client a issues a request for content . 
the c - fib table at marks and for ` cid ` ( step ) .before the transfer of towards a completes through client b issues a request for , which goes through and reaches . updates its c - fib table by putting and ( step ) . does not forward this request further ; instead it adds interface 2 to the field of and also stores the ip address of client b ( taken from the corresponding ip layer field ) in the field ( step ) .when arrives at ( step ) it is forwarded towards client a through , but it is also replicated and forwarded towards client b , through , using as the destination ip address .when the chunk transfer completes , router moves interfaces 2 and 3 and moves interface 2 to the field ( step - table [ table : r1routing - table ] and [ table : r2routing - table ] ) .+ note that the c - fib table introduced here , incorporates the functionality of both the pit and the fib tables of ccn .for as long as the chunk transfer goes on and hence , the field is filled ( and the field is empty - and in s c - fib , see table [ table : r1routing - table ] ) , the c - fib table represents the pit table of ccn / ndn .that is , based on this state , lira nodes are able to collapse / suppress subsequent requests for content already requested ( or under transmission ) and realise multicast . when the chunk transfer completes and the entry in is moved to the field ( and in table [ table : r1routing - table ] ) , then the c - fib table reflects the fib table of ccn / ndn . as mentioned above, however , the c - fib table acts as a cache for recently served content and hence , it does not need to keep huge amounts of state information in the fib part of the c - fib .we discuss and evaluate both parts of the c - fib table later in section [ eval ] .it is generally not common to evaluate a network architecture merely in quantitative terms , given that the contribution of such studies comes mainly at a conceptual level . in our case ,the contribution of the lira architecture comes mainly in terms of incremental deployment with backward compatibility guarantees . at the same time , however , lira can achieve all the quantifiable benefits of an icn mode of operation . 
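before turning to the evaluation , the sketch below complements the previous one with the multicast suppression and end - of - chunk bookkeeping described above : subsequent requests for a chunk already in flight are suppressed , the extra branch and the requesting client s ip are recorded for local duplication , and completed output interfaces are moved to the temporary set . in a complete node the suppression check would run before the forwarding step of `handle_request()`.

```python
# continuation of the previous sketch: multicast suppression and end-of-chunk
# (EOC) handling at a LIRA node; all structures are the simplified ones above.
def suppress_or_forward(cid, arrival_iface, requester_ip, cfib):
    entry = cfib.get(cid)
    if entry is not None and entry.out_ifaces:
        # a transfer for this CID is already in flight: suppress the request,
        # add the new branch, and remember the client's IP for local duplication
        entry.out_ifaces.add(arrival_iface)
        entry.multicast_ips.add(requester_ip)
        return "suppressed"
    return "forward"                             # handled by handle_request() above

def on_end_of_chunk(cid, cfib):
    # when the EOC marker is seen, the interfaces the chunk was sent through
    # become temporary interfaces (the content may now be cached there), and
    # the per-transfer multicast state is cleared
    entry = cfib[cid]
    entry.tmp_ifaces |= entry.out_ifaces
    entry.out_ifaces.clear()
    entry.multicast_ips.clear()
```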
to provide a thorough performance evaluation ,we analyse conceptual and qualitative gains in sec .[ qualitative - eval ] as well as quantitative gains in sec .[ quantitative - eval ] .the quantitative evaluation focuses on the deployment of the lira concept from the operators perspective .in particular , given a fixed monetary budget that the operator is prepared to spend in order to deploy lira , we assess the best strategies of investing the capital in terms of extra equipment , which in our case translates to cache memory and c - fib tables .we also demonstrate and quantify the benefits brought by lira to cps , with a particular focus on cache purging and the control over the freshness of the cached content .* name resolution : * by handing control of the name resolution process to cps , lira avoids the need for either the deployment of a costly name resolution infrastructure , or the investment on in - network resources for the support of line - speed name resolution .the operation of the c - fib as a cache for names/`cid`s is similar to .however , lira does not necessitate the use of an explicit off - path name resolution mechanism , as it rather falls back to ip , in a backward compatible manner .at the same time , by following a backwards compatible http - supported name resolution mechanism , lira presents a complete interface for the interaction of end - hosts with an information - centric network . to the best of our knowledge, no exact mechanism has been proposed for the discovery ( _ i.e. , _ not only resolution ) of content names in alternative icn architectural proposals . * control of content access : * lira enables cps to directly monitor and control the access of end users to their content .route - by - name approaches such as fail to provide such support .cps would be reluctant to accept transparent access to their content , thus dis - incentivising the adoption of such an approach to icn by isps .lookup - by - name approaches , on the other hand , such as and , enable this type of control , by decoupling name resolution from forwarding .however , this comes at the cost of additional name resolution infrastructure and directly places the content access information in the hands of isps ; in turn , this introduces the burden of new ( business and technical ) interfaces between all cps and all isps at a global scale .* mobility : * although the issue of mobility in case of lira requires further investigation and at first sight it might seem that lira can not deal with mobility efficiently , due to its dependence on ip , we note the following : upon a content request , the cp or cdn is sending back to the client the ` cid`s of the next few chunks , _i.e. , _ not just the next one . that said , the clients operate based on ip - agnostic ` cid`s .therefore , client mobility can be natively supported , as clients request for content based on identifiers ( in combination to the ip address at the ip layer ) .source mobility , on the other hand , is an issue that requires further investigation as is the case with all icn architectural proposals . *security : * by supporting self - certifying ` cids ` , lira secures the content itself rather than the communication channel , similarly to other icn architectures _e.g. 
, _ .* implementation : * the proposed lira functionalities can be deployed on nodes with only firmware updates without the need for hardware replacement or upgrade .in fact , by relying on ip forwarding as a fallback in case of c - fib misses , lira will never result in un - routable requests / content even if deployed on just a few nodes and with minimal memory .this is in stark contrast with previous icn proposals like ccn and ndn which require well - dimensioned fib and pit structures to operate correctly and at line speed .c - fib can be loaded in dram , which has been shown to be able to support line - speed per - packet lookups , , is inexpensive and abundant on modern routers based on either network processors or general purpose processors .it is in fact common to have at least a few gbs of spare dram on modern routers . since the binary code implementing lira functionalities is likely to require negligible space ,all available dram can be used for c - fib and caching space .c - fib entries in particular have very low memory requirements .in fact , even assuming that ( i ) the c - fib is implemented using a hash - table with a load factor of and with a circular queue for replacement and ( ii ) the unfavourable case that lira chunks are named using sha-512 hashes and next hops information are coded on 2b , it is still possible to store over 15 million c - fib entries per gb of dram .this makes c - fibs and more generally the lira node architecture easy to incrementally deploy on today s routers . as mentioned earlier, the objective of this section is to evaluate the best possible way to invest in deploying the lira concept , from the operator s perspective .that said , we initially evaluate the main concepts of our proposal with regard to their projected gains in terms of cache hits .although lira is far from a caching - specific architecture , caching is : _i ) _ the only straightforward quantitatively measurable aspect of an icn architecture , and most importantly , _ii ) _ the main feature that requires investment from network operators . for these reasons and without by any means underestimating the gains from the above - mentioned qualitative benefits of the lira architecture , in this section , we focus on the evaluation of the main concepts included in lira as seen from an in - network caching perspective .we use icarus to evaluate the performance of various aspects of our proposed framework based on real isp topologies from the rocketfuel dataset and synthetic workloads . due to space limitations , we omit evaluation of the multicast functionality offered by lira , since the related performance benefit is straightforward. moreover , we only show results for the telstra and abovenet topologies , though we report that we obtain similar results with other topologies as well .we make the code , documentation and data required to reproduce our results publicly available .lira nodes can have a content cache or a c - fib table , or both . given a fixed total cache and c - fib capacity budget , in this section, we identify the best possible combination of cache and c - fib deployment along two dimensions : i ) deployment strategies and ii ) caching strategies .we attempt to capture the interaction dynamics between nodes that cache content and nodes that can route to this content , in a location - independent manner , _i.e. 
, _ through c - fib table entries ._ our first objective is to investigate the effectiveness of c - fib table entries in mapping the content cached in neighbour nodes .our second objective is to see how c - fib table entries eventually translate to cache hits . _modelling the performance of a network of caches is known to be complex , . as a result , it is extremely difficult to formulate optimal cache placement algorithms which are also robust to realistic traffic variations . arguably , the complexity of the optimal cache placement problem is another obstacle hindering icn deployments .therefore , motivated by practical reasons , we propose four simple content cache and c - fib placement algorithms and show that they are sufficient to provide tangible performance gains even with partial deployments . to deploy caches and c - fibs , we rank nodes according to their betweenness centrality ( _ i.e. , _ the amount of traffic traversing them following shortest path routing ) and deploy lira functionality using the following strategies : + _ ( i ) _ cache in top 50% high centrality nodes , c - fib table in all nodes : . + _( ii ) _ cache in top 50% high centrality nodes , c - fib table in top 50% high centrality nodes : . + _( iii ) _ cache in all nodes , c - fib table in all nodes : . + _( iv ) _ cache in all nodes , c - fib table in top 50% high centrality nodes : .we run simulations and measure the mean _ c - fib freshness _ , which we define as the ratio of entries stored in c - fib tables which can correctly route to a copy of a content stored in a nearby cache .this metric captures how well the entries of the c - fib tables deployed in the network reflect the current state of nearby caches .we further characterise the correct c - fib entries by the hop distance to the lira node that caches the corresponding content .note that in all cases , and regardless of the deployment strategy , the ratio of c - fib table to cache entries is fixed ( see next subsection for the evaluation of this ratio ) .as a result , c - fib tables in fewer nodes ( than those that deploy caches ) keep more entries to match the number of cache slots ( and vice versa ) .we also analyse the results under different caching strategies : _ leave copy everywhere _( lce ) , according to which a copy of a content is stored in every cache traversed and _ random choice _ , according to which a content is stored only in one randomly selected caching node along the delivery path .the rationale behind our choice is to evaluate deployment and caching performance under varying _ caching redundancy _ .our results are shown in figs .[ c - fib - freshness ] and [ deployment - strategies ] . +* c - fib efficiency .* first of all , it is important to highlight the fact that the c - fib table entries depict precisely the state of neighbour caches .this is proved by the fact that the _ freshness ratio _ in fig .[ c - fib - freshness ] directly translates to off - path cache gain in fig .[ deployment - strategies ] : for instance , the freshness result in case of in fig .[ freshness-1221 ] indicates that 5% of entries in the c - fib table can correctly route to the content in neighbour caches . in turn , in fig .[ cachehits-1221 ] , the gain from off - path caching ( red , top part of bar ) is 4.5% .this is an important result that highlights the effectiveness of the c - fib table in keeping an accurate record of the state of nearby caches ( _ i.e. , _ up to 3 hops away in our evaluation ) . +* deployment strategy . 
* in terms of c - fib freshness , deploying smaller caches over more / all nodes , _ , seems to be more effective in capturing the state of caches from the c - fib tables ( _ i.e. , _ higher freshness in fig .[ c - fib - freshness ] ) .this is explained by the fact that the monitoring and mapping " mechanism provided by the c - fib table has got a wider view of the neighbourhood and can therefore , find more content items locally .this also translates to more off - path cache hits in fig .[ deployment - strategies ] for . out of the four deployment strategies under consideration here , and consistently perform best in terms of cache hits ( in fig . [ deployment - strategies ] ) .this is irrespective of the freshness result , which shows that freshness improves when caches are deployed over all nodes ( _ i.e. , _ ) . in other words , it is better to have fewer but bigger caches placed in high centrality nodes ( as also shown in ) , rather than having smaller caches deployed in all nodes of the network . + * caching strategy . * as expected , in terms of cache hits , __ always performs best , for all topologies and for all deployment strategies , as a result of its reduced caching redundancy .similar results have been reported before in .lce on the other hand , performs roughly the same across all deployment strategies .note that the lce result in fig .[ deployment - strategies ] effectively reveals the performance of the ccn / ndn architecture . due to space limitations, we do not present a full - fledged comparison between the architectures , but fig .[ deployment - strategies ] reveals very well the cache - related performance of ccn / ndn .we next quantify the performance benefit of off - path , c - fib - routed , caching for various values of the c - fib - to - cache size ratio ( expressed in number of entries ) and for the strategy ( fig .[ hits - vs - c - fib ] ) .considering the overall cache hit ratio ( both on- and off - path ) , we see a considerable increase when moving from a ratio value of 0.25 to a ratio value of 16 , due to c - fib routing redirections .the results are similar for a ratio equal to 32 , but the gain in this case is marginal .therefore , given that larger memory is required in order to deploy c - fib tables 32 times bigger than the entries in the respective caches , we conclude that a value of 16 is optimal .although in absolute values , off - path , c - fib - based , caching contributes less than on - path caching , the gain is still far from negligible ( _ i.e. , _ it can reach up to 50% in fig .[ hits - vs - c - fib ] ) .we report that in our simulations , the gain from off - path caching can reach 100% , effectively doubling the gain from on - path caching .finally , it is interesting to note the slight decrease of on - path cache hit ratio as the c - fib - to - cache size ratio increases .this is attributed to cases where a content request encounters a c - fib table entry and gets diverted to an off - path cache , before it actually hits an existing on - path cache . 
in this case , and given that the c - fib table entry is found earlier in the path , we report that the delay to deliver the content back to the user is even shorter than finding the requested item in an on - path cache .this is especially so in case of lce caching , where due to increased caching redundancy , a copy of a content has good chances of being found along the shortest path .one of the departing points in the design of the lira architecture is the direct control of content by the cps / cdns , as discussed earlier .we identify two main features that give direct control of the content to the cp or cdn .the first one is the control of access logging . in lirathis is accomplished by the content provider - controlled name resolution , where clients need to get the up - to - date ` cid ` from the content provider .this requires an extra rtt to get to the cp or cdn .we remind that according to our discussion in section [ transitioning - interval ] , cps / cdns send more than one ` cid ` to the client , therefore , the journey to the cp / cdn happens rarely during the data transfer , or even only once in case of small files ( _ e.g. , _ web ) .we assume this extra rtt to incur only a tiny performance penalty compared to alternative proposals that do not necessarily require this extra roundtrip .the second feature that provides control of published content to cps is the ability to actively perform cache purging . as described in section [ cc : en ] , when cps change the ` cid ` of a content item , previously cached items no longer get hits from new requests and eventually get evicted ( denoted as _lira w / o replacement _ ) .taking a step further , we consider an extended version of this mechanism , where data packets explicitly indicate the ` cid ` values of the content items that should be immediately evicted from encountered caches ( denoted as _lira w/ replacement _ ) .figure [ fig : purging - cache - hit - ratio ] shows the cache hit ratio of the above mechanisms along with that of a simple ttl - based mechanism , where any cache hit returns the content to the client , even if this content is stale ( _ ttl - based ( all hits ) _ ) for various ttl values . in fig .[ fig : purging - cache - hit - ratio ] , we see that the cache hit ratio in the ( _ ttl - based ( all hits ) _ ) case increases with the value of ttl , since content remains longer in the cache .however , this also means that the corresponding cache hits result in the reception of stale content .[ fig : purging - cache - hit - ratio ] also shows the cache hit ratio for fresh only content ( _ ttl - based ( fresh only ) _ ) , which initially increases , but then steadily drops as a result of high ttl values that increase stale cached content .the _ lira w/ replacement _ mechanism performs considerably better than _ lira w / o replacement _ , as it immediately frees the caching space from unnecessary stale content , and better than its _ ttl - based ( fresh only ) _ counterpart .it must be noted that the _ ttl - based ( fresh only ) _ ratio is only provided as a benchmark , as ttl - based mechanisms can not avoid serving stale content . on the other hand, lira provides a precise mechanism to avoid serving stale cached content altogether .we proceed to evaluate the last of the design targets behind lira , that of incremental deployability . 
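two of the quantitative ingredients used in this evaluation section can be sketched compactly : the c - fib freshness metric defined earlier ( the fraction of c - fib entries that still route to a nearby cached copy , up to three hops in the evaluation ) and the active purging variant compared above . the data structures are plain dicts and sets for illustration , and networkx is used only for shortest - path distances .

```python
# sketches of the C-FIB freshness metric and of active cache purging
# ("w/ replacement"); illustrative only, not the simulator used in the paper.
import networkx as nx

def cfib_freshness(G, cfib_tables, caches, max_hops=3):
    """Fraction of C-FIB entries that can correctly route to a copy of the
    content cached within max_hops of the node holding the entry."""
    total = fresh = 0
    for node, cids in cfib_tables.items():       # node -> set of CIDs in its C-FIB
        nearby = nx.single_source_shortest_path_length(G, node, cutoff=max_hops)
        for cid in cids:
            total += 1
            fresh += any(cid in caches.get(n, set()) for n in nearby)
    return fresh / total if total else 0.0

def purge_with_replacement(cache, stale_cids):
    """Data packets list stale CIDs explicitly; matching cached chunks are
    evicted immediately.  The 'w/o replacement' variant does nothing here:
    renamed content simply stops getting hits and ages out of the cache."""
    return sum(cache.pop(cid, None) is not None for cid in stale_cids)
```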
to assess the performance gain of incrementally deploying the lira architecture , we begin by progressively adding c - fib tables starting from the highest centrality nodes .we evaluate the performance in terms of cache hits in case of caches deployed in 25% , 50% , 75% and 100% of the nodes , starting from the highest centrality ones .we observe in fig .[ fig : incremental - deployment ] that performance stabilises with the c - fib table present in 20 - 30% of the nodes .c - fib in less than 20% results in suboptimal performance , but performance does not increase considerably if we continue adding c - fib to more nodes . in terms of caches , a 25% deployment rate results in poor performance , while the performance does not improve considerably when caches are deployed in more than 50% of nodes . the difference in performance between the 50% and 100% of nodes is in the area of 1% improvement in terms of cache hit ratio for the two topologies shown here ( telstra and abovenet ) .we conclude that adding c - fib to the top 20 - 30% highest centrality nodes and caches to 50%-75% of highest centrality nodes achieves the full performance gain of the lira architecture .although here we present results for telstra and abovenet topologies , our results are consistent along all six evaluated topologies of the rocketfuel dataset .there is a constant trend towards extra flexibility " in communication networks , which started with the shift from ( rigid ) circuit - switching to ( queuing - based ) packet - switching .we see location - independent , information - centric networking as the natural next step towards _ content switching"_. to move towards this direction , however , the research community needs to take into account the interests of the main internet market players , as well as those of users .we argue that icn research so far has focused on designing conceptually sound and scalable name - based routing architectures , but largely ignored any incentives ( provided through those architectures ) to adopt the icn technology .the interests of content providers and cdns are largely different to those of isps and the shift to an icn environment environment makes this difference even more pronounced .that said , unless a shift to an icn environment takes into account the interests of both cps / cdns and isps , the incentives to adopt this technology will be limited . in this paperwe have taken these concerns into consideration and have designed an incrementally - deployable icn architecture .the proposed architecture is based on the location - independent routing layer ( lira ) and directly involves the content provider in the name resolution process .furthermore , ephemeral names give more power to the cps / cdns over the content they publish .our evaluation shows that even with a limited number of nodes implementing the lira architecture , isps achieve a clear performance gain , while at the same time cps / cdns have full control of their content .this work was supported by epsrc uk ( comit project ) grant no .ep / k019589/1 and eu fp7/nict ( greenicn project ) grant no . ( eu ) 608518/(nict)167 .the research leading to these results was funded by the eu - japan initiative under european commission fp7 grant agreement no .608518 and nict contract no .167 ( the greenicn project ) and by the uk engineering and physical sciences research council ( epsrc ) under grant no .ep / k019589/1 ( comit ) .m. dehghan , a. seetharam , b. jiang , t. he , t. salonidis , j. kurose , d. towsley , and r. 
sitaraman . on the complexity of optimal routing and content caching in heterogeneous networks . in _ proc . ieee infocom _ , april 2015 .
we identify the obstacles hindering the deployment of information centric networking ( icn ) and the shift from the current ip architecture . in particular , we argue that scalability of name resolution and the lack of control of content access from content providers are two important barriers that keep icn away from deployment . we design solutions to incentivise icn deployment and present a new network architecture that incorporates an extra layer in the protocol stack ( the _ location independent routing layer _ , lira ) to integrate location - independent content delivery . according to our design , content names need not ( and should not ) be _ permanent _ , but rather should be _ ephemeral _ . resolution of non - permanent names requires the involvement of content providers , enabling desirable features such as request logging and cache purging , while avoiding the need for the deployment of a new name resolution infrastructure . our results show that with half of the network s nodes operating under the lira framework , we can get the full gain of the icn mode of operation .
graph - based codes , such as low - density parity - check ( ldpc ) , turbo , and repeat - accumulate codes , together with belief propagation ( bp ) decoding have shown to perform extremely close to the shannon limit with reasonable decoding complexity .these graph - based codes can be represented by a bipartite tanner graph in which the variable and check nodes respectively correspond to the codewords symbols and the parity check constraints .the error - correcting performance of a code is mainly characterized by the connectivity among the nodes in the tanner graph where the node degree plays an important role . to specify the node degree distribution in the tanner graph ,the concept of degree distribution in either node perspective or edge perspective is introduced .a code ensemble is then defined as the set of all codes with a particular degree distribution . as a unifying framework for graph - based codes , richardson and urbanke proposed multi - edge type low density parity - check ( met - ldpc ) codes .the benefit of the met generalization is greater flexibility in the code structure , which can improve decoding performances .this generalization is particularly useful under traditionally difficult requirements , such as high - rate codes with very low error floors or low - rate codes . a numerical technique , referred to as density evolution ( de ) ,was formulated to analyze the convergence behavior of the bp decoder ( i.e. , the code threshold ) for a given ldpc or met - ldpc code ensemble under bp decoding , where the code threshold is defined as the maximum channel noise level at which the decoding error probability converges to zero as the code length goes to infinity .de determines the performance of bp decoding for a given code ensemble by tracking the probability density function ( pdf ) of messages passed along the edges in the corresponding tanner graph through the iterative decoding process .then , it is possible to test whether , for a given channel condition and a given degree distribution , the decoder can successfully decode the transmitted message ( with the decoding error probability tends to zero as the iterations progress ) .this allows us to design and optimize ldpc and met - ldpc degree distributions using the de threshold ( i.e. , the code threshold found using de ) as the cost function . calculating thresholds and designing ldpc and met - ldpc degree distributions using de are computationally intensive as they require numerical evaluations of the pdfs of the messages passed along the tanner graph edges in each decoding iteration . because of this , for ldpc codes on the binary input additive white gaussian noise ( bi - awgn ) channel , chung _ et al . _ and lehmann and maggio approximated the message pdfs by gaussian pdfs , each using a single parameter , to simplify the analysis .existing work concerning gaussian approximations has relied on four different parameters in order to obtain a single - parameter model of the message pdfs , including mean value of the pdf , bit - error rate ( ber ) , reciprocal - channel approximation ( rca ) and mutual information ( e.g. , exit charts ) .several papers have investigated the accuracy of the gaussian approximation for bp decoding of standard ldpc codes and shown that it is accurate for medium - to - high rates .however in most of the literature regarding de for met - ldpc codes , only the full density evolution ( full - de ) has been studied . 
in full - de, the quantized pdfs of the messages are passed along the edges without any approximation .typically , for full - de , thousands of points are used to accurately describe one message pdf .schmalen and brink have used the gaussian approximation based on the mean of the message pdf to evaluate the behavior of protograph based ldpc codes , which is a subset of met - ldpc codes .the contributions of this paper are as follows : 1 ) we investigate the accuracy of gaussian approximations for bp decoding .we follow the approximation techniques suggested for ldpc codes , which describe each de - message pdf using a single parameter . based on our observations of the evolution of pdfs in the met - ldpc codes, we found that those gaussian approximations are not accurate for the scenarios where met - ldpc codes are useful , i.e. , at low rate and with punctured variable nodes .2 ) in light of this , we propose a hybrid - de method , which combines the full - de and a gaussian approximation .our proposed hybrid - de allows us to evaluate the code threshold ( i.e. , the cost function in the code optimization ) of met - ldpc and ldpc code ensembles significantly faster than the full - de and with accuracy better than gaussian approximations .3 ) we design good met - ldpc codes using the proposed hybrid - de and show that the hybrid - de well approximates the full - de for code design .this paper is organized as follows .section [ background ] briefly reviews the basic concepts of met - ldpc codes . in section [ gaussian app ]we extend gaussian approximations for ldpc codes to met - ldpc codes , and in section [ validity of ga ] , we discuss the accuracy of the gaussian approximations under the conditions where met - ldpc codes are more beneficial .section [ hybrid de ] presents the proposed hybrid - de method , and section [ code design ga ] demonstrates the benefits of code design using the proposed hybrid - de method over existing gaussian approximations .finally , section [ conclusion ] concludes the paper .unlike standard ldpc code ensembles where the graph connectivity is constrained only by the node degrees , in the multi - edge setting , several edge - types can be defined , and every node is characterized by the number of connections to edges of each edge - type . within this framework, the degree distribution of met - ldpc code ensemble can be specified through two node - perspective multinomials related to the variable nodes and check nodes respectively : where and are vectors defined as follows .let denote the number of edge - types corresponding to the graph and denote the number of different channels over which codeword bits may be transmitted .a vector ] has only two entries since in a bi - awgn channel , a codeword bit is either punctured ( the codeword bits not transmitted : ] ) . finally , and are non - negative real numbers corresponding to the fraction of variable nodes of type ( ) and the fraction of check nodes of type in the ensemble , respectively .( resp . , ) represents the unpunctured ( resp . ,punctured ) variable nodes and represents the check nodes .the number of nodes for different edge - types are shown as fractions of the code length , where is the number of transmitted ( i.e. unpunctured ) codeword bits . 
]node - perspective degree distributions can be converted to edge - perspective via the following multinomials , where and are related to the variable nodes and check nodes , respectively : where the rate of a met - ldpc code is given by where denotes a vector of all s with the length determined by the context .a rate met - ldpc code ensemble is shown in fig .[ fig : met_tanner ] , where the node - perspective degree distributions are given by and . here denotes punctured nodes and denotes unpunctured nodes . in the bp decoding algorithm ,messages are passed along the edges of the tanner graph from variable nodes to their incident check nodes and vice versa until a valid codeword is found or a predefined maximum number of decoding iterations has been reached .each bp decoding iteration involves two steps . 1 . a variable node processes the messages it receives from its neighboring check nodes and from its corresponding channel and outputs messages to its neighboring check nodes .a check node processes inputs from its neighboring variable nodes and passes messages back to its neighboring variable nodes . in most cases of binary codes transmitted on bi - awgn channel , thesebp decoding messages are expressed as log - likelihood ratios ( llrs ) .this is used to reduce the complexity of the bp decoder , as multiplication of message probabilities corresponds to the summation of corresponding llrs .let denote the message llr sent by a variable node to a check node along edge , ( i.e , variable - to - check ) at the iteration of the bp decoding , and denote the message llr sent by a check node to a variable node along edge , ( i , e , check - to - variable ) at the iteration of the bp decoding .at the variable node , is computed based on the observed channel llr ( ) and message llrs received from the neighboring check nodes except for the incoming message on the current edge for which the output message is being calculated .thus the variable - to - check message on edge at the decoding iteration is as follows : the message outputs on edge of a check node at the decoding iteration can be obtained from the `` tanh rule '' : for more details we refer readers to ryan and lin .de is the main tool for analyzing the average asymptotic behavior of the bp decoder for met - ldpc code ensembles , when the block length goes to infinity . for bp decoding on a bi - awgn channel , these llr values ( i.e. , )are continuous random variables , thus can be described by pdfs for analysis using de . to analyze the evolution of theses pdfs in the bp decoder, we define which denote the pdf of the variable - to - check message , check - to - variable message and channel llr , respectively . 
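several of the displayed equations in this and the preceding paragraphs were lost in extraction . in the standard richardson – urbanke met notation they read as follows ; this is a reconstruction based on that notation , not a verbatim copy of the paper s formulas , and the bp updates are the usual llr - domain rules for a bi - awgn channel .

```latex
% node-perspective MET degree-distribution multinomials and design rate
% (reconstruction; coefficients normalised to the number of transmitted bits):
\nu(\mathbf{r},\mathbf{x}) \;=\; \sum_{\mathbf{b},\mathbf{d}}
    \nu_{\mathbf{b},\mathbf{d}}\,\mathbf{r}^{\mathbf{b}}\mathbf{x}^{\mathbf{d}},
\qquad
\mu(\mathbf{x}) \;=\; \sum_{\mathbf{d}} \mu_{\mathbf{d}}\,\mathbf{x}^{\mathbf{d}},
\qquad
\mathbf{x}^{\mathbf{d}} = \prod_{i=1}^{m_e} x_i^{d_i},\quad
\mathbf{r}^{\mathbf{b}} = \prod_{j} r_j^{b_j},
\qquad
R \;=\; \nu(\mathbf{1},\mathbf{1}) - \mu(\mathbf{1}).

% standard BP update rules in the LLR domain (reconstruction):
v_i^{(\ell)} \;=\; L_{\mathrm{ch}} + \sum_{k \neq i} u_k^{(\ell-1)},
\qquad
\tanh\!\Big(\tfrac{u_i^{(\ell)}}{2}\Big) \;=\;
    \prod_{k \neq i} \tanh\!\Big(\tfrac{v_k^{(\ell)}}{2}\Big),
```

where the sum and the product run over the edges incident to the node other than the edge i on which the outgoing message is sent , and L_ch is the channel llr .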
unlike standard ldpc codes , in the met framework , because of the existence of multiple edge - types , only the incoming messages from same edge - type are assumed to be identically distributed .however , all the incoming messages are assumed to be mutually independent .recall that in met - ldpc codes , a variable node is identified by its type , ( ) , and a check node by its type , .thus from ( [ eq.vn update ] ) the pdf of the variable - to - check message from a variable node type , ( ) , along edge - type at the decoding iteration can be written as follows : ^{\bigotimes ( d_i-1 ) } \\\bigotimes_{k=1 , k\neq i}^{m_e}\left [ f\big({u}^{(\ell-1)}(k)\big)\right]^{\bigotimes d_k},\end{gathered}\ ] ] where denotes convolution .the -fold and -fold convolutions follow from the assumption that the incoming messages from a check node along edge - type are independent and identically distributed and is used to denote this common pdf . is the pdf of the channel llr . from ( [ eq.cn update ] ) the pdf of the check - to - variable message from a check node type , , along edge - type at the decoding iterationcan be calculated as follows : ^{\boxtimes ( d_i-1 ) } \stackrel[k=1 , k\neq i]{m_e}{\boxtimes } \left [ f\big({v}^{(\ell)}(k)\big)\right]^{\boxtimes d_k } , \end{aligned}\ ] ] where is the pdf of the message from a variable node along edge - type at the decoding iteration .the computation of is not as straightforward as that for and requires the transformation of and .so we use to denote the convolution when computing the pdf of for check - to - variable messages . for more detailswe refer readers to richardson and urbanke .in this section , we consider met - ldpc codes over bi - awgn channels with gaussian approximations to de .as already shown for ldpc codes , the pdfs of variable - to - check and check - to - variable messages can be close to a gaussian distribution in certain cases , such as when check node degrees are small and variable node degrees are relatively large .since a gaussian pdf can be completely specified by its mean ( ) and variance ( ) , we need to track only these two parameters during the bp decoding algorithm .furthermore , it was shown by richardson _ et al . _ that the pdfs of variable - to - check and check - to - variable messages and channel inputs satisfy the symmetry condition : where is the pdf of variable .this condition greatly simplifies the analysis because it implies and reduces the description of the pdf to a single parameter .thus , by tracking the changes of the mean ( ) during iterations , we can determine the evolution of the pdf of the check node message , and the variable node message , where is the gaussian pdf with mean and variance . and are the mean of the variable - to - check and check - to - variable messages , respectively . in this subsection, we will extend the gaussian approximation method proposed by chung _ et al . _ for the threshold estimation of standard ldpc codes to that of met - ldpc codes .this method is based on approximating message pdfs using a single parameter , i.e. 
, the mean of the message pdf .recall that a variable node is identified by its type , ( ) , and a check node by its type , .since the pdfs of the messages sent by the variable node are approximated as gaussian , the mean of the variable - to - check message from a variable node type , ( ) , along edge - type at the decoding iteration is given by where is the mean of the message from the channel and is the mean of the check - to - variable message along edge - type at the decoding iteration .the updated mean of the check - to - variable message from check node type of along edge - type at the decoding iteration can be written as ^{d_i-1 } \\\prod_{k=1,k\neq i}^{m_e } \left [ 1- \phi(m_v^{(\ell)}(k))\right]^{d_k } \big),\end{gathered}\ ] ] where is the mean of the variable - to - check message along edge - type at the decoding iteration .the mean of the variable - to - check and the check - to - variable messages along edge - type at the decoding iteration is given by where and are the variable and check node edge - degree distributions with respect to edge - type , respectively and it is important to note that is continuous and monotonically decreasing over with and .in this subsection , we will extend a gaussian approximation method proposed by lehmann _ et al ._ that estimates thresholds of standard ldpc codes to that of met - ldpc codes .this method is based on a closed - form expression in terms of error probabilities ( i.e. , the probability that a variable node is sending an incorrect message ) .consider a check node of type , .the error probability of a check - to - variable message from a check node type , along edge - type at the decoding iteration is given by ,\end{gathered}\ ] ] where is the average error probability of the variable - to - check message along edge - type at the decoding iteration .since we suppose that the all - zero codeword is sent , the error probability of a variable node at the decoding iteration is simply the average probability that the variable - to - check messages are negative .we also assume that the pdf of variable - to - check message is symmetric gaussian ; therefore the error probability of a variable - to - check message from a variable node type , along edge - type at the decoding iteration is given by where is the mean of the variable - to - check message from a variable node type , ( ) , along edge - type at the decoding iteration , and can be calculated using ( [ eq : vn to cn met maen ] ) by substituting for each , where is the average error probability of the check - to - variable message along edge - type at the decoding iteration .the average error probability of the variable - to - check and the check - to - variable messages along edge - type at the decoding iteration is given by where and are the variable and check node edge - degree distributions with respect to edge - type , respectively .in this subsection , we will extend another gaussian approximation method , proposed by chung to estimates thresholds of regular ldpc codes , to that of met - ldpc codes .this method is called reciprocal - channel approximation ( rca ) , which is based on reciprocal - channel mapping and mean ( ) of the node message is used as the one - dimensional tracking parameter for the bi - awgn channel . with the rca technique in de , is additive at the variable nodes similar to approximation 1 ( see ( [ eq : vn to cn met maen ] ) ) . 
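as a concrete illustration of this single - parameter tracking , the sketch below implements the mean - based recursion for a simple regular ( single edge - type ) ldpc ensemble ; extending it to met - ldpc codes would replace the scalar means by one mean per edge - type , following the update equations above . the helper is the standard chung _ et al . _ function $\phi(m) = 1 - \mathrm{e}[\tanh(u/2)]$ with $u \sim \mathcal{n}(m, 2m)$ ; the degrees and channel mean in the example are illustrative assumptions .

```python
import numpy as np
from scipy.integrate import quad

def phi(m):
    # phi(m) = 1 - E[tanh(u/2)] for u ~ N(m, 2m); continuous, monotonically
    # decreasing, phi(0) = 1 and phi(m) -> 0 as m -> infinity
    if m <= 0:
        return 1.0
    s = np.sqrt(2.0 * m)
    f = lambda u: np.tanh(u / 2.0) * np.exp(-(u - m) ** 2 / (4.0 * m))
    val, _ = quad(f, m - 12 * s, m + 12 * s)
    return 1.0 - val / np.sqrt(4.0 * np.pi * m)

def phi_inv(y, lo=1e-9, hi=1e4):
    # numerical inverse by bisection (phi is monotonically decreasing)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > y else (lo, mid)
    return 0.5 * (lo + hi)

def mean_evolution(m_ch, dv, dc, iters=100):
    # approximation-1 style recursion for a regular (dv, dc) ensemble:
    # only the means of the exchanged llr messages are tracked
    m_u = 0.0
    for _ in range(iters):
        m_v = m_ch + (dv - 1) * m_u                        # variable-node update
        m_u = phi_inv(1.0 - (1.0 - phi(m_v)) ** (dc - 1))  # check-node update
    return m_v

# example: a (3, 6) ensemble at an assumed noise level sigma = 0.8,
# with channel llr mean 2 / sigma^2 (the mean grows when decoding succeeds)
print(mean_evolution(m_ch=2.0 / 0.8 ** 2, dv=3, dc=6))
```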
the difference between approximation 1 and approximation 3 is how the check nodes calculate their output messages . instead of evaluating the function in approximation 1 , approximation 3 uses the reciprocal - channel mapping , , which is additive at the check nodes . is defined as follows : where is the capacity of the bi - awgn channel as a function of the mean of the channel message , and then the mean of the check - to - variable message from a check node type , along edge - type at the decoding iteration is given by and can be calculated from ( [ avg_mv ] ) and ( [ avg_mu ] ) , respectively . as we discussed in section [ background ] , in the bp decoder there are three types of messages : the channel message , the variable - to - check message , and the check - to - variable message . we analyze the pdf of these messages on the bi - awgn channel to evaluate the gaussian assumption for de message pdfs . let be a binary codeword ( ) on a bi - awgn channel . a codeword bit can be mapped to the transmitted symbol if and otherwise . then , the received symbol at the output of the awgn channel is where is the energy per transmitted symbol and is the awgn , . the llr ( ) for the received signal is given by assuming that the all - zero codeword is sent and that is 1 , which is a gaussian random variable with mean $\frac{2}{\sigma_n^2}$ . since the variance is twice the mean ( i.e. , $\frac{4}{\sigma_n^2}$ ) , the channel message has a symmetric gaussian distribution . consider the variable node update in ( [ eq.vn update ] ) . in the first iteration of the bp decoding , each variable node receives only a non - zero message from the channel . hence the first set of messages passed from the variable nodes to the check nodes follows a symmetric gaussian pdf . the following theorem describes the variable - to - check message exchanges in the subsequent iterations of the bp decoder . [ variable node update ] the pdf of the variable - to - check message at the decoding iteration ( ) is a gaussian distribution if all check - to - variable messages ( ) are gaussian . if are not gaussian then the pdf of converges to a gaussian distribution as the variable node degree tends to infinity . the update rule at a variable node in ( [ eq.vn update ] ) is the summation of the channel message and the incoming messages from check nodes ( ) . since the channel is bi - awgn , follows a symmetric gaussian distribution . if all ( which are mutually independent ) are gaussian , then is also gaussian , because it is the sum of independent gaussian random variables . if are not gaussian then the pdf of converges to a gaussian distribution as the variable node degree tends to infinity , which follows directly from the central limit theorem . if is non - zero and has a reasonably large mean compared to , then minimizes the effect of non - gaussian pdfs coming from check nodes and tends to sway the variable - to - check message ( ) to be more gaussian . moreover , can be well approximated by a gaussian distribution if the variable node degree is large enough .
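the symmetric - gaussian property of the channel llr discussed above is easy to check numerically ; the noise level below is an arbitrary illustrative choice , and unit symbol energy is assumed as in the text .

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.8                                    # assumed awgn standard deviation
y = 1.0 + sigma * rng.standard_normal(10**6)   # all-zero codeword: +1 transmitted
llr = 2.0 * y / sigma**2                       # channel llr on the bi-awgn channel

print(llr.mean(), 2.0 / sigma**2)              # empirical vs. theoretical mean
print(llr.var(), 4.0 / sigma**2)               # variance is twice the mean
```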
before analyzing the check - to - variable messages ,let us first state a few useful lemmas and definitions , upon which our analysis is based .[ lognormal ] if the random variable is gaussian distributed and , then random variable is said to be lognormally distributed .[ sum of gaussain ] if are independent gaussian random variables with means and variances , and is a set of arbitrary non - zero constants , then the linear combination , follows a gaussian distribution with mean and variance .[ lognormal sum_scaled ] let be a lognormal random variable .then follows a lognormal distribution , where . since is a lognormal random variable then from definition [ lognormal ] , where is gaussian random variable .according to lemma [ sum of gaussain ] , also follows a gaussian distribution .thus from definition [ lognormal ] , follows a lognormal distribution .[ remark_lognormal sum ] the assumption for the gaussian approximation is that the sum of independent lognormal random variables can be well approximated by another lognormal random variable .this has been shown to be true for .now consider the check node update in ( [ eq.cn update ] ) at the decoding iteration .[ check node update ] the pdf of the check - to - variable message at the decoding iteration ( ) is a gaussian distribution provided that the variable - to - check messages are approximately gaussian and reasonably reliable , and the degrees of the check nodes are small . consider the check node with degree .we can rewrite ( [ eq.cn update ] ) as follows : where we define and note that suppose are reasonably reliable . using taylor series expansion, can be expressed as for simplicity , we omitted the indices , .since follows an approximate gaussian distribution , according to definition [ lognormal ] , follows an approximate lognormal distribution and from lemma [ lognormal sum_scaled ] , follows an approximate lognormal distribution . 
using the assumption that the sum of a set of independent lognormal random variables is approximately lognormal when the set size is small ( see remark [ remark_lognormal sum ] ) , ( in step 1 ) follows an approximate lognormal distribution . this is because , when is large with high probability , the higher - order terms in the taylor series expansion of are insignificant compared to the first few terms . next , the are mutually independent . thus , according to remark [ remark_lognormal sum ] , ( in step 2 ) follows an approximate lognormal distribution when the check node degree is small . finally , from definition [ lognormal ] , ( in step 3 ) will follow an approximate gaussian distribution if the result of step 2 is lognormally distributed . [ remark_lognormal sum_not_correct ] the assumption that the sum of independent lognormal random variables can be well approximated by another lognormal random variable is not true when is large . using the above results , we investigate the accuracy of gaussian approximations to full - de of low - rate met - ldpc codes , with punctured and degree - one variable nodes . these are the cases where met - ldpc codes are most beneficial . we also evaluate full - de simulations for these codes to measure how close the actual message pdf is ( under the full - de ) to a gaussian pdf , using the kullback - leibler ( kl ) divergence as our measure . a small value of kl divergence indicates that the actual pdf is close to a gaussian pdf . we calculate the kl divergence between 1 ) the actual message pdf ( under the full - de ) and a gaussian pdf with the same mean and variance , to check whether it follows a gaussian distribution , and 2 ) the actual message pdf ( under the full - de ) and a symmetric gaussian pdf with the same mean , to check whether it follows a symmetric gaussian distribution . in the case of standard ldpc codes , it has been observed that the check - to - variable messages significantly deviate from a symmetric gaussian distribution at low signal - to - noise ratios ( snr ) , even if the variable - to - check messages are close to a gaussian distribution . thus gaussian approximations based on single - parameter models do not perform well for these codes at low snrs . here we explain the reason behind this , based on the assumptions required for gaussian approximations to be accurate . [ figure : met - ldpc code with , compared with the symmetric gaussian pdf of the same mean . ] [ figure : met - ldpc code with . ] according to remark [ check node update ] , the pdf of the check - to - variable messages ( ) can be well approximated by a gaussian pdf if the variable - to - check messages ( ) are reasonably reliable , given that are approximately gaussian and check node degrees are small . at low snr , the initial are not reasonably reliable . thus may not follow a gaussian distribution in early decoding iterations . however , if the snr is above the code threshold , the decoder converges to zero error probability as decoding iterations proceed , thus the pdf of moves to the right and the become more reliable . hence may follow a gaussian distribution at later decoding iterations . to illustrate this via an example , we plot the actual message pdfs in the bp decoding and the kl divergence between the pdf of the check - to - variable message from edge - type two and the corresponding symmetric gaussian pdf for a rate met - ldpc code in figs .
[fig : pdf_r ] and [ fig : kl_low rate ] , respectively . it is clear from fig . [ fig : kl_low rate ] that the kl divergence at a low snr has a larger value than that at high snrs . this shows that significantly deviates from a gaussian distribution when the snr is low . we can also see from figs . [ fig : pdf_r ] and [ fig : kl_low rate ] that follows a gaussian distribution at later decoding iterations , and the lower the snr the more decoding iterations are required for this to happen . based on these observations we claim that single - parameter gaussian approximations may not be a good approximation to de at low snrs . [ figure : met - ldpc code with at the first decoding iteration . ] it has been observed that the check - to - variable messages significantly deviate from a symmetric gaussian distribution when the check node degree is large , even if the variable - to - check messages are close to a gaussian distribution . thus single - parameter gaussian approximation models do not perform well for standard ldpc codes with large check node degrees . here we explain the reason behind this , based on the assumptions required for gaussian approximations to be accurate . according to remark [ check node update ] , the pdf of the check - to - variable messages ( ) can be well approximated by a gaussian pdf when check node degrees are small , given that are approximately gaussian and reasonably reliable . the assumption in step 2 ( see remark [ remark_lognormal sum ] ) , that the sum of independent lognormal random variables can be well approximated by another lognormal random variable , is clearly not true if the check node degree is large ( see remark [ remark_lognormal sum_not_correct ] ) . thus may not follow a gaussian distribution for larger check node degrees . to evaluate the combined effect of the snr and the check node degree , we plot the kl divergence of the check - to - variable message pdfs to the corresponding symmetric gaussian pdfs for a rate met - ldpc code at the first decoding iteration for different snrs and different check node degrees in fig . [ fig : snrandcheckdegree ] . our simulations show that with a large check node degree of 15 , the kl divergence is large . based on this we claim that single - parameter gaussian approximations may not be a good approximation to de for codes with large check node degrees . one of the modifications of met - ldpc codes over standard ldpc codes is the addition of punctured variable nodes to improve the code threshold ( a different use of puncturing than its typical use to increase the rate ) . we observe that these punctured nodes have a significant impact on the accuracy of the gaussian approximation of both variable - to - check messages and check - to - variable messages . according to theorem [ variable node update ] , if are not gaussian , the pdf of the variable - to - check messages ( ) converges to a gaussian distribution as the variable node degree tends to infinity . in the case of punctured nodes , equals zero , as punctured bits are not transmitted through the channel . hence in punctured variable nodes , is equivalent to the sum of only , which are heavily non - gaussian at early decoding iterations . thus if the variable node degree is not large enough , then from punctured variable nodes may not follow a gaussian distribution at early decoding iterations . the punctured variable nodes adversely affect the check - to - variable messages as well .
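before continuing with the punctured - node discussion , here is a sketch of the kl - divergence measurement used for the comparisons in this section ( e.g. , figs . [ fig : kl_low rate ] and [ fig : snrandcheckdegree ] ) . it is a simple histogram - based estimator , not the exact procedure used to produce the figures ; the sample array is assumed to hold llr messages collected from a full - de or monte carlo bp run .

```python
import numpy as np

def kl_to_gaussian(samples, symmetric=False, bins=200):
    # kl divergence d(p || q) between the empirical pdf p of the llr samples
    # and a gaussian fit q; with symmetric=True, q is the symmetric gaussian
    # N(m, 2m) with the same mean m as the samples (assumes a positive mean)
    p, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    m = samples.mean()
    var = 2.0 * m if symmetric else samples.var()
    q = np.exp(-(centers - m) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    mask = (p > 0) & (q > 0)
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])) * width)
```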
according to remark [ check node update ], the pdf of the check - to - variable messages ( ) can be well approximated by a gaussian pdf , if the variable - to - check messages ( ) are well approximated by a gaussian pdf .this is because , in step 1 of ( [ check update 2 ] ) follows an approximate lognormal distribution only if is following a gaussian distribution . since from a punctured variable nodedoes not follow a gaussian distribution , also may not follow a gaussian distribution .the end result is that punctured nodes reduce the validity of the gaussian approximation for and . to illustrate the effect of punctured nodes via an example , we plot the kl divergence of variable - to - check and check - to - variable messages to the corresponding symmetric gaussian pdfs of rate a met - ldpc code with punctured nodes in fig . [fig : kl_p ] .it is clear from fig .[ fig : kl_varibale_p ] that the kl divergence of the variable - to - check message to the corresponding symmetric gaussian pdf from punctured nodes has a larger value than that from the unpunctured nodes in the same code .[ fig : kl_check_p ] shows the corresponding effect on the kl divergence of the check - to - variable messages .furthermore , the decrease of the kl divergence with decoding iterations in fig .[ fig : kl_p ] implies that and are following a gaussian distribution at later decoding iterations .however in general , to become gaussian it takes more decoding iterations than typical for a code without punctured nodes .one of the advantages of the met - ldpc codes is the addition of degree - one variable nodes to improve the code threshold .however we observe that degree - one variable nodes can affect the gaussian approximation for the check - to - variable messages . herewe explain the reason behind this , based on the assumptions required for gaussian approximations to be accurate . 
according to remark [ check node update ], the pdf of the check - to - variable messages ( ) can be well approximated by a gaussian pdf , if the variable - to - check messages ( ) are reasonably reliable given that are gaussian .even though received from edges connected to degree - one variable nodes are gaussian , they are not reasonably reliable .this is because variable nodes of degree - one never update their as they do not receive information from more than one neighboring check node .so the may not follow gaussian distribution .consequently , this may reduce the validity of the gaussian approximation to de for met - ldpc codes .similarly any check node that receives input messages from a degree - one variable node never outputs a distribution with an infinitely large mean in any of its edge messages updated with information from degree - one variable nodes .this is because , if are a set of independent random variables , then and as \rightarrow \infty$ ] , as is the case with degree - one variable nodes .we observed through the simulations that the degree - one variable nodes have a small impact on the accuracy of the gaussian approximation of both variable - to - check messages and check - to - variable messages .all the gaussian approximations we discussed in section [ gaussian app ] are based on the assumption that the pdfs of the variable - to - check and the check - to - variable messages can be well approximated by symmetric gaussian distributions .this assumption is quite accurate at the later decoding iterations , but least accurate in the early decoding iterations particularly at low snrs or with punctured variable nodes or with large check node degrees as we have observed in section [ validity of ga ] .making the assumption of symmetric gaussian distributions at the beginning of the de calculation produces large errors between the estimated and true distributions .even when the true distributions do become gaussian , the approximations give incorrect gaussian distributions due to the earlier errors .these errors propagate throughout the de calculation and cause significant errors in the final code threshold result for met - ldpc codes as we will see in section [ code design ga ] . through simulations , we observed that , when the channel snr is above the code threshold , the pdfs of the node messages ( i.e. , variable - to - check and the check - to - variable messages ) eventually do become symmetric gaussian distributions as decoding iterations proceeds .this implies that making the assumption of symmetric gaussian distributions in the later decoding iterations of the de calculation is reasonable .this motivates us to propose a hybrid density evolution ( hybrid - de ) algorithm for met - ldpc codes which is a combination of the full - de and the mean - based gaussian approximation ( approximation 1 ) .the key idea in hybrid - de is that we do not assume that the node messages are symmetric gaussian at the beginning of the de calculation , i.e. , hybrid - de method initializes the node message pdfs using the full - de and then switches to the gaussian approximation .there are two options for switching from the full - de to the gaussian approximation . 
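the two switching criteria are described in detail next ; as a preview , the overall control flow of hybrid - de can be sketched schematically as below . the routines passed in as arguments ( one full - de update , a mean extractor , the kl measure , and one gaussian - approximation update ) are placeholders to be supplied by an actual de implementation , and the limits are illustrative defaults .

```python
def hybrid_de(full_de_step, pdf_to_mean, kl_to_sym_gauss, ga_step, init_pdf,
              max_full_iters=20, kl_limit=1e-3, ga_iters=500):
    # hybrid density evolution (schematic): run full de first, then switch to
    # the single-parameter gaussian approximation once the node-message pdf is
    # close enough to a symmetric gaussian or a hard iteration cap is reached
    pdf = init_pdf
    for _ in range(max_full_iters):           # option one: hard iteration cap
        pdf = full_de_step(pdf)
        if kl_to_sym_gauss(pdf) < kl_limit:   # option two: soft kl limit
            break
    mean = pdf_to_mean(pdf)                   # hand over only the mean
    for _ in range(ga_iters):                 # cheap mean-based ga stage
        mean = ga_step(mean)
    return mean
```

in a threshold search , such a routine would be called repeatedly while bisecting on the channel parameter ; the iteration cap and the kl limit realize the hard and soft limits discussed below .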
as option one , we can impose a limit on the maximum number of full - de iterations in hybrid - de , in which we do a few full - de iterations and then switch to the gaussian approximation . although this is the simplest option , it gives a nice trade - off between accuracy and efficiency of threshold computation , as shown in fig . [ fig : met_threshold_it ] . the second option is that we can switch from full - de to the gaussian approximation after the pdfs of the node messages become nearly symmetric gaussian . the kl divergence can be used to check whether a message pdf is close to a symmetric gaussian distribution . thus , as the second option , we can impose a limit on the kl divergence between the actual node message pdf and a symmetric gaussian pdf . this is a more accurate way of switching than option one , because the value of the kl divergence depends on the shape of the pdf of the node messages ; the switching point therefore adapts to the conditions ( such as snr and code rate ) under consideration . each option has its own pros and cons . for instance , if the channel snr is well above the code threshold , the node messages can be close to a symmetric gaussian distribution before the limit imposed in option one on the full - de iterations is reached . thus we are doing extra full - de iterations that are not necessary . this reduces the benefit of hybrid - de by adding extra run - time . in such a situation , we can introduce the second option ( i.e. , the kl divergence limit ) in addition , to reduce run - time by halting the full - de iterations once the pdf is sufficiently gaussian . on the other hand , if the channel snr is below the code threshold , the decoder never converges to a zero - error probability as decoding iterations proceed , thus the node messages may never follow a symmetric gaussian distribution . this makes the option - two hybrid - de always remain at full - de , as it never meets the target kl divergence limit . thus a limit on the number of full - de iterations ( i.e. , option one ) is required in order to improve the run - time of hybrid - de . for these reasons , we can introduce both options into hybrid - de , where option one acts as a hard limit and option two acts as a soft limit . this is a particularly beneficial way to trade off accuracy against efficiency when computing the code threshold . we found that it is possible to impose both options in the hybrid - de to significantly improve computational time without significantly reducing the accuracy of the threshold calculation . t. j. richardson , m. a. shokrollahi , and r. l. urbanke , `` design of capacity - approaching irregular low - density parity - check codes , '' _ ieee trans . inform . theory _ , vol . 47 , no . 2 , pp . 619 - 637 , feb . 2001 . chung , t. j. richardson , and r. l. urbanke , `` analysis of sum - product decoding of low - density parity - check codes using a gaussian approximation , '' _ ieee trans . inform . theory _ , vol . 47 , no . 2 , pp . 657 - 670 , 2001 . l. schmalen and s. t. brink , `` combining spatially coupled ldpc codes with modulation and detection , '' in _ proc . itg conf . on systems , commun . and coding ( scc ) _ , vde , 2013 , pp . 1 - 6 .
this paper considers density evolution for low - density parity - check ( ldpc ) and multi - edge type low - density parity - check ( met - ldpc ) codes over the binary input additive white gaussian noise channel . we first analyze three single - parameter gaussian approximations for density evolution and discuss their accuracy under several conditions , namely at low rates , and with punctured and degree - one variable nodes . we observe that the assumption of a symmetric gaussian distribution for the density - evolution messages is not accurate in the early decoding iterations , particularly at low rates and with punctured variable nodes . thus single - parameter gaussian approximation methods produce very poor results in these cases . based on these observations , we then introduce a new density evolution approximation algorithm for ldpc and met - ldpc codes . our method is a combination of full density evolution and a single - parameter gaussian approximation , where we assume a symmetric gaussian distribution only after the density - evolution messages closely follow a symmetric gaussian distribution . our method significantly improves the accuracy of the code threshold estimation . additionally , the proposed method significantly reduces the computational time of evaluating the code threshold compared to full density evolution , thereby making it more suitable for code design . belief - propagation , density evolution , gaussian approximation , low - density parity - check ( ldpc ) codes , multi - edge type ldpc codes .
the phenotypic diversity of a species impacts its ability to evolve . in particular ,the importance of the variance of the population along a phenotypic trait is illustrated by the _fundamental theorem of natural selection _ , and the _ breeder s equation _ : the evolution speed of a population along a one dimensional fitness gradient ( or under artificial selection ) is proportional to the variance of the initial population .recently , the phenotypic variance of populations has also come to light as an important element to describe the evolutionary dynamics of ecosystems ( where many interacting species are considered ) . over the last decade ,the thematic of _ evolutionary rescue _ has emerged as an important question ( see also the seminal work of luria and delbrck ) , and led to a new interest in the phenotypic distribution of populations , beyond phenotypic variance .evolutionary rescue is concerned with a population living in an environment that changes suddenly. the population will survive either if some individuals in the population carry an unusual trait that turns out to be successful in the new environment , or if new mutants able to survive in the new environment appear before the population goes extinct ( see for a discussion on the relative effect of _ de novo mutations _ and _ standing variance _ in evolutionary rescue ) . in any case , the fate of the population will not be decided by the properties of the bulk of its density , but rather by the properties of the tail of the initial distribution of the populations , close to the favourable traits for the new environment .a first example of such problem comes from emerging disease : animal infections sometimes are able to infect humans .this phenomena , called zoonose , is the source of many human epidemics : hiv , sars , ebola , mers - cov , etc .a zoonose may happen if a pathogen that reaches a human has the unusual property of being adapted to this new human host .a second example comes from the emergence of microbes resistant to an antimicrobial drug that is suddenly spread in the environment of the microbe .this second phenomenon can easily be tested experimentally , and has major public health implications .most papers devoted to the genetic diversity of populations structured by a continuous phenotypic trait describe the properties of mutation - selection equilibria .it is however also interesting to describe the genetic diversity of population that are not at equilibrium ( _ transient dynamics _ ) : pathogen populations for instance are often in transient situations , either invading a new host , or being eliminated by the immune system .we refer to for a review on transient dynamics in ecology . for asexual populations structured by a continuous phenotypic trait ,several models exist , corresponding to different biological assumptions .if the mutations are modeled by a diffusion , the steady populations ( for a model close to , but where mutations are modelled by a laplacian ) are gaussian distributions .furthermore , have considered some transient dynamics for this model . 
in the model that we will consider ( see ) , the mutations are modelled by a non - local term .it was shown in ( see also ) that mutation - selection equilibria are then cauchy profiles ( under some assumptions ) , and this result has been extended to more general mutation kernels in , provided that the mutation rate is small enough .finally , let us notice that the case of sexual population is rather different , since recombinations by themselves can imply that a _ mutation - recombination equilibrium _ exists , even without selection .we refer to the infinitesimal model , and to for some studies on the phenotypic distribution of sexual species in a context close to the one presented here for asexual populations . in this article , we consider a population consisting of individuals structured by a quantitative phenotypic trait ( open interval of containing ) , and denote by its density . here , the trait is fully inherited by the offspring ( if no mutation occurs ) , so that is indeed rather a breeding value than a phenotypic trait ( see ) .we assume that the individuals reproduce with a rate , and die at a rate this means that the individuals with trait are those who are best adapted to their environment , and that the fitness decreases like a parabola around this optimal trait ( this is expected in the surroundings of a trait of maximal fitness ) .it also means that the strength of the competition modeled by the logistic term is identical for all traits . when an individual of trait gives birth , we assume that the offspring will have the trait with probability , and a different trait with probability . is then the probability that a mutation affects the phenotypic trait of the offspring .we can now define the growth rate of the population of trait ( that is the difference between the rate of _ births without mutation _ , minus the death rate ) as when a mutation affects the trait of the offspring , we assume that the trait of the mutated offspring is drawn from a law over the set of phenotypes with a density .the function then satisfies and we assume moreover that is bounded , , with bounded derivative and strictly positive on . the main assumption hereis that the law of the trait of a mutated offspring does not depend of the trait of its parent .this classical assumption , known as _ house of cards _ is not the most realistic , but it can be justified when the mutation rate is small ( see also ) .all in all , we end up with the following equation : this paper is devoted to the study of the asymptotic behaviour of the solutions of equation when is small and large and it is organized as follows . in the rest of section 1 the main results are quoted , first in an informal way , and then as rigourous statements .section 2 contains the proof of theorem [ theorem1 ] and its corollary and finally , in section 3 , theorem [ thm : smallt ] is proved .when we consider the solutions of , two particular profiles naturally appear : * _ a cauchy profile : _ for a given mutation rate small enough , one expects that will converge , as goes to infinity , to the unique steady - state of , wich is the following cauchy profile where is such that .this steady - state of is the so - called _ mutation - selection equilibrium _ of the _ house of cards_ model , which has been introduced in ( we also refer to for a broader presentation of existing results ) . 
*_ a gaussian profile : _ if , the solution of ( [ eqq0 ] ) can be written where , so that a gaussian - like behavior ( with respect to ) naturally appears in this case . surprisingly , we are not aware of any reference to this property in the population genetics literature . we will show that , as suggested by the above arguments , we can describe the phenotypic distribution of the population , that is , when either ( large time for a given mutation rate ) , or ( small mutation rate , for a given time interval ] , and ( see the definition of in eq . ( [ eqq4 ] ) below ) . the initial condition corresponds to a population at the mutation - selection equilibrium whose environment suddenly changes ( the optimal trait originally in moves to at ) . this example is guided by the evolutionary rescue experiments described in subsection [ subsec : sel - mut - comp ] , where the sudden change is obtained by the addition of e.g. salt or antibiotic to a bacterial culture . we describe two phases of the dynamics of the population : * _ large time : cauchy profile ._ we show that is asymptotically ( when the mutation rate is small ) close to provided . the population is then a time - independent cauchy distribution for large times . this theoretical result is consistent with numerical results : we see in fig . [ fig1 ] that is well described by , as soon as , which is confirmed by the value of for given by fig . [ fig2 ] . * _ short time : gaussian profile ._ we also show that is asymptotically ( when the mutation rate is small ) close to provided . the population then has a gaussian - type distribution for short ( but not too short ) times . this theoretical result is consistent with numerical results : we see in fig . [ fig1 ] that is well described by for ] given by fig . [ fig2 ] . [ figure [ fig1 ] : several snapshots , from to , of the same simulation of for ( see the text for a complete description ) . in each of these plots , the blue ( resp . red , black ) line represents ( resp . , ) . note that the scales of both axes change from one graph to the other , to accommodate the dynamics of the solution . ]
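for readers who want to reproduce the qualitative picture of fig . [ fig1 ] , the following is a minimal numerical sketch of a selection - mutation - competition model with a house - of - cards mutation kernel , of the general form described above . the specific choices below ( birth rate 1 , quadratic death term , logistic competition through the total mass , a standard gaussian mutant - trait law , and the grid , time step and initial bump ) are illustrative assumptions and not the exact setting of the simulations reported here .

```python
import numpy as np

# illustrative parameters (not the paper's exact setting)
eps = 0.01                                           # mutation probability
x = np.linspace(-2.0, 2.0, 401)                      # trait space
gamma = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)   # mutant-trait density
u = np.exp(-(x - 1.0) ** 2 / 0.1)                    # bump away from the optimum
u /= np.trapz(u, x)

dt, t_final = 1e-3, 30.0
record_at = {int(1.0 / dt), int(5.0 / dt), int(t_final / dt) - 1}
snapshots = {}
for n in range(int(t_final / dt)):
    rho = np.trapz(u, x)                             # total population size
    growth = (1.0 - eps) - x**2 - rho                # births w/o mutation minus deaths
    u = u + dt * (growth * u + eps * gamma * rho)    # house-of-cards mutation influx
    u = np.maximum(u, 0.0)
    if n in record_at:
        snapshots[round((n + 1) * dt, 3)] = u.copy() # early vs. late profiles
```

plotting the early snapshot against a gaussian fit and the late one against a cauchy fit reproduces , qualitatively , the transition between the two profiles discussed above .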
[ figure [ fig2 ] : ( see the text for a complete description ) . the red line represents , while the black line represents . ] another way to look at these results is to consider and as two parameters , and to see the approximations presented above as approximations of for some set of parameters : for , while for . we have represented these sets in fig . [ fig3 ] . [ figure [ fig3 ] : the set ( in blue ) where the approximation holds provided that is small enough , and the set ( in red ) where the approximation holds provided that is small enough . ] as described in subsection [ subsec : sel - mut - comp ] , the phenotypic distribution of species is involved in many ecological and epidemiological problems . our study is a general analysis of this problem and we do not have a particular application in mind . an interesting and ( to our knowledge ) new feature described by our study is that the tails of the trait distribution in a population can change drastically between `` short times '' , that is , and `` large times '' , that is : the distribution is initially close to a gaussian distribution , with small tails , and then converges to a thick - tailed cauchy distribution . this result could have significant consequences for _ evolutionary rescue _ : the tails of the distribution then play an important role . quantifying the effect of this property of the tails of the distributions would however require further work , in particular on the impact of stochasticity ( the number of pathogens is typically large , but finite ) . the plasticity of the pathogen ( see ) may also play an important role . here we state the two main theorems of the paper , each of them followed by a corollary . to do so we start by defining the linear operator and denoting by the dominant eigenvalue of and by the corresponding eigenvector ( see proposition [ proposition1 ] ) . [ theorem1 ] let us assume that the initial datum is integrable on ( , b[ ] , , and is a bounded , function with bounded derivative , such that and . we begin with the study of the linear operator associated to eq . ( [ eqq0 ] ) . let us recall that is the operator corresponding to the linear part in eq . ( [ eqq0 ] ) . it acts on functions of the variable . we begin with a basic lemma which enables us to define the semigroup associated with this operator . [ lemanou ] the linear operator , defined on and with domain , generates an irreducible positive -semigroup ( denoted from now on by ) . the multiplication linear operator is the generator of a positive -semigroup .
since is strictly positive, is a positive bounded perturbation whose only invariant closed ideals are and the whole space .so is irreducible ( see , corollary 9.22 ) .next , we present a proposition which gives information about the spectrum of .[ proposition1 ] the linear operator has only one eigenvalue .it is a strictly dominant algebraically simple eigenvalue and a pole of the resolvent , with corresponding normalized positive eigenvector moreover , for small enough , .the rest of the spectrum of the linear operator is equal to the interval . ] for such that \subset i ] is contained in the spectrum of .the claim follows from the fact that the spectrum is a closed set .on the other hand , notice that ( for ) , is not an eigenvalue , since the potential corresponding eigenfunction is not an integrable function on ( remember that does not vanish ) .let us now compute the resolvent operator of , that is , let us try to solve the equation for , defining , ( [ eq3 ] ) gives integrating , we get and belongs to the resolvent set unless the factor of on the left hand side vanishes . therefore .since for any real number , the function is continuous , strictly decreasing , and satisfies ( recall that ) and , we see that there is a unique real solution of in .we denote it by .taking in ( [ eq3 ] ) , we see that is an eigenvalue with corresponding normalized strictly positive eigenvector taking and we see that the left hand side in ( [ eq5 ] ) vanishes , whereas the right hand side is strictly negative , so that has no solution and hence is algebraically simple .+ indeed , it also follows from ( [ eq5 ] ) that the range of coincides with the kernel of the linear form defined on by the function ( which is the eigenvector corresponding to the eigenvalue of the adjoint operator ) and hence it is a closed subspace of .therefore , is a pole of the resolvent ( see theorem a.3.3 of ) .furthermore , since we see that for small enough , and hence .substituting by in the characteristic equation we have that the imaginary part is . 
since , there are no non real solutions of note that we now write an expansion of the eigenvalue .[ proposition2 ]let be the dominant eigenvalue of the operator .then let us consider the change of variable where .we have then where we have used and have denoted and .+ since and we obtain which implies since we have for small enough .+ therefore , using in we get finally , by and , let us start this subsection with a lemma in which properties of the spectrum of are used to study the asymptotic behavior of the semigroup generated by .[ lemma2.0 ] * the essential growth bound of the semigroup generated by is * the growth bound of the semigroup generated by is * is a compact ( one rank ) perturbation of then where is the semigroup generated by ( see ) .+ since is a multiplication operator , and the result follows .* by proposition [ proposition1 ] , the spectral bound of is and the spectral mapping theorem holds for any positive on ( see ) .let us now write , for a positive non identically zero , where is the eigenvector corresponding to the eigenvalue 0 of and is the spectral projection of on the kernel of ( note that since is positive and is the generator of an irreducible positive semigroup ) .we also define .the following lemma gives the asymptotic behavior of : [ lemma4 ] let us assume that is a positive integrable function on .then there exist positive constants , ( independent of but depending on ) such that moreover , recall that where is the eigenvector of the adjoint operator corresponding to the eigenvalue , normalized such that .since we see that let us start by bounding the denominator from above . using that , by proposition [ proposition2 ] , for small enough , we obtain the bound similarly , since for small enough , , so that }\gamma(x ) \int_{-a}^{a}\frac{\,dx}{\left(2(\gamma(0)\pi\varepsilon)^2+x^2\right)^2 } \geq \frac{k_3}{\varepsilon^2 } .\ ] ] for the numerator we have , on the one hand , where the right hand side tends to when goes to by an easy application of the lebesgue dominated convergence theorem ( note that the integrand is bounded above by ) .on the other hand , notice that there exists an interval which does not contain such that .then , since and there exists a constant such that by and , and by and , and small enough , this completes the proof .if is bounded below by a positive number in a neighbourhood of , then the lower estimate can be improved using that indeed , for small enough so in this case , for small enough , for some constant independent of .the next two lemmas enable to estimate ( defined above lemma [ lemma4 ] ) .in the first one , the dependence w.r.t . is not explicit .[ lemma1 ] for small enough and any there exists such that since by lemma [ lemma2.0 ] , we can apply theorem in , and get the estimate proposition [ proposition2 ] gives then the statement .we now give an estimate of the dependence of on , provided that is chosen far enough from its limit value .more precisely , we choose [ lem6 ] for small enough there exists a constant independent of and of such that since the proof of this result is quite technical we delay it to the end of this section ( subsection [ app ] ) .we now rewrite equation ( [ eqq0 ] ) as we look for solutions of ( [ eq6 ] ) ( with positive initial condition ) which can be written as , with a function of time such that . 
substituting in ( [ eq6 ] ) , it follows that is indeed a solution of eq .( [ eqq0 ] ) if satisfies the following initial value problem for an ordinary differential equation : or equivalently the two next lemmas explain the asymptotic behavior of .in the first one , the dependence w.r.t . of the constants is not explicit .[ lemma3 ] for small enough and any , there exists a positive constant such that the solution of is explicitly given by then where for the last inequality we have used that the denominator is a positive continuous function bounded below ( it takes the value for and its limit is when goes to infinity ) .we also used the following estimate for the numerator : since , by lemma [ lemma1 ] , , then in order to give an estimate of the dependence of w.r.t . we need to bound the denominator more precisely and to take a value of separated of its limit value . as in lemma[ lem6 ] , we choose [ lem5 ] for small enough , there exist constants and ( independent of ) such that using lemma [ lemma1 ] and the fact that the second term is positive we see that for any such that ( notice that the left hand side in is an increasing function of ) .this indeed happens if and since the second condition is weaker than the first one for small enough , holds whenever is such that , i.e. , and is sufficiently small .so , is also a lower bound in , and we finally have using the bound on the numerator given in the proof of lemma [ lemma3 ] , the previous estimate and using also lemma [ lem6 ] , lemma [ lemma4 ] and proposition [ proposition2 ] , we obtain we are now in position to conclude the proof of theorem [ theorem1 ] .we recall that satisfies the integral equation from which the following identity follows i.e. , is a solution of the variations of constants equation .+ on the other hand , the nonlinear part of the right hand side of ( [ eq6 ] ) is a locally lipschitz function of . from this uniqueness follows , whereas global existence is clear from the previous lemmas .finally , a standard application of the triangular inequality and lemmas [ lemma4 ] , [ lemma1 ] and [ lemma3 ] gives using lemmas [ lem6 ] and [ lem5 ] in the second inequality of , the last statement of theorem [ theorem1 ] follows . by the triangular inequality , hence by proposition [ proposition2 ] and theorem [ theorem1 ] , we only need to estimate the last term , for which we have let us bound the three terms . for the first one we have , by proposition [ proposition2 ] , for the second one , by proposition [ proposition2 ] and , for the third one , similarly to the proof of proposition [ proposition2 ] , denoting by , let us consider the linear initial value problem where .let us recall that and ( see proposition [ proposition1 ] ) . 
applying the laplace transform with respect to to the previous equation, we obtain the identity (\mu , x)-u_{0}(x)= ( a_{\varepsilon}(x)-\lambda_{\varepsilon})\,\mathcal{l}[u](\mu , x)+\varepsilon\ , \gamma(x)\,\int_{i}\mathcal{l}[u](\mu , y)\ , \,dy,\ ] ] that is (\mu , x)=\frac{u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}+\frac{\varepsilon \,\gamma(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\int_{i}\mathcal{l}[u](\mu , y)\,dy.\ ] ] integrating ( with respect to ) , we obtain (\mu , x)\,dx=\frac{\int_{i}\frac{u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,dx } { 1- \int_{i}\frac{\varepsilon \gamma(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,dx}=\frac{\int_{i}\frac{u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,dx } { \varepsilon \mu \int_{i}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))(\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x))}\,dx},\ ] ] where we have used , for the second equality , . substituting in ,we get (\mu , x)=\frac{u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}+ \frac{\int_{i}\frac{u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,dx } { \mu \int_{i}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))(\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x))}\,dx}\frac{\gamma(x)}{(\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x))}.\ ] ] this laplace transform is analytic for re ( note that is positive and tends to zero when tends to zero ) .then , for , we know , by the inversion theorem , that (\mu , x)\,e^{\mu t}\ , \,d\mu.\ ] ] using the theorem of residues , we can shift the integration path to the left in order to obtain , for any (\mu , x)e^{\mu t}\big)+\frac{1}{2\pi i}\int_{s'-i\infty}^{s'+i\infty}\mathcal{l}[u](\mu , x)e^{\mu\ , t}\ , \,d\mu,\ ] ] where (\mu , x)e^{\mu t}\big)&= & \lim_{\mu \to 0 } \mu \mathcal{l}[u](\mu , x)\\ \\ & = & \lim_{\mu \to 0 } \left(\frac{\mu u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}+\frac { \int_{i}\frac{u_{0}(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\,{d}x } { \int_{i}\frac{\gamma(x)}{(\lambda_{\varepsilon}-a_{\varepsilon}(x))(\mu+\lambda_{\varepsilon } -a_{\varepsilon}(x))}\,{d}x}\,\frac{\gamma(x)}{\mu+\lambda_{\varepsilon}-a_{\varepsilon}(x)}\right)\\ \\ & = & \frac{\langle u_0 , \psi_{\varepsilon}^{*}\rangle}{\langle \psi_{\varepsilon } , \psi_{\varepsilon}^{*}\rangle}\,\psi_{\varepsilon}(x)=c_{u_0}\,\psi_{\varepsilon}(x ) \end{array}\ ] ] ( let us recall that and ) .thus , we obtain that , for , (s'+i\tau , x)\ , e^{(s'+i\tau)\ , t}\ , \,d\tau .\ ] ] we now define so that we can write (s'+i\tau , x)e^{(s'+i\tau ) t } \,d\tau&= & \frac{1}{2\pi } u_0(x)e^{s't}\int_{-\infty}^{\infty}\frac{e^{i\tau t}}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i \tau}\,d\tau\\ \\ & + & \frac{1}{2\pi } \gamma(x)e^{s't}\int_{-\infty}^{\infty}\frac{g_{\varepsilon}(s'+i \tau)e^{i\tau t}}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i \tau}\,d\tau\\ \\ & = & e^{-(\lambda_{\varepsilon}-a_{\varepsilon}(x))t}u_{0}(x)\\ \\ & + & \frac{1}{2\pi } \gamma(x)e^{s't}\int_{-\infty}^{\infty}\frac{g_{\varepsilon}(s'+i \tau)e^{i\tau t}}{s'+\lambda_{\varepsilon}-a_{\varepsilon}(x)+i \tau}\,d\tau , \end{array}\ ] ] where we used the estimate and the identity ( for ) .we now would like to find a bound for .+ we see that and since . for the numerator of we can estimate we now find a lower bound for the denominator of .we use the elementary estimate and we start with the real part . 
where in the last inequality we used the estimates , .we also used that , since is strictly positive and tends to zero when goes to zero , there exists such that we have .+ in a similar way , for the imaginary part , defining we see that and then , using , and now , since and are strictly positive continuous functions , and tends to a positive limit when goes to , there exists a constant ( independent of ) such that . choosing , where we can write finally , going back to and using , we end up with start here the proof of theorem [ thm : smallt ] . from now on , will designate a strictly positive constant depending only on some upper bounds on , , a lower bound on ( see remark [ rem : c ] ) , and on .thanks to the variation of the constant formula , the solution of ( [ eqq0 ] ) satisfies : where obtaining a precise estimate on is the key to prove theorem [ thm : smallt ] .if we differentiate with respect to , we get which implies , since , that thanks to , and the nonnegativity of , one gets while for some constants , for .note that here we used a lower bound on around ( we have assumed that and that is continuous ) . thanks to this lower bound , becomes thanks to and , we can estimate the second term of as follows : in order to estimate , we proceed as follows : this estimate combined with implies that satisfies since , we can estimate where thanks to ( and the definition of and : see and respectively ) , we see that so that we will now estimate each of the terms on the right hand side of .we start by estimating the third term on the right hand side , thanks to and an integration by parts : \nonumber\\ & \leq & \frac{c}{1-\varepsilon}\left[\frac{e^{(1-\varepsilon)t}}{t}+t\max_{s\in [ 1,t]}\frac{e^{(1-\varepsilon)s}}{s^2}\right]\nonumber\\ & \leq & \frac{2c}{(1-\varepsilon)t}e^{(1-\varepsilon)t},\label{eq : esttruc}\end{aligned}\ ] ] provided is large enough , and is small enough ( to ensure that }\frac{e^{(1-\varepsilon)s}}{s^2}=\frac{e^{(1-\varepsilon)t}}{t^2}$ ] ) .we now estimate the second term on the right hand side of , using an integration by parts : and then , applying an estimate similar to the one used to obtain , we get , provided that is large enough , and that is small enough , finally , we estimate the last term of the right hand side of , thanks to estimates and : where we have used an integration by part to obtain the last inequality .[ [ estimation - of - lefte1-varepsilont - int_0tmathcal - isds - fracsqrt - tf00-int_i - e - x2dxright ] ] estimation of ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ using the bounds and , we can show that as soon as , so that identity leads to the bound notice also , as this is going to be useful further on , that for , thanks to and , in this last part of the proof , we consider times .we estimate let us start by estimating the second term on the right hand side of , thanks to estimate : we now rewrite the first term on the right hand side of , using formula and : and then , thanks to , and , we estimate and then where we have used the fact that . thanks to and , becomes : theorem [ thm : smallt ] follows from this estimate . * acknowledgement * : the research leading to this paper was funded by the french `` anr blanche '' project kibord : anr-13-bs01 - 0004 , and `` anr jcjc '' project modevol anr-13-js01 - 0009 . a.c . 
and s.c .were partially supported by the ministerio de ciencia e innovacin , grants mtm2011 - 27739-c04 - 02 and mtm2014 - 52402-c3 - 2-p .w. arendt , a , grabosch , g. greiner , i. groh , h.p .lotz , u. moustakas , r. nagel , f. neubrander , u. schlotterbeck .one - parameter semigroups of positive operators .lecture notes in mathematics , 1184 .springer - verlag , berlin , 1986 .d. i. bolnicke , p. amarasekare , m. s. arajo , r. brger , j. m. levine , m. novak , v. h. w. rudolf , s. j. schreiber , m. c. urban , d. a. vasseur , why intraspecific trait variation matters in community ecology ._ trends ecol ._ 26 , 183192 ( 2011 ) .o. diekmann , j. a. p. heesterbeek , j. a. metz , on the definition and the computation of the basic reproduction ratio in models for infectious diseases in heterogeneous populations . j. math. biol . * 28*(4 ) , 365 - 382 ( 1990 ) .g. martin , r. aguile , j. ramsayer , o. kaltz , o. ronce , the probability of evolutionary rescue : towards a quantitative comparison between theory and evolution experiments .b _ * 368 * , 20120088 ( 2012 ) . c. violle, b. j. enquist , b. j. mcgill , l. jiang , c. h. albert , c. hulshof , v. jung , j. messier , the return of the variance : intraspecific variability in community ecology ._ trends ecol ._ * 27*(4 ) , 24452 ( 2012 ) .
in this paper , we study the asymptotic ( large time ) behavior of a selection - mutation - competition model for a population structured with respect to a phenotypic trait , when the rate of mutation is very small . we assume that the reproduction is asexual , and that the mutations can be described by a linear integral operator . we are interested in the interplay between the time variable and the rate of mutations . we show that depending on , the limit with can lead to population number densities which are either gaussian - like ( when is small ) or cauchy - like ( when is large ) .
graphs are used to model infrastructure networks , the world wide web , computer traffic , molecular interactions , ecological systems , epidemics , citations , and social interactions , among others . despite the differences in the motivating applications ,some topological structures have emerged to be important across all these domains .triangles , which can be explained by homophily ( people become friends with those similar to themselves ) and transitivity ( friends of friends become friends ) .this abundance of triangles , along with the information they reveal , motivates metrics such as the _ clustering coefficient _ and the _ _ratio__s .the triangle structure of a graph is commonly used in the social sciences for positing various theses on behavior .triangles have also been used in graph mining applications such as spam detection and finding common topics on the www .the authors earlier work used distribution of degree - wise clustering coefficients as the driving force for a new generative model , blocked two - level erds - rnyi . the authors have also observed that relationships among degrees of triangle vertices can be a descriptor of the underlying graph .the information about triangles is usually summarized in terms of _ clustering coefficients_. let be a simple undirected graph with vertices and edges .let denote the number of triangles in the graph and be the number of _ wedges _ ( a path of length ) .the most common measure is the _ global clustering coefficient_ , which measures how often friends of friends are also friends .we show that we can achieve speed - ups of up to four orders of magnitude with extremely small errors ; see and .our approach is not limited to clustering coefficients , however .a per - vertex clustering coefficient , , is defined as the fraction of wedges centered at that participate in triangles .the mean value of is called the _ local clustering coefficient _ , . a more nuanced view of triangles is given by the _ degree - wise clustering coefficients_. this is a set of values indexed by degree , where is the average clustering coefficient of degree vertices . in addition to degree distribution ,many graphs are characterized by plotting the clustering coefficients , i.e. , versus .we summarize our notation and give formal definitions in .@ & number of vertices + & number of vertices of degree + & number of edges + & degree of vertex + & set of degree- vertices + & total number of wedges + & number of wedges centered at vertex + & total number of triangles + & number of triangles incident to vertex + & number of triangles incident to a degree vertex + @ & global clustering coefficient + & clustering coefficient of vertex + & local clustering coefficient + & degree - wise clustering coefficient + there has been significant work on enumeration of all triangles .recent work by cohen and suri and vassilvitskii give parallel implementations of these algorithms .arifuzzaman et al . give a massively parallel algorithm for computing clustering coefficients .enumeration algorithms however , can be very expensive , since graphs even of moderate size ( millions of vertices ) can have an extremely large number of triangles ( see e.g. 
, ) .eigenvalue / trace based methods have been used by tsourakakis and avron to compute estimates of the total and per - degree number of triangles .however , computing eigenvalues ( even just a few of them ) is a compute - intensive task and quickly becomes intractable on large graphs .most relevant to our work are sampling mechanisms .tsourakakis et al . started the use of sparsification methods , the most important of which is doulion .this method sparsifies the graph by keeping each edge with probability ; counts the triangles in the sparsified graph ; and multiplies this count by to predict the number of triangles in the original graph .various theoretical analyses of this algorithm ( and its variants ) have been proposed .one of the main benefits of doulion is that it reduces large graphs to smaller ones that can be loaded into memory . however, their estimate can suffer from high variance .alternative sampling mechanisms have been proposed for streaming and semi - streaming algorithms . yet, all these fast sampling methods only estimate the number of triangles and give no information about other triadic measures . in this paper, we introduce the simple yet powerful technique of _ wedge sampling _ for counting triangles .wedge sampling is really an algorithmic template , as opposed to a single algorithm , as various algorithms can be obtained from the same basic structure .some of the salient features of this method are : * versatility of wedge sampling : * we show how to use wedge sampling to approximate the various clustering coefficients : , , and . from these, we can estimate the numbers of triangles : and .moreover , our techniques can even be extended to find uniform random triangles , which is useful for computing triadic statistics .other sampling methods are usually geared towards only and .* provably good results with precise bounds : * the mathematical analysis of this method is a direct consequence of standard chernoff - hoeffding bounds .we obtain explicit time - error - accuracy tradeoffs . in other words , if we want an estimate for with error at most 10% with probabilityat least 99.9% ( say ) , we know we need only 380 wedge samples .this estimate is _ independent of the size of the graph _, though the preprocessing required by our method is linear in the number of edges ( to obtain the degree distribution ) . *fast and scalable : * our estimates converge rapidly , well within the theoretical bounds provided .although there is no other method that competes directly with wedge sampling , we compare with doulion .our experimental studies show that our wedge sampling algorithm is far faster , when the variances of the two methods are similar ( see in the appendix ) .we do not compare to eigenvalue - based approaches since they are much more expensive for larger graphs .we present the general method of wedge sampling for estimating clustering coefficients . in later sections, we instantiate this for different algorithms . we say a wedge is _ closed _ if it is part of a triangle ; otherwise , we say the wedge is _open_. thus , in , -- is an open wedge , while -- is a closed wedge .the middle vertex of a wedge is called its _ center _ , i.e. 
, wedges -- and -- are centered at .wedge sampling is best understood in terms of the following thought experiment .fix some distribution over wedges and let be a random wedge .let be the indicator random variable that is if is closed and otherwise .denote ] .then for , we have hence , if we set , then < \delta ] as the probability that a uniform random wedge is closed or , alternately , the fraction of closed wedges . to generate a uniform random wedge , note that the number of wedges centered at vertex is and .we set to get a distribution over the vertices .note that the probability of picking is proportional to the number of wedges centered at .a uniform random wedge centered at can be generated by choosing two random neighbors of ( without replacement ) .[ clm : random ] suppose we choose vertex with probability and take a uniform random pair of neighbors of .this generates a uniform random wedge .consider some wedge centered at vertex .the probability that is selected is .the random pair has probability of .hence , the net probability of is . compute for all vertices select random vertices ( with replacement ) according to probability distribution defined by .for each selected vertex , choose a uniform random pair of neighbors of to generate a wedge .output fraction of closed wedges as estimate of . shows the randomized algorithm -wedge sampler for estimating in a graph . combining the bound of with, we get the following theorem .[ thm : kappa ] set .the algorithm -wedge sampler outputs an estimate for the clustering coefficient such that with probability greater than .note that the number of samples required is _ independent of the graph size _ , but computing does depend on the number of edges , . to get an estimate on , the number of triangles , we output , which is guaranteed to be within of with probability greater than .we implemented our algorithms in c and ran our experiments on a computer equipped with a 2.3ghz intel core i7 processor with 4 cores and 256 kb l2 cache ( per core ) , 8 mb l3 cache , and an 8 gb memory .we performed our experiments on 13 graphs from snap and per private communication with the authors of . in all cases ,directionality is ignored , and repeated and self - edges are omitted .the properties of these matrices are presented in , where , , , and are the numbers of vertices , edges , wedges , and triangles , respectively .and and correspond to the global and local clustering coefficients .the last column reports the times for the enumeration algorithm .our enumeration algorithm is based on the principles of , such that each edge is assigned to its vertex with a smaller degree ( using the vertex numbering as a tie - breaker ) , and then vertices only check wedges formed by edges assigned to them for closure . as seen in wedge samplingworks orders of magnitude faster than the enumeration algorithm .detailed results on times can be seen in in the appendix .the timing results show tremendous savings ; for instance , wedge sampling only takes 0.015 seconds on ` as - skitter ` while full enumeration takes 90 seconds .show the accuracy of the wedge sampling algorithm .again detailed results on times can be seen in in the appendix . at 99.9% confidence ( ) , the upper bound on the error we expect for 2k , 8k , and 32k samples is .043 , .022 , and .011 , respectively .most of the errors are much less than the bounds would suggest .for instance , the maximum error for 2k samples is .007 , much less than that 0.43 given by the upper bound . 
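to make the wedge sampling template above concrete, the following python sketch (a minimal illustration under our own naming conventions, not the authors' c implementation) estimates the global clustering coefficient by drawing wedge centers with probability proportional to the number of wedges they span and testing each sampled wedge for closure. the sample size follows the hoeffding-type bound quoted above, which for eps = 0.1 and delta = 0.001 gives roughly the 380 samples mentioned in the introduction. the graph is assumed to be given as a dict mapping each vertex to a list of its neighbors.

....
import math
import random

def wedge_counts(adj):
    # number of wedges centered at v is d_v * (d_v - 1) / 2
    return {v: len(nb) * (len(nb) - 1) // 2 for v, nb in adj.items()}

def sample_size(eps, delta):
    # hoeffding-type bound: k >= 0.5 * eps**-2 * ln(2/delta) samples suffice
    return math.ceil(0.5 * eps ** -2 * math.log(2.0 / delta))

def kappa_wedge_sampler(adj, eps=0.1, delta=0.001, rng=random):
    """estimate the global clustering coefficient kappa (fraction of closed
    wedges) and, from it, the total number of triangles t = kappa * w / 3."""
    w = wedge_counts(adj)
    centers = [v for v in adj if w[v] > 0]
    weights = [w[v] for v in centers]
    k = sample_size(eps, delta)
    picked = rng.choices(centers, weights=weights, k=k)  # p(v) ~ wedges at v
    closed = 0
    for v in picked:
        u, x = rng.sample(adj[v], 2)    # uniform wedge centered at v
        if x in adj[u]:                 # closure test: is edge (u, x) present?
            closed += 1
    kappa = closed / k
    total_wedges = sum(weights)
    return kappa, kappa * total_wedges / 3.0
....

note that the number of sampled wedges is independent of the graph size; only the preprocessing (the degree counts behind the weights) is linear in the number of edges. replacing the weighted choice of centers by a uniform one turns the same loop into the local clustering coefficient estimator described next.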
shows the fast convergence of the clustering coefficient estimate ( for the graph amazon0505 ) as the number of samples increases .the dashed line shows the error bars at 99.9% confidence . in all our experiments ,the real error is always much smaller than what is indicated by .we now demonstrate how a small change to the underlying distribution on wedges allows us to compute the local clustering coefficient , .shows the procedure -wedge sampler .the only difference between -wedge sampler and -wedge sampler is in picking random vertex centers .vertices are picked uniformly instead of from the distribution .pick uniform random vertices . for each selected vertex , choose a uniform random pair of neighbors of to generate a wedge .output the fraction of closed wedges as an estimate for .[ thm : lcc ] set .the algorithm -wedge sampler outputs an estimate for the clustering coefficient such that with probability greater than .let us consider a single sampled wedge , and let be the indicator random variable for the wedge being closed .let be the uniform distribution on wedges .for any vertex , let be the uniform distribution on pairs of neighbors of . observe that }= \pr_{v \sim { { \cal v}}}[\pr_{(u , u ' ) \sim { { \cal n}}_v}[\textrm{wedge is closed}]]\ ] ] we will show that this is exactly . \\ & = & { \hbox{\bf e}}_{v \sim { { \cal v } } } [ \textrm{frac . of closed wedges centered at } ] \\ & = & { \hbox{\bf e}}_{v \sim { { \cal v } } } [ { \hbox{\bf e}}_{(u , u ' ) \sim { { \cal n}}_v}[x(\{(u , v),(u',v)\ } ) ] ] \\ & = & \pr_{v \sim { { \cal v}}}[\pr_{(u , u ' ) \sim { { \cal n}}_v}[\textrm{ is closed } ] ] \\ & = & { \hbox{\bf e}}[x]\end{aligned}\ ] ] for a single sample , the probability that the wedge is closed is exactly .the bound of completes the proof . and present the results of our experiments for computing the local clustering coefficients .more detailed results can be found in the appendix . experimental setup andthe notation are the same as in .the results again show that wedge sampling provides accurate estimations with tremendous improvements in runtime . in this case , we come closer to the theoretical error bounds . for instance , the largest different in the 2k sample case is 0.017 , which is much closer to the theoretical error bound of 0.043 .we demonstrate the power of wedge sampling by estimating the degree - wise clustering coefficients . we also provide a sampling algorithm to estimate , the number of triangles incident to degree- vertices .shows procedure -wedge sampler .pick uniform random vertices of degree . for eachselected vertex , choose a uniform random pair of neighbors of to generate a wedge .output the fraction of closed wedges as an estimate for .[ thm : dcc ] set .the algorithm -wedge sampler outputs an estimate for the clustering coefficient such that with probability greater than .the proof of the following is similar to that of .since is the average clustering coefficient of a degree vertex , we can apply the same arguments as in . by modifying the template given in , we can also get estimates for .now , instead of simply counting the fraction of closed wedges , we take a weighted sum .describes the procedure -wedge sampler .we let denote the total number of wedges centered at degree vertices .pick uniform random vertices of degree . for eachselected vertex , choose a uniform random pair of neighbors of to generate a wedge . 
for each wedge generated , let be the associated random variable such that .output as the estimate for .[ thm : dcc2 ] set .the algorithm -wedge sampleroutputs an estimate for the with the following guarantee : with probability greater than . for a single sampled wedge , we define .we will argue that the expected value of ] , partition the set of wedges centered on degree vertices into four sets .the set ( ) contains all closed wedges containing exactly degree- vertices .the remaining open wedges go into . for a sampled wedge , if , then . if , then nothing is added .the wedge is a uniform random wedge from those centered on degree- vertices .hence , = |s|^{-1}(|s_1| + |s_2|/2 + |s_3|/3) ] . algorithms in the previous section present how to compute the clustering coefficient of vertices of a given degree . in practice, it may be sufficient to compute clustering coefficients over bins of degrees .wedge sampling algorithms can still be extended for bins of degrees by a small adjustment of the sampling procedure . within each bin , we weight each vertex according to the number of wedges it produces .this guarantees that each wedge in the bin is equally likely to be selected. shows results on three graphs for clustering coefficients ; the remaining figures are shown in the in the appendix .the data is grouped in logarithmic bins of degrees , i.e. , .in other words , form the -th bin .we show the estimates with increasing number of samples . at 8k samples ,the error is expected to be less than 0.02 , which is apparent in the plots .observe that even 500 samples yields a reasonable estimate in terms of the differences by degree .shows the time to calculate all values , showing a tremendous savings in runtime as a result of using wedge sampling . in this figure , runtimes are normalized with respect to the runtimes of full enumeration .as the figure shows , wedge sampling takes only a tiny fraction of the time of full enumeration especially for large graphs .while most triadic measures focus on the number of triangles and their distribution , the triangles themselves , not only their count , can reveal a lot of information about the graph .the authors recent work has looked at the relations among the degrees of the vertices of triangles . in these experiments ,a full enumeration of the triangles was used , which limited the sizes of the graphs we could use . to avoid this computational burden , a uniform sampling of the triangles can be used . _wedge sampling provides a uniform sampling the triangles of a graph_. 
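as a concrete illustration of this claim, the sketch below (our own code, not from the paper) draws uniform random wedges exactly as in the global clustering coefficient sampler and simply keeps the closed ones as a uniform sample of triangles; a second helper estimates the fraction of sampled triangles whose maximum-to-minimum degree ratio exceeds a threshold, standing in for the (elided) ratio used in the experiment discussed below. the threshold value and the draw limit are placeholders of ours.

....
import random

def sample_triangles(adj, num_triangles, max_draws=10**7, rng=random):
    """keep the closed wedges found during uniform wedge sampling; each one
    is an independent, uniform random triangle."""
    w = {v: len(nb) * (len(nb) - 1) // 2 for v, nb in adj.items()}
    centers = [v for v in adj if w[v] > 0]
    weights = [w[v] for v in centers]
    triangles, draws = [], 0
    while len(triangles) < num_triangles and draws < max_draws:
        draws += 1
        v = rng.choices(centers, weights=weights, k=1)[0]
        u, x = rng.sample(adj[v], 2)
        if x in adj[u]:                      # closed wedge = one triangle
            triangles.append((u, v, x))
    return triangles

def high_ratio_fraction(adj, triangles, ratio=4.0):
    """fraction of sampled triangles whose max/min vertex degree ratio is at
    least `ratio` (the threshold in the text is elided; 4.0 is a placeholder)."""
    hits = 0
    for tri in triangles:
        degs = sorted(len(adj[v]) for v in tri)
        if degs[-1] >= ratio * degs[0]:
            hits += 1
    return hits / len(triangles)
....

since a uniform random wedge is closed with probability kappa, collecting q triangles costs about q / kappa wedge samples, in line with the remark below.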
consider the uniform wedge sampling of .some of these wedges will be closed , so we generate a random set of triangles .each such triangle is an independent , uniform random triangle .this is because wedges are chosen from the uniform distribution , and every triangles contains exactly closed wedges .presents the results using triangle sampling to estimate the percentage of triangles where the maximum to minimum degree ratio is , which is motivated by an experiment in .the figure shows that accurate results can be achieved by using only 500 triangles and that wedge sampling provides an unbiased selection of triangles .the expected number of wedges to be sampled to generate triangles is , which means the method will be effective unless the clustering coefficient is extremely small .doulion is an alternative sampling mechanism for estimating the number of triangles in a graph .it has a single parameter .each edge is chosen independently at random with probability , leading to a subgraph of expected size ( is the total number of edges ) .we count the number of triangles in this subgraph , and estimate the total number of triangles by .it is not hard to verify that this is correct in expectation , but bounding the variance requires a lot more work .some concentration bounds for this estimate are known , but they depend on the maximum number of triangles incident to an edge in the graph .so they do not have the direct form of .some bad examples for doulion have been observed .doulion is extremely elegant and simple , and leads to an overall reduction in graph size ( so a massive graph can now fit into memory ) .common values of used are and . we show that wedge sampling performs at least as good as doulion in terms of accuracy , and has better runtimes .we run wedge sampling with 32k samples .we start with setting the doulion parameter , which has been used in the literature .note that the size of the doulion sample is , which is much larger than 32k .( for amazon0312 , one of the smaller graphs we consider , . )we run each algorithm 100 times .the ( average ) runtimes are compared in , where we see that wedge sampling is always competitive .the accuracy of the estimate is comparable for both algorithms . in in the appendix, we present the minimum , maximum , and standard deviation for the 100 runs .this also shows the good convergence properties of both algorithms , since even the minimum and maximum values are fairly close to the true clustering coefficient .we also try and note that wedge sampling with 32k samples is still faster in terms of runtime while offering somewhat better accuracy . by setting ,the sample size of doulion becomes comparable to 32k , but doulion is quite inaccurate ( the range between the maximum and the minimum is large ) . 
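for reference, here is a minimal python sketch of the doulion baseline as summarized above: each edge is kept independently with probability p, triangles are counted exactly in the sparsified graph, and the count is rescaled by 1 / p^3. the naive wedge-checking count used here is only to keep the sketch self-contained; it is not the optimized enumeration code used in the timing experiments.

....
import random
from itertools import combinations

def doulion_estimate(edges, p, rng=random):
    """doulion: keep each edge with probability p, count triangles exactly in
    the sparsified graph, rescale the count by 1 / p**3."""
    adj = {}
    for u, v in edges:
        if rng.random() < p:                 # sparsification step
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    closed_wedges = 0
    for v, nb in adj.items():
        for u, x in combinations(nb, 2):     # wedges centered at v
            if x in adj[u]:
                closed_wedges += 1
    # every triangle closes exactly 3 wedges, hence the division by 3
    return (closed_wedges // 3) / p ** 3
....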
in of the appendix , we also compare results for the local clustering coefficient .we have proposed a series of wedge - based algorithms for computing various triadic measures on graphs .our algorithms come with theoretical guarantees in the form of specific error and confidence bounds .we want to stress that the number of samples required to achieve a specified error and confidence bound is independent of the graph size .for instance , 38,000 samples guarantees and error less than 0.1% with 99.9% confidence _ for any graph _ , which gives our algorithms an incredible potential for scalability .the limiting factors have to do with determining the sampling proportions ; for instance , we need to know the degree of each vertex and the overall degree distribution to compute the global clustering coefficient .the flexibility of wedge sampling along with the hard error bounds essentially redefines what is feasible in terms of graph analysis .the very expensive computation of clustering coefficient is now much faster ( enabling more near - real - time analysis of networks ) and we can consider much larger graphs than before . in an extension of this work , we are pursuing a mapreduce implementation of this method that scales to o(100 m ) nodes and o(1b ) edges , needing only a few minutes of compute time ( based on preliminary results ) . with triadic analysisno long being a computational burden , we can extend our horizons into new territories and look at directed triangles , attributed triangles ( e.g. , we might compare the clustering coefficient for `` male '' and `` female '' nodes in a social network ) , evolution of triadic connections , higher - order structures such a 4-cycles and 4-cliques , and so on .10 , _ patric : a parallel algorithm for counting triangles and computing clustering coefficients in massive networks _ , tech .report 12 - 042 , ndssl , 2012 ., _ counting triangles in large graphs using randomized matrix trace estimation _ , in kdd - ldmta10 , 2010 ., http://dl.acm.org/citation.cfm?id=545381.545464[_reductions in streaming algorithms , with an application to counting triangles in graphs _ ] , in soda02 , 2002 , pp . 623632 . , http://dx.doi.org/10.1145/1401890.1401898[_efficient semi - streaming algorithms for local triangle counting in massive graphs _ ] , in kdd08 , 2008 , pp . 1624 . , http://ngc.sandia.gov/assets/documents/berry-soda11_sand2010-4474c.pdf [ _ listing triangles in expected linear time on power law graphs with exponent at least _ ] , tech .report sand2010 - 4474c , sandia national laboratories , 2011 . , http://dx.doi.org/10.1145/1142351.1142388[_counting triangles in data streams _ ] , in pods06 , 2006 , pp . 253262 . , http://www.jstor.org/stable/10.1086/421787 [ _ structural holes and good ideas _ ] , american journal of sociology , 110 ( 2004 ) , pp .349399 . , http://dx.doi.org/10.1137/0214017 [ _ arboricity and subgraph listing algorithms _ ] , siam j. comput . , 14 ( 1985 ) ,210223 . ,_ triangle listing in massive networks and its applications _ , in kdd11 , 2011 , pp . 672680 ., http://dx.doi.org/10.1109/mcse.2009.120[_graph twiddling in a mapreduce world _ ] , computing in science & engineering , 11 ( 2009 ) , pp .2941 . , http://www.jstor.org/stable/2780243[_social capital in the creation of human capital _ ] , american journal of sociology , 94 ( 1988 ) , pp .s95s120 . 
,_ degree relations of triangles in real - world networks and graph models _ , in cikm12 , 2012 ., http://dx.doi.org/10.1073/pnas.032093399[_curvature of co - links uncovers hidden thematic layers in the world wide web _ ] , pnas , 99 ( 2002 ) , pp .58255829 ., http://dx.doi.org/10.1145/1753846.1754097[_is a friend a friend ?: investigating the structure of friendship networks in virtual worlds _ ] , in chi - ea10 , 2010 , pp . 40274032 . , http://www.jstor.org/stable/2282952[_probability inequalities for sums of bounded random variables _ ] , j. american statistical association , 58 ( 1963 ) , pp. 1330 . , _ new streaming algorithms for counting triangles in graphs _ , in cocoon05 , 2005, pp . 710716 . , _ efficient triangle counting in large graphs via degree - based vertex partitioning _ , in waw10 , 2010 . , http://dx.doi.org/10.1016/j.tcs.2008.07.017 [ _ main - memory triangle computations for very large ( sparse ( power - law ) ) graphs _ ] , theoretical computer science , 407 ( 2008 ) , pp. 458473 . , http://dx.doi.org/10.1145/1298306.1298311[_measurement and analysis of online social networks _ ] , in imc07 , acm , 2007 , pp .2942 . , _ colorful triangle counting and a mapreduce implementation _ , information processing letters , 112 ( 2012 ) , pp277281 . , http://dx.doi.org/10.1146/annurev.soc.24.1.1 [ _ social capital : its origins and applications in modern sociology _] , annual review of sociology , 24 ( 1998 ) , pp . 124 . , http://dx.doi.org/10.1007/11427186_54[_finding , counting and listing all triangles in large graphs , an experimental study _ ] , in experimental and efficient algorithms , springer berlin / heidelberg , 2005 , pp . 606609 . , http://dx.doi.org/10.1103/physreve.85.056109[_community structure and scale - free collections of erds - rnyi graphs _ ] , physical review e , 85 ( 2012 ) , p. 056109 . , http://dx.doi.org/10.1145/1963405.1963491[_counting triangles and the curse of the last reducer _ ] , in www11 , 2011 , pp .607614 . , http://dx.doi.org/10.1109/icdm.2008.72[_fast counting of triangles in large real networks without counting : algorithms and laws _ ] , in icdm08 , 2008 , pp .608617 . ,_ spectral counting of triangles in power - law networks via element - wise sparsification _, in asonam09 , 2009 , pp . 6671 . , _ triangle sparsifiers _ , j. graph algorithms and applications , 15 ( 2011 ) , pp .703726 . , http://dx.doi.org/10.1145/1557019.1557111[_doulion : counting triangles in massive graphs with a coin _ ] , in kdd 09 , 2009 , pp .837846 . ,_ social network analysis : methods and applications _ , cambridge university press , 1994 ., http://dx.doi.org/10.1038/30918 [ _ collective dynamics of ` small - world ' networks _ ] , nature , 393 ( 1998 ) , pp .440442 . , http://dx.doi.org/10.1007/978-3-642-24082-9_83[_improved sampling for triangle counting with mapreduce _ ] , in convergence and hybrid information technology , g. lee , d. howard , and d. slezak , eds .6935 of lecture notes in computer science , springer berlin / heidelberg , 2011 , pp . 685689 . .available at http://snap.stanford.edu/.various detailed results are shown in the appendix .we present the runtime results for global clustering coefficient computations in .as mentioned earlier , we perform 100 runs of each algorithm and show the minimum , maximum , and standard deviations of the output estimates .we also show the relative speedup of wedge sampling over doulion with .we also used doulion to compute the local clustering coefficient . 
for this purpose, we predicted the number of triangles incident to a vertex by counting the triangles in the sparsified graph and then dividing this number by .the results of our experiments are presented in , which shows that doulion fails in accuracy even for .this is because local clustering coefficient is a finer level statistic , which becomes a challenge for doulion .wedge sampling on the other hand , keeps its accurate estimations with low variance . & & & & & & & & & & & & + amazon0312 & 401k & 2350k & 69 m & 3686k & 0.160 & 0.163 & 0.161 & 0.160 & 0.261 & 0.004 & 0.007 & 0.016 + amazon0505 & 410k & 2439k & 73 m & 3951k & 0.162 & 0.158 & 0.165 & 0.163 & 0.269 & 0.005 & 0.007 & 0.016 + amazon0601 & 403k & 2443k & 72 m & 3987k & 0.166 & 0.161 & 0.164 & 0.167 & 0.268 & 0.004 & 0.007 & 0.017 + as - skitter & 1696k & 11095k & 16022 m & 28770k & 0.005 & 0.006 & 0.006 & 0.006 & 90.897 & 0.015 & 0.019 & 0.026 + cit - patents & 3775k & 16519k & 336 m & 7515k & 0.067 & 0.064 & 0.067 & 0.068 & 3.764 & 0.035 & 0.040 & 0.056 + roadnet - ca & 1965k & 2767k & 6 m & 121k & 0.060 & 0.061 & 0.058 & 0.058 & 0.095 & 0.018 & 0.022 & 0.037 + web - berkstan & 685k & 6649k & 27983 m & 64691k & 0.007 & 0.005 & 0.006 & 0.007 & 54.777 & 0.007 & 0.009 & 0.016 + web - google & 876k & 4322k & 727m & 13392k & 0.055 & 0.055 & 0.054 & 0.056 & 0.894 & 0.009 & 0.011 & 0.020 + web - stanford & 282k & 1993k & 3944 m & 11329k & 0.009 & 0.013 & 0.008 & 0.009 & 6.987 & 0.003 & 0.005 & 0.011 + wiki - talk & 2394k & 4660k & 12594 m & 9204k & 0.002 & 0.004 & 0.003 & 0.002 & 20.572 & 0.021 & 0.024 & 0.033 + youtube & 1158k & 2990k & 1474 m & 3057k & 0.006 & 0.005 & 0.006&0.006 & 2.740 & 0.011 & 0.013 & 0.021 + flickr & 1861k & 15555k & 14670 m & 548659k & 0.112 & 0.110 & 0.113 & 0.112 & 567.160&0.019 & 0.026 & 0.051 + livejournal & 5284k & 48710k & 7519 m & 310877k & 0.124 & 0.127 & 0.126 & 0.124 & 102.142 & 0.048 & 0.051 & 0.073 + & & & & & & & & + amazon0312 & 0.421 & 0.427 & 0.417 & 0.420 & 0.301 & 0.001 & 0.002 & 0.008 + amazon0505 & 0.427 & 0.422 & 0.423 & 0.426 & 0.310 & 0.001 & 0.002 & 0.008 + amazon0601 & 0.430 & 0.435 & 0.421 & 0.430 & 0.314 & 0.001 & 0.002 & 0.007 + as - skitter & 0.296 & 0.280 & 0.288 & 0.297 & 88.290 & 0.002 & 0.009 & 0.036 + cit - patents & 0.092 & 0.089 & 0.094 & 0.091 & 4.081 & 0.001 & 0.003 & 0.012 + roadnet - ca & 0.055 & 0.049 & 0.059 & 0.054 & 0.112 & 0.000 & 0.002 & 0.006 + web - berkstan & 0.634 & 0.629 & 0.639 & 0.633 & 53.892 & 0.006 & 0.021 &0.085 + web - google & 0.624 & 0.624 & 0.619 & 0.628 & 0.996 & 0.001 & 0.004 & 0.013 + web - stanford & 0.629 & 0.612 & 0.635 & 0.633 & 6.868 & 0.002 & 0.010 & 0.038 + wiki - talk & 0.201 & 0.199 & 0.194 & 0.199 & 20.254 & 0.007 & 0.028 & 0.108 + youtube & 0.128 & 0.130&0.132 & 0.131&18.948 & 0.002&0.008 & 0.031 + flickr & 0.375 & 0.369 & 0.371 & 0.377 & 575.493 & 0.001 & 0.006 & 0.021 + livejournal & 0.345 & 0.338 & 0.348 & 0.345 & 102.142 & 0.001 & 0.004&0.015 + & & & & & & & & & & & & & & + amaz0312 & .160 & .155 & .166 & .002 & ..137 & .188 & .011 & .093 & .234 & .031 & .000 & .392 & .080 & 7.06 + amaz0505 & .162 & .159 & .167 & .002 & .133 & .193 & .011 & .098 & .241 & .028 & .000 & .370 & .081 & 7.37 + amaz0601 & .166 & .162 & .172 & .002 & .140 & .193 & .010 & .088 & .260 & .028 & .000 & .457 & .078 & 7.30 + skitter & .005 & .005 & .007 & .000 & .005 & .006 & .000 & .004 & .007 & .001 & .002 & .008 & .001 & 17.80 + cit - pat & .067 & .064 & .071 & .001 & .060 & .077 & .003 & .042 & .099 & .010 & .027 & .116 & .022 & 15.76 + road - ca & .060 
& .058 & .065 & .001 & .000 & .133 & .023 & .000 & .250 & .062 & .000 & 1.001 & .193 & 5.84 + w - bersta & .007 & .006 & .008 & .001 & .006 & .008 & .000 & .006 & .008 & .000 & .004 & .011 & .001 & 16.28 + w - google & .055 & .052 & .059 & .001 & .051 & .060 & .002 & .044 & .071 & .005 & .021 & .107 & .016 & 1.20 + w - stan & .009 & .007 & .010 & .001 & .007 & .010 & .001 & .006 & .012 & .001 & .003 & .018 & .003 & 7.18 + wiki - t & .002 & .002 & .003 & .000 & .002 & .002 & .000 & .002 & .003 & .000 & .001 & .004 & .001 & 8.52 + youtube & .006 & .005 & .007 & .000 & .005 & .007 & .001 & .004 & .010 & .001 & .000 & .018 & .004 & 7.55 + flickr & .112 & .108 & .117 & .002 & .110 & .115 & .001 & .107 & .118 & .002 & .098 & .125 & .005 & 11.38 + livejour & .124 & .120 & .128 & .002 & .121 & .128 & .001 & .116 & .130 & .003 & .105 & .143 & .007 & 3.49 + & & & & & & & & & & & & & + amaz0312 & .421 & .415 & .426 & .003 & .395 & .463 & .014 & .318 & .558 & .047 & .195 & .869 & .140 + amaz0505 & .427 & .421 & .432 & .003 & .395 & .463 & .014 & .338 & .568 & .056 & .195 & .869 & .140 + amaz0601 & .430 & .423 & .437 & .003 & .403 & .466 & .013 & .329 & .633 & .058 & .239 & .819 & .120 + skitter & .296 & .288 & .303 & .003 & .272 & .322 & .011 & .206 & .384 & .039 & .122 & .667 & .105 + cit - pat & .092 & .088 & .096 & .002 & .085 & .099 & .003 & .066 & .126 & .011 & .028 & .199 & .035 + road - ca & .055 & .052 & .058 & .001 & .038 & .071 & .006 & .000 & .118 & .022 & .000 & .279 & .062 + w - bersta & .634 & .627 & .641 & .003 & .586 & .700 & .024 & .532 & .808 & .057 & .339 & 1.273 & .165 + w - google & .624 & .615 & .630 & .003 & .580 & .688 & .021 & .471 & .772 & .063 & .335 & 1.276 & .183 + w - stan & .629 & .622 & .636 & .003 & .532 & .737 & .038 & .441 & .982 & .109 & .218 & 1.218 & .230 + wiki - t & .201 & .195 & .206 & .002 & .171 & .240 & .015 & .055 & .379 & .067 & .006 & .609 & .160 + youtube & .128 & .167 & .179 & .002 & .128 & .218 & .015 & .055 & .294 & .057 & .007 & .767 & .182 + flickr & .375 & .368 & .381 & .003 & .316 & .411 & .016 & .212 & .553 & .066 & .086 & .774 & .154 + livejour & .345 & .337 & .355 & .003 & .330 & .359 & .006 & .296 & .401 & .023 & .214 & .500 & .060 +
graphs are used to model interactions in a variety of contexts , and there is a growing need to quickly assess the structure of a graph . some of the most useful graph metrics , especially those measuring social cohesion , are based on _ triangles_. despite the importance of these triadic measures , associated algorithms can be extremely expensive . we propose a new method based on _ wedge sampling_. this versatile technique allows for the fast and accurate approximation of all current variants of clustering coefficients and enables rapid uniform sampling of the triangles of a graph . our methods come with _ provable _ and practical time - approximation tradeoffs for all computations . we provide extensive results that show our methods are orders of magnitude faster than the state - of - the - art , while providing nearly the accuracy of full enumeration . our results will enable more wide - scale adoption of triadic measures for analysis of extremely large graphs , as demonstrated on several real - world examples .
the ability to detect and identify materials , hidden behind barriers , has potential applications within security and non - destructive testing .the thz range of the electromagnetic spectrum is particularly attractive for these applications , because many barrier materials , such as clothing , plastic , and paper only attenuate thz waves moderately , while other materials , such as explosives and related compounds have characteristic spectroscopic fingerprints in the thz region .a relevant geometry for these applications is the reflection geometry in which a pulsed or cw signal is sent towards an unknown structure and the amplitude and phase of the reflected signal is detected .the task is then to deduce the structure from the measured reflection coefficient .the situation is simplified in the effectively one - dimensional case , where the electromagnetic properties of the structure only vary in one direction .retrieval of the structure parameters then becomes a one - dimensional inverse - scattering problem .however , because the relevant materials are lossy in the thz range , they are also dispersive , according to the kramers kronig relations .thus when applying inverse - scattering algorithms to the thz range one must take into account absorption and material dispersion .there are several formulations and algorithms for the one - dimensional inverse scattering problem .in particular , the layer peeling algorithms have turned out to be very efficient , and used in a wide range of applications .these algorithms are based on the following , simple fact : consider the time - domain reflection impulse response of a layered structure . by causality ,the leading edge is only dependent on the first layer , since the wave has not had time to feel the presence of the other layers .thus one can identify the first layer of the structure from the impulse response .this information can be used to remove the influence from the layer , which leads to synthetic reflection data with the first layer peeled off .this procedure can be continued until the complete structure has been identified . in this workwe generalize the layer - peeling algorithm to dispersive stratified structures . providedthe material dispersion is sufficiently small , such that the time - domain fresnel reflections have durations less than the round - trip time in each layer , we can uniquely reconstruct the refractive indices of the structure ( section ii ) .the method is illustrated by numerical examples in section iii . finally , in section iv we prove that for larger dispersion , the inverse scattering problem does not have a unique solution in general .this is because one can not distinguish between the non - instantaneous temporal response of the medium itself ( due to dispersion ) , and the temporal response due to the stratification ( caused by reflections at the layer boundaries ) .thus , extra information is needed , such as an upper bound for the dispersion combined with a lower bound for the layer thicknesses .we first describe the model of the stratified medium . consider a layered , planar structure , consisting of layers with refractive indices and thicknesses , see fig .[ fig : layers ] . 
herethe index labels the layer .the light propagation in this structure is conveniently modeled using transfer matrices .for simplicity we limit the analysis to normal incidence .the transfer matrix of the transition from refractive index to is where is the fresnel reflection coefficient , associated with the index step between and .the transfer matrix of the pure propagation in layer is & 0 \\ 0 & \exp[-i\omega n_j(\omega)d_j / c ] \end{bmatrix}},\ ] ] where is the vacuum light velocity .note that eqs .( [ eq : step])-([eq : prop ] ) are also valid when the refractive index is complex .the refractive index is necessarily complex for dispersive materials , because dispersion is accompanied by loss according to the kramers kronig relations .the transfer matrix of the total structure is given by the reflection coefficient from the left , and the transmission coefficient for the electric field from left to right , are given by the ( 2,1)- and ( 2,2)-elements of : layers .the thicknesses and refractive indices of the layers are and , respectively.,width=377 ] we will now describe a layer - peeling method that can be used in the presence of weak dispersion .the precise condition for the dispersion will become clear below . to be able to reconstruct the structure , we assume that the refractive index of the zeroth layer and the reflection spectrum of the entire structure ( as seen from ) , are known . the goal is to calculate for all and for .the reflection spectrum of the layered structure can be expressed as follows : where is the impulse response , i.e. , the time - domain reflected field when a dirac delta pulse is incident to the structure .note the lower limit in the integral : for a non - dispersive layer 0 , the round - trip time to the first index step would be ; however , due to dispersion we can only be sure that the round - trip time is no less than .we take the forward and backward propagating frequency - domain fields at to be and , respectively , defining the field vector for now we assume that the layer thicknesses are known _ a priori _ ; the case with unknown layer thicknesses is treated below .the field vector before the beginning of the first layer ( ) is given by we now define the reflection spectrum after the zeroth layer has been `` peeled off '' : the index step at can be regarded as a frequency - dependent reflector with ( unknown ) reflectivity , in accordance with eq . .we assume that the dispersions of layers 0 and 1 are sufficiently small , such that the time - domain response associated with has duration less than . then we can write where in eq. the lower limit in the integral reflects the fact that any reflections from the later index steps are delayed by ( at least ) the round - trip time . having established eq ., we can now identify : with the local reflection coefficient in hand , we can calculate the refractive index of layer 1 using eq . .once has been found , we can calculate the reflection spectrum with the first layer removed : now we find ourselves in the same situation as before , so we can continue the process until all layers have been found . from eq .one obtains the _ complex _ reflection coefficient of each layer , and therefore , by eq . 
, both the real and imaginary parts of the refractive index .one may ask if the reconstructed refractive index automatically is causal , or whether the kramers kronig relations could be used in addition to ensure causality .the answer is that the lower limit in the integral ensures that is causal , and therefore by eq . that is analytic in the upper half - plane of complex frequency .provided as , is therefore guaranteed to be causal . in practical situationsthe available bandwidth is finite , the layer thicknesses may not be known _ a priori _ , and the reflection data contains noise .we will now consider these aspects . in practice, we only have reflection data in a limited bandwidth .in other words , the reflection data in eq . is necessarily multiplied by a window function , which is nonzero only in the interval .physically , this means that instead of probing the structure with a dirac delta pulse , we use an input pulse of duration : we require the duration of the time - domain response associated with to be less than , in order to distinguish between the response due to the first and the other layers .the time - domain response associated with may already have duration up to , so we must require , or equivalently , .this must be true for all layers , so where is a lower bound for the ( possibly unknown ) layer thicknesses .in addition to condition , we recall that the time - domain response associated with must have duration less than , which means that the minimum allowable layer thickness is limited by the narrowest dispersion feature in the structure .the response ( where denotes convolution ) is no longer guaranteed to vanish for , so we must extend the lower integration limit in eq . to contain the pulse : here is the `` start position '' of , i.e. , a possibly negative number such that the response from the first layer roughly is contained in the interval ] of the window function matches that of the dispersion of to be reconstructed .we will now describe how the layer peeling algorithm also can be used when the layer thicknesses are not known _ a priori_. recall that the time - domain responses associated with have duration less than for all .starting at a layer boundary , one can then perform layer peeling a distance using eq ., and calculate the resulting time - domain response .this removes the effect of the layer boundary , and the first signal in the transformed time - domain response is due to reflection from the next layer boundary .let denote the start position of this signal and let denote the start position of .if , this indicates that is less than the layer thickness . define a small thickness , which is sufficiently small to achieve the desired , spatial resolution .then we can transform the fields successively using eq . with the small thickness until , and we have arrived at an index step .we can then peel off the dispersive response associated with the index step and search for the next layer boundary , and so forth .if only is nonzero in the interval , the time - domain responses do not have well - defined fronts , and the procedure above for finding the layer thickness is ambiguous .however , one can use an alternative definition for the start position of the time domain signals , as shown in the numerical examples . 
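to summarize the reconstruction loop in code, the sketch below performs a single peeling step on a discretized reflection spectrum. it is an illustration of the structure of the algorithm rather than a reproduction of the authors' implementation: it assumes a uniform two-sided angular frequency grid, a known layer thickness, the fresnel convention rho = (n_prev - n_next)/(n_prev + n_next), particular fourier-transform sign conventions, a plain rectangular gate shorter than the round-trip time in the layer, and that propagation through the known zeroth layer has already been divided out of the measured spectrum. with band-limited data the windowed-pulse considerations discussed above would also have to be applied.

....
import numpy as np

C = 299792458.0   # vacuum light velocity [m/s]

def peel_one_layer(r, omega, n_prev, d, gate_time):
    """one illustrative layer-peeling step (sign and fft conventions assumed).

    r         : complex reflection spectrum on a uniform, two-sided angular
                frequency grid, referenced just in front of the index step
    omega     : that angular frequency grid [rad/s]
    n_prev    : complex refractive index of the layer the wave arrives from
    d         : thickness of the layer behind the index step [m]
    gate_time : gate duration, shorter than the round-trip time in that layer
    returns   : (n_next(omega), reflection spectrum referenced just in front
                of the next index step)
    """
    # frequency -> time: by causality the leading part of the impulse
    # response is the dispersive fresnel reflection of this index step alone
    h = np.fft.ifft(r)
    n = len(omega)
    dt = 2.0 * np.pi / (n * (omega[1] - omega[0]))
    t = np.arange(n) * dt
    rho = np.fft.fft(np.where(t < gate_time, h, 0.0))

    # invert rho = (n_prev - n_next) / (n_prev + n_next) for n_next(omega)
    n_next = n_prev * (1.0 - rho) / (1.0 + rho)

    # schur-type step: remove the reflector, then divide out the round trip
    # through the newly identified layer
    r_next = (r - rho) / (1.0 - rho * r) * np.exp(-2j * omega * n_next * d / C)
    return n_next, r_next
....

iterating this step layer by layer recovers the complex refractive indices one by one; for unknown thicknesses one would instead advance in small steps and watch the gated response for the onset of the next reflection, as described above.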
for any real systemthere is a given signal - to - noise ratio , which may be frequency dependent .the layer peeling algorithm will fail if the reflection signal at a given index step becomes less than the noise .this can be due to either low fresnel reflection from the index step itself , or high reflection or material absorption in the preceding part of the structure .the case with high reflection in the preceding part of the structure was analyzed in ref . , and it was shown that the noise amplification factor during the layer peeling algorithm was of the order of where is the minimum power transmission through the structure .a similar conclusion can be reached by considering the effect of absorption .let denote the minimum detectable reflection coefficient .the maximum probing depth , , into a material with a single index step can be estimated by \frac{\delta n}{2n}\approx \varrho,\ ] ] where is the change in the real part of the refractive index and is the average real part of the refractive index at the step . solving for the the maximum probing depth , we obtain we observe that the maximum probing depth into the structure is inversely proportional to the material absorption . assuming , , , and gives , which corresponds to wavelengths .this is roughly the maximum depth one can expect the layer peeling algorithm to work for a material with comparatively low loss in the thz - range .as a first numerical example of the algorithm in sec . [ layerpeelingdisp ], we consider a structure consisting of two material layers . we assume vacuum for , a material with refractive index for , and a material with refractive index for .both materials are assumed to be dispersive and lossy .the task is to determine , , and , given knowledge of the reflection coefficient at in a finite frequency range .the refractive index of the first material is assumed to be in the form where is constant , and represents a lorentzian absorption feature , given by we take , , , and for the first material .here , where is a scaling frequency .note that must approach as for the refractive index in eq . to be causal .however , we here use the approximation that is constant in the frequency range and assume that has the correct behavior as .the refractive index of the second material is also assumed to be in the form eq .( [ eq : n_mat ] ) , with , but here the susceptibility is taken to be the sum of ten lorentzian absorption features of various amplitudes , bandwidths , and center frequencies , in the vicinity of .the resulting refractive index is seen in fig .[ fig:2nd_layer ] . in this example, we set thz , which leads to the vacuum wavelength mm .we take , and assume that the reflection coefficient is known in the frequency range 08 thz .the resulting power reflection coefficient is shown in fig .[ fig : ref_w ] . in the layer peeling algorithm , we take . in addition , we must choose an appropriate window function .the window function should have negligible energy outside the frequency window =[-8~\text{thz } , 8~\text{thz}] ] .we here set for in the numerical implementation of the algorithm , in accordance with the assumption that the reflection spectrum outside ] : \exp(i\omega j2\delta / c ) \label{rho1principle3}\ ] ] in the bandwidth .if were zero outside this bandwidth , the nyquist sampling theorem would immediately give the connection between ] equal to the associated fourier coefficients . with this procedure ,would be identical in the relevant bandwidth , however with lower limit in the sum . 
setting the lower limit to 0 amounts to finding the optimal causal and discrete approximation to expression . in the limit where vanishes for , the error in the approximation tends to zero .assuming that the discrete response ] , is real : is a physical time - domain response as resulting from a real impulse . peeling off this reflector , and removing the subsequent pure propagation in the layer with thickness , can be done with the associated transfer matrices , or equivalently , by applying the schur recursion formula the layer peeling process can be continued until all reflectors have been found . in other words ,two different structures give the same reflection response ; an index step between and , and several layers with non - dispersive , real refractive indices . to be able to solve the inverse scattering problem of a dispersive structure , it is therefore apparent that extra information ( in addition to the reflection spectrum ) must be known . in sec .[ layerpeelingdisp ] we used the extra information that the dispersion is sufficiently small , such that the time - domain response associated with each fresnel reflection has duration less than the round - trip time to the next index step .in addition we assumed for all .an inverse scattering algorithm is applied to retrieve the material parameters of stratified structures . even though this problem does not have a unique solution in general , there exist cases where the algorithm can be applied , given additional information about the structure .specifically , for a given , lower bound of the layer thicknesses , the dispersion must be sufficiently small , and the frequency range where the reflection coefficient is known must be sufficiently large .the retrieval of material parameters of hidden layers is challenging due to absorption , noise , and unknown layer thicknesses . despite these challenges , there exist cases where the algorithm is successful , as illustrated by the numerical examples .a. boutet de monvel and v. marchenko , `` new inverse spectral problem and its application , '' in `` inverse and algebraic quantum scattering theory ( lake balaton , 1996 ) , '' , vol .488 of _ lecture notes in phys . _( springer , berlin , 1997 ) , pp . 112 .
we consider the inverse scattering problem of retrieving the structural parameters of a stratified medium consisting of dispersive materials , given knowledge of the complex reflection coefficient in a finite frequency range . it is shown that the inverse scattering problem does not have a unique solution in general . when the dispersion is sufficiently small , such that the time - domain fresnel reflections have durations less than the round - trip time in the layers , the solution is unique and can be found by layer peeling . numerical examples with dispersive and lossy media are given , demonstrating the usefulness of the method for e.g. thz technology .
internet resources today are assigned by five regional internet registrars ( rirs ) .these non - profit organisations are responsible for resources such as blocks of ip addresses or numbers for autonomous systems ( ases ) .information about the status of such resources is maintained in publicly accessible rir databases , which are frequently used by upstream providers to verify ownership for customer networks . in general , networks are vulnerable to be hijacked by attackers due to the inherent lack of security mechanisms in the inter - domain routing protocol bgp .real attacks have been observed in the past that led to the development of a variety of detection techniques and eventually of security extensions to bgp .common to these attacks is a malicious claim of ip resources at the routing layer .however , claims of network ownership can be also made at rir level , a fact that has received little attention so far . in a history of more than three decades ,a vast number of internet resources have been handed out to numerous users under varying assignment policies .some ases or prefixes have never been actively used in the inter - domain routing , others changed or lost their original purpose when companies merged or vanished .it is not surprising that some internet resources became abandoned , i.e. resource holders ceased to use and maintain their resources . in this paper , we focus on threats that emerge from abandoned internet resources .currently , there is no mechanism that provides resource ownership validation of registered stakeholders . instead, the control over email addresses that are stored with rir database objects is often considered a proof of ownership for the corresponding resources .our contribution is a generalized attacker model that takes into account these shortcomings .we thoroughly evaluate the risk potential introduced by this attack by drawing on several data sources , and show that the threat is real .since this kind of attack enables an attacker to fully hide his identity , it makes hijacking more attractive , and significantly harder to disclose .consequently , we show that state - of - the - art detection techniques based on network measurements are ill - suited to deal with such attacks .even so , these attacks have been evidenced in practice , and should thus be taken into account by future research .we continue the discussion by establishing our attacker model in sec : model . in sec : resources , we estimate the risk potential of abandoned resources , and show that there is a real threat . as a result, we outline an approach to mitigate this threat , and discuss limitations of related work in sec : research .in particular , we outline the need for a system that provides resource ownership validation .we conclude our discussion in sec : conclusion .conventional attacks on bgp are based on its lack of origin validation , which allows an attacker to originate arbitrary prefixes or specific subnets from his own as .we propose a new attacker model that accounts for attackers to take ownership of abandoned resources .in such a scenario , an attacker is able to act on behalf of his victim , in particular to arrange upstream connectivity .misled upstream providers unknowingly connect one or several ases including prefixes of the victims as instructed by an attacker who successfully hides his true identity . following this model ,the anonymous attacker can participate in the cooperative internet exchange at arbitrary places without any formal incorrectness . 
in the following ,we generalize a real incident to derive preconditions that enable this kind of attack . in previous work , a corresponding attack has been observed in practice , which is known as the _ linktel incident_. the authors studied this attack and showed that a victim s prefixes originated from his own as , while the victim itself abandoned his business .the authors reconstructed the attacker s course of action to claim ownership of the abandoned resources .the linktel incident thereby revealed a major flaw in the internet eco - system : validation of resource ownership is most often based on manual inspection of rir databases . in this context , it was shown that the attacker was able to gain control over the victim s dns domain , and thus over corresponding email addresses .the involved upstream provider presumably validated that the attacker s email address was referenced by the hijacked resources rir database objects .given this proof of ownership , the upstream provider was convinced by the attacker s claim to be the legitimate holder of the resources .surprisingly , the attacker captured the victim s dns domain by simply re - registering it after expiration . for several months ,the attacker s abuse of the hijacked resources remained unnoticed . by combining several data sources, the authors showed that the hijacked networks were utilized to send spam , to host web sites that advertised disputable products , and to engage in irc communication .after the victim recovered his business , he learned that his networks were listed on spamming blacklists .however , the attacker s upstream provider refused to take action at first , since the victim was unable to refute the attacker s ownership claims .based on the insights gained from the linktel incident , we show that the attacker s approach can be generalized . to enable hijacking of internet resources , the following preconditions have to be met : ( a ) internet resources are evidentially abandoned and ( b ) the original resource holder can be impersonated .if an organisation goes out of business in an unsorted manner , these conditions are eventually met . as a first consequence ,the organisation ceases to use and maintain its resources .if this situation lasts over a longer period of time , the organisation s domain name(s ) expire .since day - to - day business lies idle , re - registration and thus impersonation becomes practicable for an attacker . at that moment, upstream connectivity can be arranged on behalf of the victim , since face - to - face communication is not required in general .routers can be sent via postal service , or even be rented on a virtualized basis .details on bgp and network configuration are usually exchanged via email , irc , or cellular phone , and payment can be arranged anonymously by bank deposits or other suitable payment instruments . without revealing any evidence about his real identity ,the attacker is able to stealthily hijack and deploy the abandoned resources .the implications of this attacker model are manifold .first , an attacker may act on behalf of a victim , thereby effectively hiding his own identity and impeding disclosure .this makes hijacking more attractive as it enables riskless network abuse .it hinders criminal prosecution , and could be used to deliberately create tensions between organisations or even countries . 
due to the lack of a system for resource ownership validation, these attacks only depend on idle organisations or missing care by legal successors of terminated businesses . even after the discovery of such an attack ,it is difficult for the victim to mitigate since reclaiming ownership is the word of one person against another at first .the linktel incident proves that this is not only a realistic scenario : such attacks are actually carried out in practice . the benefit of attacks based on abandoned resources can even be higher than in the case of conventional attacks . hijacking productive networksrarely lasts for more than a few hours , since the victim can receive great support in mitigating the attack .moreover , for most cases , the benefit is reduced to blackholing a victim s network with the youtube - pakistan incident being a prominent example . in addition , monitoring systems for network operators exist that raise alarms for unexpected announcements of their prefixes . however , due to the very nature of abandoned resources , virtually no one is going to take notice of an attack .our attacker model thus accounts for stealthily operating attackers who aim at persistently maintaining malicious services .we identify readily hijackable internet resources by searching rir databases for unmaintained resource objects .subsequently , we distinguish between resources that are still in use , with potential for network disruption , and resources that are fully abandoned and ready to be abused stealthily .such resources are especially attractive for attackers for two reasons .first , the resource is assigned to an organisation for operational use and thus represents a valid resource in the internet routing system .second , an attacker can easily claim ownership by taking control of the contact address referenced by corresponding rir database objects , i . by re - registering a domain name .consequently , we look for rir database objects that reference email addresses with _ expired dns names_. since the inference of invalid domain names can also be the result of poorly maintained resource objects or typing errors , it is important to take into account recent database activities for individual resource owners , and to correlate this information with bgp activity .the following analysis is based on archived ripe database snapshots over 2.5 years ( 23 february , 2012 till 9 july , 2014 ) .our results are representative for the european service region only , but similar analyses can be done with little effort for other service regions , too .ripe , like all other rirs , provides publicly available database snapshots on a daily basis .most of the personally related information is removed due to privacy concerns .some attributes , however , remain unanonymized , which we utilize to extract dns names .the ripe database holds more than 5.2 million objects .these objects can be updated from the web or via email .most of these objects optionally hold an email address in the ` notify ` field , to which corresponding update notifications are sent . despite anonymization , we found that these ` notify ` fields are preserved in the publicly available database snapshots , which is also the case for ` abuse - mailbox ` attributes . to extract dns names , we parse these email addresses where applicable . 
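as a rough sketch of this extraction step (with our own file-handling assumptions, and assuming the notify and abuse-mailbox attributes carry full email addresses as in the published dumps), the following python snippet walks through a blank-line-separated rpsl dump and counts, per dns domain, the number of objects that reference it:

....
import re
from collections import Counter

EMAIL_ATTRS = ("notify", "abuse-mailbox")
DOMAIN_RE = re.compile(r"@([A-Za-z0-9.-]+)")

def domains_per_object(db_path):
    """count, per dns domain, how many database objects reference it through
    a notify or abuse-mailbox attribute (objects in the published dumps are
    separated by blank lines)."""
    counts = Counter()
    current = set()                      # domains seen in the current object
    with open(db_path, encoding="latin-1") as fh:   # encoding is an assumption
        for line in fh:
            if not line.strip():         # blank line terminates an object
                counts.update(current)
                current.clear()
                continue
            attr, _, value = line.partition(":")
            if attr.strip().lower() in EMAIL_ATTRS:
                m = DOMAIN_RE.search(value)
                if m:
                    current.add(m.group(1).lower())
    counts.update(current)               # flush the last object, if any
    return counts
....

the resulting counter can then be reduced to the set of distinct domains and joined with whois expiry data, as done in the analysis below.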
table : ripe_obj shows the distribution of stored objects by type along with the number of dns names we were able to extract .although we found more than 1.5 million references to dns names , the total number of _ distinct _ names is only 21,061 .this implies that , on average , more than 72 objects reference the same dns name .the overall fraction of objects that reference a domain name is 29.24% , which is surprisingly high since the database snapshots are considered to be anonymized . 0.75xrrr * object type * & * frequency * & + ` inetnum ` & 3,876,883 & 1,350,537 & ( 34.84% ) + ` domain ` & 658,689 & 97,557 & ( 14.81% ) + ` route ` & 237,370 & 50,300 & ( 21.19% ) + ` inet6num ` & 231,355 & 8,717 & ( 3.77% ) + ` organisation ` & 82,512 & 0 & ( 0.00% ) + ` mntner ` & 48,802 & 0 & ( 0.00% ) + ` aut - num ` & 27,683 & 6,838 & ( 24.70% ) + ` role ` & 20,684 & 14,430 & ( 69.76% ) + ` as - set ` & 13,655 & 2,500 & ( 18.31% ) + ` route6 ` & 9,660 & 723 & ( 7.48% ) + ` irt ` & 321 & 162 & ( 50.47% ) + * total * & * 5,239,201 * & * 1,531,764 * & * ( 29.24% ) * + hijackable internet resources are given by ` inetnum ` and ` aut - num ` objects , which represent blocks of ip addresses and unique numbers for autonomous systems respectively .exemplary database objects are provided in fig : ripe_entry , further details on the ripe database model and update procedures are available at .it is worth noting that the attacker neither needs authenticated access to the database nor does the attacker need to change the database objects .the attacker only needs to derive a valid contact point .we assume that the ( publicly available ) notification address usually belongs to the same dns domain as the technical contact point .detailed analysis is subject to future work ; in our study , we disregard groups of objects that reference more than a single dns domain as a precaution .the ripe database is mostly maintained by resource holders themselves .its security model is based on references to ` mntner ` ( maintainer ) objects , which grant update and delete privileges to the person holding a ` mntner ` object s password .this security model allows us to infer objects under control of the same authority by grouping objects with references to a common ` mntner ` object .we use these _ maintainer groups _ to estimate the impact of an attack for individual authorities : on average , we observed nearly 110 such references per ` mntner ` object , with a maximum of up to 436,558 references .the distribution of the number of objects per maintainer group is presented in fig : mntner_group_sizes . ....inetnum : 194.28.196.0 - 194.28.199.255 netname : ua - veles descr : llc " unlimited telecom " descr : kyiv notify : internet-isp.com.ua mnt - by : veles - mnt .... .... aut - num : as51016 as - name : vales descr : llc " unlimited telecom " notify : internet-isp.com.ua mnt - by : veles - mnt .... for each of the maintainer groups , we obtain the set of all dns names referenced by a group s objects .to unambiguously identify maintainer groups with expired domains , we merge disjoint groups that reference the same dns domain , and discard groups with references to more than one dns name . from an initial amount of 48,802 maintainer groups , we discard ( a ) 937 groups of zero size , i.e. 
unreferenced ` mntner ` objects , ( b ) 31,586 groups without domain name references , and ( c ) 4,990 groups with multiple references .the remaining 11,289 groups can be merged to 8,441 groups by identical dns names .we further discard groups that do not include any hijackable resources , i.e. ` inetnum ` and ` aut - num ` objects , which finally leads us to 7,907 object groups .note that the number of these groups is a lower bound : an attacker could identify even more with access to unanonymized ripe data .as discussed above , each of these groups is maintained by a single entity .if a group s dns name expires , we consider the entity s resources to be a valuable target for an attacker . to confirm that a set of resources is abandoned, our approach is based on complementary data sources .we start with domain names that expire , which is a strong yet inconclusive indication for a fading resource holder .we gain further evidence by considering only resources that are neither changed in the ripe database nor announced in bgp . including both administrative ( dns , ripe ) and an operational ( bgp ) measuresgives a comprehensive picture on the utilization of the resources .we used the _ whois system _ to query expiry dates for all extracted dns names ( cf . , section [ sec : grouping ] ) .fig : domain_expiry_dist shows the distribution of these dates . at the time of writing , 214 domain names have been expired .another 121 names expire within the week , given that the owners miss to renew their contracts .the most frequent top level domains are ` .com ` ( 27.9% ) , ` .ru ` ( 21.5% ) , and ` .net ` ( 13.0% ) , while the most frequent _ expired _ tlds are ` .ru ` ( 20.1% ) , ` .it ` ( 16.4% ) , and ` .com ` ( 9.81% ) .the longest valid domains are registered until 2108 and mostly represent governmental institutions .the longest expired domain has been unregistered for nearly 14 years .with respect to the maintainer groups derived above , a total of 65 groups that reference expired dns names remain .these groups hold 773 ` /24 `networks and 54 ases , and are subject to closer investigation . for each of the 7,907 maintainer groups divided into 7,842 valid groups and 65 with expired dns names we extracted the minimum time since the last change for any of its database objects . note that we filtered out automated bulk updates that affected _ all _ objects of a certain type .fig : lastchange_ripedb shows the distribution of database updates for groups with valid and for groups with expired domain names . while about 10% of the valid groups show changes within two months , dns - expired groups differ strikingly : the 10%-quantile is at nearly 5 months . hence , given these long times without updates , we consider resource groups that exhibit an object update within 6 months to be still maintained and not abandoned .note that we do not assume inactivity in absence of such changes . to confirm inactivity , we correlate the ripe database updates with activities in the global routing system . for that , we analyze all available bgp update messages from the routeviews oregon s feed for the same time frame .this data set comprises 83,255 files with 18.4 billion announcements and 1.04 billion withdraw messages for resources assigned by ripe . 
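The grouping and expiry steps described above can be sketched as follows. This is an illustration under our own assumptions, not the original measurement code: it expects a maintainer-to-domain mapping prepared beforehand, relies on the system `whois` client being available, and uses a best-effort list of expiry field labels, since the label differs per registry.

```python
import re
import subprocess
from collections import defaultdict
from datetime import datetime

def merge_single_domain_groups(groups):
    """groups: maintainer id -> set of DNS names referenced by its objects.
    Drops groups without exactly one DNS name (steps (b) and (c) above) and
    merges the remaining groups that share the same name."""
    merged = defaultdict(set)
    for mntner, domains in groups.items():
        if len(domains) == 1:
            merged[next(iter(domains))].add(mntner)
    return merged

# Expiry labels differ per registry; this list is best-effort only.
EXPIRY_RE = re.compile(
    r"^\s*(?:expiry date|expiration date|registry expiry date|paid-till)\s*:\s*(\S+)",
    re.IGNORECASE | re.MULTILINE)

def domain_expiry(domain, timeout=15):
    """Return the expiry date reported by the system `whois` client, if any."""
    out = subprocess.run(["whois", domain], capture_output=True,
                         text=True, timeout=timeout).stdout
    match = EXPIRY_RE.search(out)
    if not match:
        return None
    token = match.group(1)[:10].replace(".", "-")   # e.g. 2014-07-09 or 2014.07.09
    try:
        return datetime.strptime(token, "%Y-%m-%d")
    except ValueError:
        return None

# Example (hypothetical input): groups worth a closer look are those whose
# single referenced domain has already expired.
# expired = {d: m for d, m in merge_single_domain_groups(groups).items()
#            if (exp := domain_expiry(d)) and exp < datetime.now()}
```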
given this data , we are able to extract two indicators : ( 1 ) the time since an _ ip prefix _ was last visible from the routeviews monitor , and ( 2 ) the time since the last deployment of a ripe - registered _ as number _ by looking at as path attributes .fig : lastchange_bgp shows the distribution of last activity in bgp for any internet resource in our maintainer groups .nearly 90% of resources in valid groups are visible in bgp at the moment .surprisingly , most of the remaining groups did not show any activity at all during the last 2.5 years .about 75% of the dns - expired resources are present in today s routing table and are thus still actively used .the remaining resources did show some activity in the past ( 10% ) or were never observed in bgp during our analysis period ( 15% ) .these findings confirm our assumption that inactivity in the ripe database does not necessarily imply operational shutdown . while up to 85% of the expired resources were seen in bgp within the last 2.5 years , fig: lastchange_ripedb indicates that not more than 55% of the expired resources received an update in the ripe database .we further learn that some expired resources did show bgp activity in the past , and do not show any activity today .note that we disregard resources with recent bgp activity .these resources could potentially be hijacked already ; however , attacks that started before our analysis are beyond the scope of our approach .so far , we learned that 65 maintainer groups with a total of 773 ` /24 ` networks and 54 ases reference expired dns names .our activity measures further indicate that valid groups yield higher activity than expired groups . by combining these measures , we are able to infer resources that are inactive from both an administrative and an operational point of view .fig : lastchange_all shows the time since the latest change by any of these measures , i.e. , the minimum value of both measures .this combined activity measure clearly splits the 65 expired maintainer groups into two disjoint sets : 52 cases were active within the last 3 months , while 13 cases did not show any activity for more than 6 months .we consider these remaining 13 cases to be effectively abandoned .these resource groups represent a total number of 15 ` inetnum ` objects ( with an equivalent of 73 ` /24 ` networks ) and 7 ` aut - num ` ( i.e. , as number ) objects .now that we have identified vulnerable resources , we feel obliged to protect these resources .since any attacker could repeat our analysis , we are going to contact endangered resource holders before publishing our findings .although communication via e - mail is futile due to expired domains , we can fall back on telephone numbers provided in the ripe database to reach out for the operators .for the problem of abandoned internet resources , one might argue that the threat is not caused by a technical but a social problem because operators agree to their peering relations based on a weak authentication scheme .this scheme can be replaced by stronger verification the required data already exists .rirs have contracts with the owners of delegated resources and thus are aware of more reliable contact points ( e.g. , telephone numbers ) .however , the current situation shows that we need mechanisms , tools , and procedures which are not tedious for operators but allow for easy resource verification . 
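The combined administrative/operational measure reduces to taking the most recent of the two timestamps per group and applying the six-month threshold. A compact sketch, with field names of our own choosing:

```python
from datetime import datetime, timezone

SIX_MONTHS_DAYS = 183   # roughly the six-month threshold used above

def days_since_last_activity(last_db_change, last_bgp_seen, now=None):
    """Combined measure: days since the most recent administrative (RIPE DB)
    or operational (BGP) sign of life. Timestamps are timezone-aware
    datetimes; last_bgp_seen may be None for resources never seen in BGP."""
    now = now or datetime.now(timezone.utc)
    latest = last_db_change
    if last_bgp_seen is not None and last_bgp_seen > latest:
        latest = last_bgp_seen
    return (now - latest).days

def classify(group):
    """group: mapping with 'domain_expired' (bool), 'last_db_change' and
    'last_bgp_seen' (datetimes). The field names are ours, not the paper's."""
    idle = days_since_last_activity(group["last_db_change"], group["last_bgp_seen"])
    if not group["domain_expired"]:
        return "maintained"
    if idle > SIX_MONTHS_DAYS:
        return "abandoned"            # candidate for stealthy abuse
    return "expired-but-active"       # still in use; disruption potential only
```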
our approach to identify abandoned resources can be easily extended to continuously monitor resources of all rirs .this would allow us to warn network operators about potential risks .finding scalable approaches to implement early warning and prevention in real - time , though , is an open research issue .current research is particularly focused on the detection of bgp hijacking attacks .proposed mitigation techniques look on the control plane , the data plane , or both ._ control plane _monitoring is used to identify anomalies in bgp routing tables to infer attacks .such approaches are prone to false positives due to legitimate causes for anomalies .techniques based on _ data plane _ measurements account for changes of the router topology , or of hosts in supposedly hijacked networks .these approaches rely on measurements carried out before and during an attack . beyond that, studies on the malicious intent behind hijacking attacks exist .all detection approaches require the observation of suspicious routing changes .attacks based on our attacker model take place outside the routing system , and thus do not lead to noticeable routing changes apart from a supposedly legitimized organisation starting to reuse its internet resources . hence , current detection systems are incapable to deal with this kind of attack . the dns has been widely studied in the context of malicious network activities , mainly concerning spammers or fraud websites .proactive blacklisting of domain names does not help in our scenario as the threat is effective on the routing layer .identifying orphaned dns servers is also out of scope of this paper as the attacker does not leverage the dns server but the expiring domain . despite its effectiveness, we consider our approach to detect and monitor abandoned resources as outlined above an intermediate solution only .in fact , we argue that there is a need for resource ownership validation .there is ongoing effort to increase the deployment of a _ resource public key infrastructure ( rpki ) _ . in its present state, the rpki allows for validation of route origins by using cryptographically secured bindings between as numbers and ip prefixes .this mechanism prevents common hijacking attacks . in terms of hijacking abandoned resources , however , this system is ineffective in its current form since the abandoned origin as is taken over as well , and origin validation performed by bgp routers will indicate a valid bgp update . 
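A simplified route origin validation check illustrates why the RPKI in its current form does not help here: the attacker announces the abandoned prefix with its original, equally abandoned origin AS, so the announcement matches whatever ROA the legitimate holder may once have created. The prefix and AS number below are taken from the example objects above; the ROA itself is hypothetical, and the function is a sketch of RFC 6811 semantics rather than a validator implementation.

```python
from ipaddress import ip_network

def rov_state(prefix, origin_as, roas):
    """Simplified route origin validation (RFC 6811 semantics).
    roas: iterable of (roa_prefix, max_length, asn) tuples."""
    announced = ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa = ip_network(roa_prefix)
        if announced.version == roa.version and announced.subnet_of(roa):
            covered = True
            if announced.prefixlen <= max_length and asn == origin_as:
                return "valid"
    return "invalid" if covered else "not-found"

# Hypothetical ROA for the example resources above. The attacker re-announces
# the abandoned prefix with its original origin AS, so the route validates:
roas = [("194.28.196.0/22", 24, 51016)]
print(rov_state("194.28.196.0/22", 51016, roas))    # -> valid
```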
even though the rpki itself can be misused , at the moment it represents the only mechanism for proofing securely ownership of internet resources .we merely lack a clear procedure in the context of abandoned internet resources .one approach could be the following operational rule : a peering request is only established when resource objects of the requesting peer exist in the rpki .recent time stamps for these objects indicate that the requesting peer has control over the resources as only authorized users can create such objects .such a scheme seems feasible from the operational perspective and might even increase the incentives to deploy rpki .rpki is part of _ bgpsec _ , an even larger effort to secure bgp .this extension to the protocol remedies the risk of hijacking abandoned resources due to its path validation capabilities : in our model , an attacker can not provide valid cryptographic keys to sign update messages as specified by bgpsec .however , the development of bgpsec is at an early stage , and the benefit compared to pure origin validation is questionable in particular in sparse deployment scenarios . future research should be carried out on enabling internet service providers to validate resource ownership of customers .we see the potential of such a system not only in preventing attackers from hijacking abandoned internet resources .it would also establish trust in customer - provider and peer - to - peer relationships , as well as in resource transfers issued by rirs or lirs .motivated by a real - world case study , we introduced a generalized attacker model that is aimed on the hijacking of abandoned internet resources .we showed that such an attack is feasible with little effort , and effectively hides the attacker s identity by acting on behalf of a victim . by studying orthogonal data sources over a period of more than 30 months, we could give evidence of a high risk potential of such attacks . only in the european rir database, we found 214 expired domain names that control a total of 773 ` /24 ` networks and 54 ases , all of which can be easily hijacked .about 90% of these resources are still in use , which enables an attacker to disrupt operational networks .the remaining 10% of the resources are fully abandoned , and ready to be stealthily abused .our findings led us to the conclusion that state - of - the - art systems are limited to deal with this kind of attack .more importantly , we argued that there is a need for _ resource origin validation_. such a framework would not only prevent attacks , but could also strengthen today s internet eco - system by establishing trust in resource ownership . in this paper , we sketched a new attack vector .up until now , it is unclear how common such attacks are ; our findings thus might trigger new malicious activities .however , we also showed that this attack is already known to attackers , and we sketched countermeasures to mitigate this concern .in addition , we contact the holders of vulnerable resources before publication of our findings .this work has been supported by the german federal ministry of education and research ( bmbf ) under support code 01by1203c , project _ peeroskop _ , and by the european commission under the fp7 project _eins _ , grant number 288021 .
the vulnerability of the internet has been demonstrated by prominent ip prefix hijacking events . major outages such as the china telecom incident in 2010 stimulate speculations about malicious intentions behind such anomalies . surprisingly , almost all discussions in the current literature assume that hijacking incidents are enabled by the lack of security mechanisms in the inter - domain routing protocol bgp . in this paper , we discuss an attacker model that accounts for the hijacking of network ownership information stored in regional internet registry ( rir ) databases . we show that such threats emerge from abandoned internet resources ( e.g. , ip address blocks , as numbers ) . when dns names expire , attackers gain the opportunity to take resource ownership by re - registering domain names that are referenced by corresponding rir database objects . we argue that this kind of attack is more attractive than conventional hijacking , since the attacker can act in full anonymity on behalf of a victim . despite corresponding incidents have been observed in the past , current detection techniques are not qualified to deal with these attacks . we show that they are feasible with very little effort , and analyze the risk potential of abandoned internet resources for the european service region : our findings reveal that currently 73 ` /24 ` ip prefixes and 7 ases are vulnerable to be stealthily abused . we discuss countermeasures and outline research directions towards preventive solutions .
one of the recent strategies to deal with interference is interference alignment . in a multiuser channel, the interference alignment method puts aside a fraction of the available dimension at each receiver for the interference and then adjusts the signaling scheme such that all interfering signals are squeezed in the interference subspace .the remaining dimensions are dedicated for communicating the desired signal , keeping it free from interference .cadambe and jafar , proposed the linear interference alignment ( lia ) scheme for ic and proved that this method is capable of reaching optimal degrees of freedom of this network .the optimal degrees of freedom for a user ic is obtained in the same paper to be .the proposed scheme in is applied over many parallel channels and achieves the optimal degrees of freedom as the signal - to - noise ratio ( snr ) goes to infinity . , proposed the so called ergodic interference alignment scheme to achieve 1/2 interference - free ergodic capacity of ic at any signal - to - noise ratio .this scheme is based on a particular pairing of the channel matrices .the scheme needs roughly the same order of channel extension compared with , to achieve optimum performance .the similar idea of opportunistically pairing two channel instances to cancel interference has been proposed independently by as well .however , ergodic interference alignment scheme considers only an special pairing of the channel matrices and does not discuss the general structure of the paired channels suitable for interference cancelation .this paper addresses the general relationship between the paired channel matrices suitable for canceling interference , assuming linear combining of paired channel output signals .using this general pairing scheme , to align interference at receiver , proposed scheme significantly lowers the required delay for interference to be canceled . from a different standpoint , this paper obtains the necessary and sufficient feasibility conditions on channel structure to achieve total dof of the ic using limited number of channel extension .so far , interference alignment feasibility literature have mainly focused on network configuration , see and references therein . to ease some of interference alignment criteria by using channel structure , investigates degrees of freedom for the partially connected ics where some arbitrary interfering links are assumed disconnected . 
In this channel model, the authors examine how these disconnected links are taken into account when designing the beamforming vectors for interference alignment, and closed-form solutions are obtained for some specific configurations. In contrast, our work evaluates the necessary and sufficient conditions on the channel structure of an IC that make perfect interference alignment possible with a limited number of channel extensions.

Consider the $K$-user IC consisting of $K$ transmitters and $K$ receivers, each equipped with a single antenna, as shown in Fig. [figure:kuser]. Each transmitter wishes to communicate with its corresponding receiver. All transmitters share a common bandwidth and want to achieve the maximum possible sum rate along with reliable communication. The channel output at receiver $k$ over time slot $t$ is characterized by the following input-output relationship:
$$
{\bf y}^{[k]}(t) = {\bf h}^{[k1]}(t)\,{\bf x}_p^{[1]}(t) + {\bf h}^{[k2]}(t)\,{\bf x}_p^{[2]}(t) + \cdots + {\bf h}^{[kK]}(t)\,{\bf x}_p^{[K]}(t) + {\bf z}^{[k]}(t),
$$
where $k$ is the user index, $t$ is the time slot index, ${\bf y}^{[k]}(t)$ is the received signal at receiver $k$, ${\bf h}^{[kj]}(t)$ is the channel coefficient from transmitter $j$ to receiver $k$, ${\bf x}_p^{[j]}(t)$ is the transmitted precoded signal vector of transmitter $j$, which will shortly be defined, and ${\bf z}^{[k]}(t)$ is the additive white Gaussian noise at receiver $k$. The noise terms are all assumed to be drawn from an independent and identically distributed Gaussian random process with zero mean and unit variance. It is assumed that all transmitters are subject to a power constraint. The channel gains are bounded between a positive minimum value and a finite maximum value to avoid degenerate channel conditions (e.g., the case of all channel coefficients being equal, or a channel coefficient being zero or infinite). Assume that the channel knowledge is causal and available globally, i.e., over time slot $t$, every node knows all channel coefficients ${\bf h}^{[kj]}(\tau)$, $\forall j,k \in \{1,\ldots,K\}$, $\tau \in \{1,\ldots,t\}$.

The precoded signal ${\bf x}_p^{[j]}$ is a column vector obtained by coding the transmitted symbols of user $j$ over a block of time slots of the channel (the symbol extension of the channel), as will be explained below; in the sequel, ${\bf h}^{[kj]}$, ${\bf x}_p^{[j]}$ and ${\bf z}^{[k]}$ denote the symbol extensions of the corresponding quantities, so that each extended channel matrix ${\bf h}^{[kj]}$ is diagonal. The symbols of user 1 are collected in the vector ${\bf x}^{[1]}$ and sent along the columns of the precoding matrix ${\bf V}^{[1]}$, so the transmitted signal can be written as
$$
{\bf x}_p^{[1]} = {\bf V}^{[1]} {\bf x}^{[1]},
$$
where ${\bf x}^{[1]}$ is the symbol vector of user 1 and ${\bf V}^{[1]}$ is the precoding matrix having the beamforming vectors of user 1 as its columns. In a similar way, ${\bf x}^{[2]}$ and ${\bf x}^{[3]}$ are encoded by transmitters 2 and 3, respectively, and sent over the channel as
$$
{\bf x}_p^{[2]} = {\bf V}^{[2]} {\bf x}^{[2]}, \qquad {\bf x}_p^{[3]} = {\bf V}^{[3]} {\bf x}^{[3]}.
$$
The received signal at receiver $k$ can then be evaluated to be
$$
{\bf y}^{[k]} = {\bf h}^{[k1]}{\bf V}^{[1]}{\bf x}^{[1]} + {\bf h}^{[k2]}{\bf V}^{[2]}{\bf x}^{[2]} + {\bf h}^{[k3]}{\bf V}^{[3]}{\bf x}^{[3]} + {\bf z}^{[k]}.
$$
Receiver $i$ cancels the interference by zero-forcing all ${\bf h}^{[ij]}{\bf V}^{[j]}$, $j \neq i$. For receiver $i$ to decode its intended symbols, the dimension of the interference signal should not exceed the number of dimensions left over after reserving those needed for the desired signal. This can be realized by perfectly aligning the received interference from transmitters 2 and 3 at receiver 1, i.e.,
$$
\textrm{span}\left({\bf h}^{[12]}{\bf V}^{[2]}\right) = \textrm{span}\left({\bf h}^{[13]}{\bf V}^{[3]}\right), \qquad \text{(se1)}
$$
where $\textrm{span}(\cdot)$ denotes the column space of a matrix. At the same time, receiver 2 zero-forces the interference it receives from transmitters 1 and 3.
To achieve the required number of interference-free dimensions at receiver 2, we will similarly have
$$
\textrm{span}\left({\bf h}^{[21]}{\bf V}^{[1]}\right) = \textrm{span}\left({\bf h}^{[23]}{\bf V}^{[3]}\right). \qquad \text{(se2)}
$$
In a similar way, ${\bf V}^{[1]}$ and ${\bf V}^{[2]}$ should be designed so as to satisfy the following condition at receiver 3:
$$
\textrm{span}\left({\bf h}^{[31]}{\bf V}^{[1]}\right) = \textrm{span}\left({\bf h}^{[32]}{\bf V}^{[2]}\right). \qquad \text{(se3)}
$$
Since the channel coefficients are drawn from a continuous distribution, the diagonal extended channel matrices ${\bf h}^{[kj]}$ and their inverses are full rank almost surely. Using this fact, (se1)-(se3) imply that
$$
\textrm{span}\left({\bf V}^{[1]}\right) = \textrm{span}\left({\bf T}{\bf V}^{[1]}\right), \qquad \text{(cae1)}
$$
where
$$
{\bf T} = ({\bf h}^{[13]})^{-1}{\bf h}^{[23]}({\bf h}^{[21]})^{-1}{\bf h}^{[12]}({\bf h}^{[32]})^{-1}{\bf h}^{[31]}. \qquad \text{(tm)}
$$
If ${\bf V}^{[1]}$ is chosen to satisfy (cae1), then ${\bf V}^{[2]}$ and ${\bf V}^{[3]}$ follow from the alignment conditions as
$$
{\bf V}^{[2]} = ({\bf h}^{[32]})^{-1}{\bf h}^{[31]}{\bf V}^{[1]}, \qquad \text{(cae2)}
$$
$$
{\bf V}^{[3]} = ({\bf h}^{[23]})^{-1}{\bf h}^{[21]}{\bf V}^{[1]}. \qquad \text{(cae3)}
$$
Since all channel matrices are diagonal, the set of eigenvectors of every channel matrix, of its inverse, and of any product of such matrices is identical to the set of column vectors of the identity matrix, i.e., vectors of the form $[0,\ldots,0,1,0,\ldots,0]^T$. If ${\bf V}^{[1]}$ is built from such unit column vectors, (se1)-(se3) imply that
$$
\textrm{span}\left({\bf h}^{[ii]}{\bf V}^{[i]}\right) \subseteq \textrm{span}\left({\bf h}^{[ij]}{\bf V}^{[j]}\right), \quad \forall i,j \in \{1,2,3\}.
$$
Thus, at receiver 1, the desired signal ${\bf h}^{[11]}{\bf V}^{[1]}{\bf x}^{[1]}$ is not linearly independent of the interference, and hence receiver 1 cannot fully decode ${\bf x}^{[1]}$ solely by zero-forcing the interference signal. Therefore, if the channel coefficients are completely random and generic, we cannot obtain 3/2 degrees of freedom for the three-user single-antenna IC through LIA schemes.

In the previous section, if the objective were only to align interference at two of the receivers, receivers 1 and 2 for instance, it could easily be attained using (se1) and (se2). However, as discussed above, (se3), which expresses the interference alignment criterion at receiver 3, cannot in general be satisfied simultaneously with (se1) and (se2). If, instead, the channel matrices that contribute to the interference at receiver 3 happened to be of a form that already satisfies (se3), interference alignment would be accomplished; one can simply wait for this specific form of the channel to occur. The question we intend to answer in the following is: what is the necessary and sufficient condition on the channel structure that makes perfect interference alignment feasible with finite channel extension? The following theorem summarizes the main result of this paper.

Theorem [3usertheo]: In a three-user IC, the necessary and sufficient condition for perfect interference alignment to be feasible with finite channel extension is the following structure on the channel matrices:
$$
{\bf T} = ({\bf h}^{[13]})^{-1}{\bf h}^{[23]}({\bf h}^{[21]})^{-1}{\bf h}^{[12]}({\bf h}^{[32]})^{-1}{\bf h}^{[31]}
= {\bf P} \left[ \begin{array}{ccc} \tilde{{\bf T}} & 0 & 0 \\ 0 & \tilde{{\bf T}} & 0 \\ 0 & 0 & f(\tilde{{\bf T}}) \end{array} \right] {\bf P}^T, \qquad \text{(caem)}
$$
where ${\bf P}$ is a permutation matrix, $\tilde{{\bf T}}$ is an arbitrary diagonal matrix with nonzero diagonal elements whose dimension is at most half the extension length, and $f$ is a mapping whose domain is an arbitrary diagonal matrix and whose range is a diagonal matrix whose set of diagonal elements is a subset of the diagonal elements of its argument. Theorem [3usertheo] simply states that the matrix ${\bf T}$ has no unique diagonal element.

Lemma [lemma1]: Assuming that ${\bf V}^{[1]}$ is full rank and satisfies (cae1), $\textrm{span}({\bf V}^{[1]})$ is spanned by eigenvectors of ${\bf T}$. Proof: from (cae1) we conclude that there exists a full-rank square matrix ${\bf Z}$ such that
$$
{\bf T}{\bf V}^{[1]} = {\bf V}^{[1]}{\bf Z}.
$$
Assume that ${\bf u}$ is an eigenvector of ${\bf Z}$, i.e., ${\bf Z}{\bf u} = \lambda{\bf u}$, where $\lambda$ is its corresponding eigenvalue; then ${\bf V}^{[1]}{\bf u} \neq 0$ and ${\bf T}{\bf V}^{[1]}{\bf u} = \lambda\,{\bf V}^{[1]}{\bf u}$, so ${\bf V}^{[1]}{\bf u}$ is an eigenvector of ${\bf T}$ which lies in $\textrm{span}({\bf V}^{[1]})$.
} \right ) ] has dimension , it should have basis vectors of the form , where at least of s are nonzero .let s call vectors with this form as non vectors . since of s eigenvectors lie in } \right ) ] is a matrix consisted of non eigenvectors of as its columns , it is concluded that } \right ) \in \textrm{span } \left ( { \bf s } \right ) ] because } \right ) \in \textrm{span } \left ( { \bf s } \right ) ] implies that } { \bf v}^{[j ] } \right ) , \quad \forall i , j \in\{1 , 2 , 3\}.\end{aligned}\ ] ] thus , at receiver , the total dimension of the desired signal } { \bf v}^{[1]} ] , is less than , and desired signals are not linearly independent from the interference signals , and hence , receiver can not fully decode solely by zeroforcing the interference signal .lemma [ lemma2 ] conlcludes the proof of necessary part of theorem [ 3usertheo ] .the sufficient part is easily proved by noting the fact that the matrix with the form given in ( [ caem ] ) has non eigenvectors with the property that and where is defined as a matrix consisted of s as its columns .every subset of these eigenvectors can be considered as the columns of user transmit precoding matrix } ] and } ] can be considered as the user transmit precoding matrix .} ] can be obtained using ( [ cae2 ] ) and ( [ cae3 ] ) .for the rest of the paper , every matrix which can be written in the form of ( [ caem ] ) , with the same permutation matrix and mapping function , would be stated as .it can easily be seen that if and , so is and .if the condition ( [ caem ] ) is true with the following form { \bf p}^t , \label{caems}\end{aligned}\ ] ] where is an an arbitrary diagonal matrix , } ] can also be designed as any other matrix having the same column vector subspace with ( [ vdesig ] ) .} ] are determined accordingly using ( [ cae2 ] ) and ( [ cae3 ] ) , respectively .assuming channel aiding condition with the form given in ( [ caems ] ) , consider the special case of } = { \bf t}_p , \quad \forall i , j , \quad i \not = j ] is the condition to satisfy the requirement of ergodic interference alignment in , therefore , ergodic interference alignment is the special case of the scheme presented in this paper .channel aiding conditions obtained in this paper can be considered as the perfect interference alignment feasibility conditions on channel structure . statedconditions on channel structure are not exactly feasible , assuming generic channel coefficients .approximation can be used and its effect on residual interference can be analyzed .overall , this paper aims at reducing the required dimensionality and signal to noise ratio for exploiting degrees of freedom benefits of interference alignment schemes .n. lee , d. park , and y. kimi , `` degrees of freedom on the k - user mimo interference channel with constant channel coefficients for downlink communications , '' in _ proc .ieee global commun ._ , honolulu , hawaii , dec .2009 , pp .
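A small numerical experiment can make the construction above concrete. The sketch below is ours, not from the paper: it uses a 4-slot extension, forces the matrix ${\bf T}$ to have two repeated diagonal entries by solving for one cross link, builds the precoder of user 1 from non-unit eigenvectors of ${\bf T}$, and checks that the interference occupies only half the dimensions at every receiver while the desired signal stays separable. The extension length and the target eigenvalues are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T_EXT = 4                                   # symbol-extension length (our choice)

def rand_diag():
    """A random T_EXT x T_EXT diagonal channel matrix with bounded entries."""
    return np.diag(rng.uniform(0.5, 2.0, T_EXT))

# Direct link of user 1 and five of the six cross links are generic.
H11 = rand_diag()
H12, H13, H21, H23, H31 = (rand_diag() for _ in range(5))

# Impose the aiding structure: choose H32 so that
# T = (H13)^-1 H23 (H21)^-1 H12 (H32)^-1 H31 has no unique diagonal entry.
t_diag = np.array([1.3, 1.3, 0.7, 0.7])     # two eigenvalues, each repeated
H32 = np.diag(np.diag(H23 @ H12 @ H31) / (np.diag(H13 @ H21) * t_diag))

T = np.linalg.inv(H13) @ H23 @ np.linalg.inv(H21) @ H12 @ np.linalg.inv(H32) @ H31
assert np.allclose(np.diag(T), t_diag)

# V1: non-unit eigenvectors of T (they exist only because eigenvalues repeat).
V1 = np.array([[1., 0.],
               [1., 0.],
               [0., 1.],
               [0., 1.]])
# V2 and V3 follow from the alignment conditions at receivers 3 and 2.
V2 = np.linalg.inv(H32) @ H31 @ V1
V3 = np.linalg.inv(H23) @ H21 @ V1

rank = np.linalg.matrix_rank
# Interference fills only 2 of the 4 dimensions at every receiver ...
assert rank(np.hstack([H12 @ V2, H13 @ V3])) == 2    # receiver 1
assert rank(np.hstack([H21 @ V1, H23 @ V3])) == 2    # receiver 2
assert rank(np.hstack([H31 @ V1, H32 @ V2])) == 2    # receiver 3
# ... while the desired signal of user 1 stays linearly independent of it,
# so receiver 1 recovers 2 symbols in 4 slots (3/2 DoF in total).
assert rank(np.hstack([H11 @ V1, H12 @ V2, H13 @ V3])) == 4
```

If the aiding structure is removed (a generic draw for H32 instead of the solved one), the first interference assertion fails: the interference at receiver 1 no longer aligns, which is exactly the situation the theorem rules out for generic channels.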
interference alignment(ia ) is mostly achieved by coding interference over multiple dimensions . intuitively , the more interfering signals that need to be aligned , the larger the number of dimensions needed to align them . this dimensionality requirement poses a major challenge for ia in practical systems . this work evaluates the necessary and sufficient conditions on channel structure of a 3 user interference channel(ic ) to make perfect ia feasible within limited number of channel extensions . it is shown that if only one of interfering channel coefficients can be designed to a specific value , interference would be aligned perfectly at all receivers . interference channels , interference alignment , degrees of freedom , generic channel coefficients , vector space .